Development of Autonomous Vehicles

History of Development of Autonomous Robots

2.1.1 Introduction

The human urge to automate our world seems unstoppable. Kassler (2001) claims the transfer of human intelligence into computer-controlled machines such as robots is analogous to the fundamental scientific aim to devise theories in a form that makes them reproducible. In essence, we transfer factual data and procedural theories into a computer so that the machine can carry out humanlike tasks.

The emphasis in the development of autonomous field robots is currently on speed, energy efficiency, sensors for guidance, guidance accuracy, and technologies such as wireless communication and Global Positioning System (GPS) (Grift, 2015).

2.1.2 Autonomous Robots

Both animals and robots manipulate objects in their environment to achieve certain goals. Animals use their senses (e.g. vision, touch, smell) to probe the environment. The resulting information, in many cases also enhanced by the information available from internal states (based on short-term or long-term memory), is processed in the brain, often resulting in an action carried out by the animal, with the use of its limbs.

Similarly, robots gain information on their surroundings using their sensors. The information is processed in the robot’s brain, consisting of one or several processors, resulting in motor signals being sent to the actuators (e.g., motors) of the robot.

A robotic brain cannot operate in isolation. It needs sensory inputs, and it must produce motor output to influence objects in the environment. Thus, the main challenge in contemporary robotics is the development of robotic brains. However, the actual hardware, i.e., sensors, processors, motors etc., is certainly very important as well (Wahde, 2016).

2.1.2.1 History and Development

The Seekur robot was the first commercially available robot with Mobile Detection Assessment and Response System (MDARS) capabilities and was used by airports, utility plants, corrections facilities, and Homeland Security.

The Mars rovers MER-A and MER-B (now known as Spirit Rover and Opportunity Rover) can find the position of the sun and navigate their own routes to destinations by the following cycle (a toy code sketch appears after the list):

  • Mapping the surface with 3D vision
  • Computing safe and unsafe areas on the surface within the field of vision
  • Computing optimal paths across the safe area towards the desired destination
  • Driving along the calculated route
  • Repeating this cycle until either the destination is reached or there is no known path to the destination.
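The following toy program sketches this sense-plan-act cycle on a small two-dimensional grid; the simulated terrain, the sensing radius, and the breadth-first planner are illustrative assumptions, not the rovers' actual flight software.

    from collections import deque

    WORLD = [  # 0 = safe, 1 = unsafe (simulated terrain, unknown to the rover)
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
    ]
    ROWS, COLS = len(WORLD), len(WORLD[0])

    def sense(pos, radius=2):
        """Steps 1-2: map nearby terrain and classify safe/unsafe cells."""
        r, c = pos
        return {(i, j): WORLD[i][j] for i in range(ROWS) for j in range(COLS)
                if abs(i - r) <= radius and abs(j - c) <= radius}

    def plan(known, start, goal):
        """Step 3: breadth-first path over cells not known to be unsafe
        (unknown terrain is optimistically assumed traversable)."""
        frontier, parent = deque([start]), {start: None}
        while frontier:
            cur = frontier.popleft()
            if cur == goal:
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = parent[cur]
                return path[::-1]
            r, c = cur
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS
                        and known.get(nxt, 0) == 0 and nxt not in parent):
                    parent[nxt] = cur
                    frontier.append(nxt)
        return None  # step 5: no known path to the destination

    def navigate(start, goal):
        pos, known = start, {}
        while pos != goal:
            known.update(sense(pos))       # steps 1-2: map and classify
            path = plan(known, pos, goal)  # step 3: plan across safe area
            if path is None or len(path) < 2:
                return None
            pos = path[1]                  # step 4: drive one segment, then re-sense
        return pos

    print(navigate((0, 0), (4, 4)))  # -> (4, 4)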

The planned ESA Rover, ExoMars Rover, is capable of vision-based relative localization and absolute localization to autonomously navigate safe and efficient trajectories to targets by:

  • Reconstructing 3D models of the terrain surrounding the Rover using a pair of stereo cameras
  • Determining safe and unsafe areas of the terrain and the general “difficulty” for the Rover to navigate the terrain
  • Computing efficient paths across the safe area towards the desired destination
  • Driving the Rover along the planned path
  • Building up a navigation map of all previous navigation data.

During the final NASA Sample Return Robot Centennial Challenge in 2016, a rover named Cataglyphis demonstrated fully autonomous navigation, decision-making, and sample detection, retrieval, and return capabilities (Hall, 2016). The rover relied on a fusion of measurements from inertial sensors, wheel encoders, light detection and ranging (LiDAR), and cameras for navigation and mapping, instead of using GPS or magnetometers. During the 2-hour challenge, Cataglyphis traveled over 2.6 km and returned five different samples to its starting position.

The Defense Advanced Research Projects Agency (DARPA) Grand Challenge and the DARPA Urban Challenge have encouraged the development of even more autonomous capabilities for ground vehicles; autonomy has likewise been the goal for aerial robots since 1990 as part of the AUVSI International Aerial Robotics Competition.

Between 2013 and 2017, Total S.A. held the ARGOS Challenge to develop the first autonomous robot for oil and gas production sites. In the Challenge, the robots faced adverse outdoor conditions, such as rain, wind, and extreme temperatures (Total, 2015).

2.1.2.1.1 Delivery Robot

A delivery robot is an autonomous robot used to deliver goods. As of February 2017, the following companies were developing delivery robots (some with pilot deliveries in progress):

  • Starship Technologies
  • Dispatch
  • Marble.

2.1.3 What Are Autonomous Robots?

An autonomous robot is designed and engineered to deal with its environment on its own and work for extended periods of time without human intervention. Autonomous robots often have sophisticated features that help them understand their physical environment and automate parts of their maintenance and direction that used to be done by human hands.

Autonomous robots typically go about their work without any human interaction unless that interaction is necessary as part of their task. Many of these robots have sensors and other functional gear that help them detect obstacles in their way and navigate rooms, hallways, and other environments. Complex delivery robots can even be programmed to use elevators and move throughout a multistory building with complete autonomy. However, autonomous robots still need to be maintained physically (Techopedia, 2019).

The working definition of an autonomous robot is the following: a robot is autonomous if it has the computational resources, in terms of both hardware and software, to estimate how it is physically embedded in the environment and to compute the best possible actions, bounded by some constraints, to perceive and move, if needed, to achieve a set of goals without real-time interference from a human agent. According to this working definition, a robot's ability to estimate its current state (how it is physically embedded in the environment) is an essential component of autonomy. It must have adequate computational resources at its disposal to take an action within bounds, to perceive the environment, and to move, if needed, to achieve a given goal.

An autonomous agent should be able to interact with its environment so that, with its embedded passive and active sensors, it can perceive and operate effectively.

The computational resources required are the following:

  1. Hardware: Every embedded system needs a microprocessor. There is a variety of options for autonomous robots, from microcontrollers such as PICs and AVRs to microprocessors such as Advanced RISC Machines (ARMs). Their selection may depend on the complexity of the autonomy algorithms required.
  2. Software: A well-performing robot needs to perform many functions simultaneously. A customized operating system (OS) might handle this. Most are built on Linux, such as ROS and RoBIOS. They can handle common tasks in robotics, such as machine vision and sensor interpretation.

2.1.4 Types of Autonomous Vehicles

Various kinds of autonomous vehicles (AVs) can operate with varying levels of autonomy. This section is concerned with underwater, ground, and aerial vehicles operating in a fully autonomous (non-tele-operated) mode. It also deals with AVs as a special kind of device, not with full-scale manned vehicles operating unmanned. The AV in question is likely to be designed for autonomous operation rather than being adapted for it, as would be the case for manned vehicles.

It should be noted that issues of control are pervasive regardless of the kind of AV being considered, but there are special considerations in the design and operation of AVs depending on whether the focus is on vehicles underwater, on the ground, or in the air (Meyrowitz, Blidberg, & Michelson, 1996).

2.1.4.1 Autonomous Underwater Vehicles

The development of autonomous underwater vehicle (AUV) systems began many years ago. Most of the original vehicles were extremely simple because of technological limitations, but their potential to serve both military and scientific purposes was apparent. For example, the Navy had requirements for drones, especially devices to meet search and survey needs, support mine countermeasures, and assist in understanding control and hydrodynamics questions. The need to understand the physics of the ocean enticed some of the ocean science community to begin work to develop sophisticated vehicle systems for data gathering; the oil and gas industry also had special interests in the use of AUV technology for the inspection of underwater structures and pipelines.

During the past two decades, the pace of AUV development has increased substantially. A number of recent efforts have been undertaken to determine, in a comprehensive way, the appropriate role for this technology in different application areas. In the United States, the academic groups addressing AUV technology include the University of Hawaii, Scripps, Stanford, University of California at Santa Barbara, FAU, Texas A&M, Massachusetts Institute of Technology, Woods Hole Oceanographic Institution (WHOI), the Naval Postgraduate School (NPS), and a number of other universities. Two key issues have emerged: energy systems and high-level control. Without sufficient energy, nothing can be accomplished, and with communications minimal or possibly suspended, the demand for onboard decision-making becomes critical.

Industry plays an active role in planning the use of AUV technology, with applications being considered for deep ocean exploration, polar ocean exploration (the ice cover makes the use of standard technology impossible), and for the military or hazardous environments (e.g., areas of high-level chemical or radiation hazards). Furthermore, industry and government agencies alike may find AUVs are an attractive option for understanding and monitoring the health of the environment. Remote sensing from space-based satellites and from airplanes goes only so far in understanding the impact of the ocean and its estuaries on our environment.

As evidenced by the increasing attendance at the annual IEEE AUV Symposia and the biennial International Symposia on Unmanned Untethered Submersible Technology (ISUUST), there is a tremendous and growing international interest in AUV technology. Further, it is clear from the presented papers that the maturity of AUV technology is mostly in the hardware domain. It is now a nearly routine undertaking to design a hull structure to some set of operational parameters, construct the hull, populate it with sensors, effectors, computers, and batteries, derive a control algorithm, implement and install that algorithm on board, and effect a self-controlling vehicle. The most constraining physical parameter is the amount of energy that can be carried on board. New energy systems are being developed, however, which will shortly reduce the effect of this constraint.

One example of an experimental underwater vehicle is EAVE III, developed by the Marine Systems Engineering Laboratory (MSEL) of the Autonomous Underwater Systems Institute (AUSI). EAVE III is a third-generation AUV characterized as an open-space frame test bed with excellent maneuverability, precise control, and an acoustic long baseline navigation system. Experiments include searches for underwater objects, navigation below an oil spill boom, acquisition of video images of undersea objects, and acoustic communication between two EAVE IIIs for the purpose of demonstrating cooperative behavior and control between two autonomous systems.

R-One is an autonomous underwater free swimming robot equipped with a closed cycle diesel engine for long-term survey of mid-ocean ridges. An inertial navigation system (INS) cooperates with a Doppler sonar system to support accurate navigation when the robot swims in the vicinity of the seabed (see Table 2.1) (Meyrowitz, Blidberg, & Michelson, 1996).

Over a dozen different sensor payloads are being designed. These include a variety of water quality, optic, and acoustic sensors. The goal is modularity and rapid reconfiguration. An interface specification facilitates collaboration with other institutions with interests in building payloads for deployment.

The state of AUV software development is less mature. Architectures for vehicle software abound, but a significant factor in the acceptance of autonomous systems technology is the lack of real in-water experience to validate the potential utility of the alternatives. As discussed below, much work remains to be done in a number of enabling technologies, especially understanding the design and implementation of software to support navigation, communication, and the AUV’s response to changing conditions in its internal state and in its environment (Meyrowitz, Blidberg, & Michelson, 1996).

TABLE 2.1

Data from the R-One Underwater Robot (Meyrowitz, Blidberg, & Michelson, 1996)

R-One Underwater Robot (Tamaki Ura)
  Species: Cruising type
  Mission: Long-time diving survey
  Launching date: 11/1995
  Dimension (L × diameter): 8.27 × 1.15 m
  Mass: 4,530 kg
  Design depth: 400 m
  CPUs: 2 × PEP-9000 VM40 (MC68040, 25 MHz)
  Main thrusters: 1.5 kW, 280 V DC
  Vertical thrusters: 2 × 0.75 kW, 280 V DC
  Actuators: 3 × 0.17 kW DC motor
  Energy source: Closed cycle diesel engine system, 5 kW, DC 280 V, 60 kWh, 1,900 kg
  Navigation: INS with Doppler sonar
  Sensors: Depth gauge, 2 × bottom profiler, transponder link, radio link, CTDO, TV camera
  Wet payload space: 600 L

2.1.4.1.1 Control

The purpose of an AUV’s control software is the same as that of any other AV: to allow the vehicle to sense, move about in, and interact with its world, to survive, and to carry out useful missions for its users. The controller can be thought of as a resource manager for the AUV, managing its sensor and effector use, its power consumption, its location, and its time to carry out missions for its users.

A basic function of an AUV’s control software is managing its sensors. This includes determining what to “look” at, as well as deciding which sensors to have active. It also involves detecting and, if possible, correcting sensor errors, possibly by using redundant sensors or taking faulty sensors off-line. Sensor fusion is often necessary to provide “virtual sensors.” For example, bottom topology may be important to a mission, yet there is no “bottom topology” sensor; rather, this virtual sensor is created by fusing information from (possibly) down-looking sonar, location sensors, motion sensors, etc.
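As a concrete illustration, the sketch below assembles such a virtual "bottom topology" sensor by fusing a navigation fix, a depth-gauge reading, and a down-looking sonar range; the data structures and interfaces are assumptions made for illustration, not an actual AUV API.

    from dataclasses import dataclass

    @dataclass
    class Fix:
        x: float      # vehicle position, m (from the navigation system)
        y: float
        depth: float  # vehicle depth, m (from the depth gauge)

    def bottom_topology(fix: Fix, sonar_range: float) -> tuple:
        """Virtual sensor: fuse navigation, depth, and down-looking sonar
        into one (x, y, seabed_depth) sample of bottom topology."""
        seabed_depth = fix.depth + sonar_range  # sonar range below vehicle
        return (fix.x, fix.y, seabed_depth)

    # Accumulating samples along a transect yields a topology profile:
    samples = [bottom_topology(Fix(0.0, 0.0, 50.0), 12.3),
               bottom_topology(Fix(5.0, 0.0, 50.2), 11.8)]
    print(samples)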

Control also involves managing the AUV’s effectors, including its thrusters or other means of movement. At a low level, the controller must ensure the effectors are operating properly and, if not, that steps are taken to correct or compensate for the problem. The controller may need to create “virtual effectors” by coordinating the activities of several real effectors.

Survival of the vehicle is an obvious responsibility of the vehicle controller. This includes not only internal homeostasis (e.g., ensuring a constant internal temperature or power consumption rate) and recovery from faults but also taking actions to maintain the current status in the world. For example, the AUV may need to maintain its station near a mine or hydrothermal vent in the presence of currents; failure to do so could result in mission failure and loss of the vehicle.

Finally, control software is responsible for providing a usable vehicle with which users can conduct missions. The level of intelligence aboard the vehicle can vary, depending on the vehicle and the users’ needs. For some missions, for example, simple missions or those taking place in relatively static, well-known environments, it may be sufficient to have a relatively “dumb” AUV; the user specifies the mission in detail, and the AUV carries out the instructions to the letter. For other missions, for example, complicated missions, missions taking place in dynamic or uncertain environments, or missions that cannot be preplanned completely, it is advantageous to migrate some or all of the responsibility of planning from the user to the AUV’s controller (Meyrowitz, Blidberg, & Michelson, 1996).

2.1.4.2 Autonomous Ground Vehicles

There are special challenges in building mobile vehicles capable of navigating through unknown, potentially hazardous terrain. This is highly nontrivial even under the assumption of control with a human in the loop. Tele-operated vehicles, e.g., operating under fiber optic tether, have been demonstrated to transit over unmapped natural terrain, but this accomplishment has required extensive research addressing such issues as creating a sense of telepresence through a human-machine interface, high-speed mobility, long-range non-line-of-sight operation, ruggedness, and reliability (Aviles, 1990). If we wish to have a vehicle performing autonomously, a host of new research issues must be addressed to allow perceptual and problem-solving capabilities to reside entirely within the machine (Meyrowitz, Blidberg, & Michelson, 1996).

The autonomous ground vehicle (AGV) must be a highly competent problem solver to operate in natural and remote environments. We may not be able to provide it with accurate models of terrain. If maps are available, they may lack local detail and, in any case, will not represent the changes that can occur in dynamic situations where transient obstacles (such as other mobile vehicles and humans) will be encountered or where activities of the AGV itself might alter its environment during accomplishment of tasks. On the basis of information obtained from its sensors, the AGV must be capable of building its own maps to represent the environment and flexibly support its own reasoning for navigational planning and, if necessary, replanning. Moreover, as the world around the vehicle can change quickly, we want the software which implements control to be computationally efficient and supportive of real-time responses to events (Meyrowitz, Blidberg, & Michelson, 1996).
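As an illustration of such on-board map building, the minimal sketch below accumulates range readings into a sparse occupancy grid keyed by cell coordinates; the sensor model, cell size, and evidence counting are simplifying assumptions made for illustration.

    import math

    GRID, RES = {}, 0.5  # sparse occupancy grid, 0.5 m cells

    def integrate_scan(pose, scan):
        """Update the vehicle's own map from one range scan.
        pose = (x, y, heading_rad); scan = [(bearing_rad, range_m), ...]."""
        x, y, th = pose
        for bearing, rng in scan:
            ox = x + rng * math.cos(th + bearing)  # obstacle position in
            oy = y + rng * math.sin(th + bearing)  # the world frame
            cell = (round(ox / RES), round(oy / RES))
            GRID[cell] = GRID.get(cell, 0) + 1     # evidence of an obstacle

    integrate_scan((0.0, 0.0, 0.0), [(0.0, 2.0), (0.1, 2.1)])
    print(GRID)  # cells ahead of the vehicle now hold obstacle evidence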

2.1.4.2.1 Progress Towards Autonomy

The wheeled robot SHAKEY (Meystel, 1991) is often cited as an early demonstration of automating perception and path planning. Through a combination of onboard computers and a radio link to larger computers elsewhere, SHAKEY used a scanning camera to obtain a wide field of view and attempted to keep track of its wheel rotation to support the calculation of its position on its internal map. The experiments with SHAKEY highlighted the need for research into more powerful computer vision, the integration of information from multiple sensors (such as visual and mechanical), and automated planning. SHAKEY also provided an early example of hierarchical control, an important architectural principle in the design of robotic software (Meyrowitz, Blidberg, & Michelson, 1996).

By the early 1980s, the Stanford Cart and the CMU Rover represented state-of-the-art robotic mobility. The Stanford Cart was basically a camera platform on wheels; images broadcast from its on-board TV system provided knowledge of its surroundings as it moved through cluttered spaces. That movement was slow (1 m every 10-15 min) and not continuous. The Cart would stop after each meter to take and review new pictures and to plan a new path before moving on.

The CMU Rover was a more capable device designed to support a broader range of experiments in perception and control. Rapid processing of sensed data was facilitated by a dozen onboard processors and by a connection to a large remote computer; an omnidirectional steering system provided maximum mechanical flexibility. The cylindrical Rover, 1 m tall and a half-meter in diameter, was designed to achieve motion at ten times the speed of the Stanford Cart. Hierarchical control was also used, with three processing levels focused on planning, plan execution, and direction of actuators and sensors (Meyrowitz, Blidberg, & Michelson, 1996).

Even lacking planning capabilities, a robot fitted with a variety of sensors and the ability to wander about can serve as a sentry. This has been shown in the Robart-I and Robart-II mobile robots built at the Naval Postgraduate School and at the University of Pennsylvania’s Moore School of Engineering. In particular, Robart-I could alter the direction of its motion if an obstacle was sensed; it relied on a combination of ultrasonic waves (to measure forward range), near-infrared proximity detectors, tactile feelers, and bumper switches. Robart-II, in keeping with its sentry mission, employed a microwave motion detector, a smoke detector, infrared sensors, an ambient temperature sensor, and physical contact sensors. Its sentry duties were supported by an ability to move at a fixed sensed distance from nearby walls and to recognize intersections by the disappearance of those walls (Meyrowitz, Blidberg, & Michelson, 1996).

2.1.4.3 Autonomous Air Vehicles

To date, there are few fully autonomous unmanned aerial vehicles (UAVs), much less ones slated for service. A notable exception is the cruise missile, which can navigate between points using environmental cues with augmentation from other sources such as GPS. Cruise missiles exhibit moderate intelligence; however, a greater degree of machine intelligence (MI) can be envisioned in which the air vehicle interacts with its environment to modify its tactics to achieve a goal. In Japan, Sugeno has demonstrated limited use of fuzzy logic in tele-operated UAVs (Sugeno, Griffin, & Bastian, 1993). On a larger scale, but in simulation only, Gilmore, Roth, and du Fossat have demonstrated self-actuating behaviors and postulated methods for interactions among several fully autonomous UAVs operating in concert (Gilmore, Roth, & du Fossat, 1990).

The Association for Unmanned Vehicle Systems (AUVS) has sponsored a unique competition for universities to demonstrate a fully autonomous flying robot capable of navigating in a semi-structured environment, maintaining stability, searching for objects on the ground, and manipulating them once found. Conceived in 1990, the International Aerial Robotics Competition has grown in size and stature annually, attracting student teams from Europe, Canada, Asia, and the United States. The “aerial robotics” event brings academia, industry, and government together under the common goal of creating, on a small scale, some of the world’s most advanced and robust autonomous air vehicles.

During the first year of the competition, most teams were challenged by stable autonomous flight. By the spring of 1993, teams had progressed to the point where autonomous takeoff, navigation-driven flight, hover, and landing were possible. Vehicles could also locate and manipulate (capture) specific objects on the ground. The goal of the competition is to demonstrate a higher level of reasoning in the autonomous behavior of a UAV than is currently being pursued by the governments of the world. For this reason, the requirements of the competition are rigorous and nontrivial (Meyrowitz, Blidberg, & Michelson, 1996).

2.1.5 Unmanned Aerial Vehicle (UAV) Technology

An unmanned aerial vehicle (UAV) is an aircraft without a human pilot on board. The vehicle is controlled either autonomously by attached microprocessors or telemetrically by an operator on the ground. UAVs can be used to execute observation or detection missions through automatic or remote control. They are mainly used in mapping applications, environmental change monitoring, disaster prevention and response, resource exploration, etc. Compared to other flying vehicles and satellite remote sensing technology, UAVs have two advantages when capturing aerial photographs: low cost and high mobility. However, they face many environmental restrictions on their use due to low flight stability. Therefore, how to use UAVs in different scenarios so that spatial information for qualitative and quantitative analysis can be reliably processed and produced is an important issue impacting their application (Liu et al., 2014).

2.1.5.1 A Typical UAV

There is a wide variety of UAV shapes, mechanisms, configurations, and characteristics. Since UAVs are usually developed for specific purposes, their hardware and software design can vary depending on task requirements. The following sections summarize the system design, implementation, and software of a typical present-day UAV (Liu et al., 2014).

2.1.5.1.1 System Design

The system design of a typical UAV includes the following:

  1. Frame structure
  2. Electromechanics
  3. Flight controller
  4. Telemetry control system.

1. Frame structure

The frame structure is the shape of the aircraft. It is usually designed according to an aircraft’s dynamic lifting method. For instance, fixed-wing aircraft (e.g., gliders) are able to fly using wings that generate lift via forward airspeed and wing shape. Another example is a rotary-wing aircraft (e.g., helicopter, quadcopter), which uses spinning rotors with aerofoil section blades to provide lift. The International Civil Aviation Organization (ICAO) defines a rotary-wing aircraft as “supported in flight by the reactions of the air on one or more rotors” (2009). Rotary-wing aircraft generally require one or more rotors to provide lift throughout the entire flight.

2. Electromechanics

Electromechanical components of a typical UAV include the following: flight controller with multiple sensors (including GPS, gyroscope, barometer, and accelerometer), motors, propellers, speed controllers, and batteries (see Figure 2.1 for an example of a hexacopter). Different motor speeds and propellers provide different performance. For example, the combination of high-speed motors and short propellers brings more agility and mobility for the aircraft but lower efficiency and shorter battery life.

3. Flight controller

A flight controller is a microprocessor on the aircraft that manipulates the power output of each motor to stabilize flight and respond to operator orders. There are many control algorithms, including variable pitch and servo thrust vectoring models. Variable pitch models usually apply the cyclic differentially to non-coaxial propellers, allowing agile control and providing the potential to replace individual electric motors with belt-driven props hooked to one central motor (Cutler & How, 2012). Servo thrust vectoring uses differential thrust, as well as at least one motor mounted on a servo, which is free to change its orientation. This kind of algorithm is often used in bicopters and tricopters.

FIGURE 2.1 Hardware assembly of a hexacopter (Liu et al., 2014).

4. Telemetry control system

Common telemetry control systems use radio frequencies in various bands, such as FM, Wi-Fi, and microwave. The first general-use radio control systems in UAVs used single-channel analog equipment, and this allowed simple on-and-off switch control. More recent systems use pulse-code modulation (PCM) features to provide a computerized digital stream signal to the receiver, instead of analog-type pulse modulation.
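To make the contrast concrete, the toy encoder below packs eight channel values into a framed digital message of the kind a PCM-style link might carry; the frame layout (sync byte, 16-bit channels, additive checksum) is invented for illustration and does not correspond to any particular radio protocol.

    import struct

    def encode_frame(channels):
        """Pack eight channel values (0-2047) into a digital frame with a
        checksum. Purely illustrative; real PCM radio protocols differ."""
        payload = struct.pack(">8H", *channels)       # 8 channels, big-endian
        checksum = sum(payload) & 0xFF                # simple additive check
        return b"\x55" + payload + bytes([checksum])  # sync byte + data + check

    frame = encode_frame([1024, 1500, 900, 1024, 0, 0, 0, 2047])
    print(frame.hex())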

2.1.5.1.2 System Implementation

When implementing a typical UAV (again taking a hexacopter as the example), we can divide the process into four main steps: frame assembly, electronics assembly, flight controller tuning, and optional equipment mounting.

The frame includes the body (to mount the flight controller and other electronics), arms (to mount the motors and speed controllers), and landing gear. Common materials for the frame assembly are aluminum and carbon fiber, which are light but have sufficient strength.

Electronic components include dynamic systems (propellers and motors), power connections (batteries and wiring), the flight controller, and telecommunication devices (e.g., radio system) (see Figure 2.2) (Liu et al., 2014).

Once the flight controller is well mounted, the variables must be tuned to adapt the controller to the frame and electronics assembly. PID (proportional-integral-derivative) is a generic control loop feedback mechanism widely used in UAV control systems. The PID controller attempts to minimize error by adjusting the process control inputs.
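A minimal discrete PID step, of the kind tuned for each control axis, might look as follows; the gains shown are placeholders rather than tuned values for any particular airframe.

    class PID:
        """Minimal discrete PID loop of the kind used to tune a flight
        controller axis; gains here are placeholders, not tuned values."""
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral, self.prev_error = 0.0, 0.0

        def step(self, setpoint, measured, dt):
            error = setpoint - measured
            self.integral += error * dt                      # I term memory
            derivative = (error - self.prev_error) / dt      # D term slope
            self.prev_error = error
            return (self.kp * error + self.ki * self.integral
                    + self.kd * derivative)

    roll_pid = PID(kp=4.0, ki=0.5, kd=0.1)
    correction = roll_pid.step(setpoint=0.0, measured=3.2, dt=0.01)  # degrees
    print(correction)  # motor-output adjustment to level the roll axis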

Finally, when the aircraft is ready to fly, various kinds of equipment can be mounted, depending on the task requirements. For example, a laser range finder and GPS unit integrated with a UAV affords the possibility of fetching 3D terrain information for geodesy inspection. UAVs with a digital camera and image telecommunication system allow practitioners to observe objects from high viewpoints and to explore unreachable or dangerous areas (Liu et al., 2014).


FIGURE 2.2 An example of a telecommunication control system (Liu et al., 2014).

2.1.5.2 UAV Control

Four control problems need to be considered when implementing UAV applications:

  1. How many UAVs are required to achieve a task; this can be a single UAV or multiple (two or more).
  2. Whether the application is model-based or model-free, i.e., whether a mathematical dynamic model or control law for the UAV should be derived.
  3. Which of various control goals to pursue (e.g., stabilization/estimation of position or attitude, planning/tracking of path or target, obstacle/collision avoidance, cooperative formation flight, air/ground coordination, surveillance, or combinations thereof).
  4. Whether the device will be fossil fuel- or electric-powered.

As an example, we illustrate a general attitude control architecture for UAVs in Figure 2.3. In the figure, the overall control system is a combination of: the reference position r_r(t); the position controller; the desired dynamics θ_des(t), φ_des(t), ψ_des(t); the rotor speed differences from the nominal values Δω_F, Δω_φ, Δω_θ, Δω_ψ; the motor dynamics; the vertical force F_i and moment M_i generated by the i-th rotor; the rigid body dynamics; the angular rates p(t), q(t), r(t) of the UAV in the body frame; and the actual position feedback r(t) (Liu et al., 2014).


FIGURE 2.3 A general attitude control architecture for UAVs (Liu et al., 2014).
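To make the final stage of this architecture concrete, the sketch below combines a nominal rotor speed with the corrections Δω_F, Δω_φ, Δω_θ, Δω_ψ into individual rotor commands; the sign convention assumes a plus-configuration quadrotor and is purely illustrative, not the control law of Liu et al.

    # Illustrative quadrotor mixer: fold the thrust, roll, pitch, and yaw
    # corrections from the attitude controller into per-rotor speed commands.
    def mix(omega_nominal, d_thrust, d_roll, d_pitch, d_yaw):
        front = omega_nominal + d_thrust - d_pitch - d_yaw
        rear  = omega_nominal + d_thrust + d_pitch - d_yaw
        left  = omega_nominal + d_thrust - d_roll + d_yaw
        right = omega_nominal + d_thrust + d_roll + d_yaw
        return front, rear, left, right

    print(mix(400.0, 5.0, 1.5, -2.0, 0.3))  # rad/s commands for four rotors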

2.1.5.3 Software

Software is important in controlling a UAV to acquire information from an aerial perspective; such software is also known as a ground control station (GCS). A GCS is typically a software application running on a computer on the ground that communicates with a UAV via wireless telemetry. It displays real-time data on the UAV’s performance and position and can serve as a remote cockpit. A GCS can also be used to control a UAV in flight, uploading new task commands and parameter configurations. Monitoring the live video stream is another common function of GCSs (Liu et al., 2014).
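The toy listener below sketches the monitoring side of a GCS; the plain-text UDP packet format is invented for illustration, whereas real telemetry links such as MAVLink use binary framing.

    import socket

    def gcs_listen(port=14550):
        """Toy GCS loop: receive telemetry packets and display vehicle state.
        The "lat,lon,alt,battery" text format is an illustrative assumption."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        while True:
            data, _ = sock.recvfrom(1024)
            lat, lon, alt, batt = (float(v) for v in data.decode().split(","))
            print(f"pos=({lat:.6f}, {lon:.6f})  alt={alt:.1f} m  battery={batt:.0f}%")
            if batt < 20:
                print("WARNING: low battery, consider return-to-home")

    gcs_listen()  # listens until interrupted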

Stroumtsos, Gilbreath, and Przybylski (2013) developed GCS software for military use to eliminate the risk of disorientation and the misreading of numerical data by designing a graphical user interface. In addition, GCSs have been designed by applying other technologies, such as helmet-mounted displays (HMDs). Morphew, Shively, and Casey (2004) showed the application of HMDs to a target search task was more advantageous than using conventional computer monitors and joysticks. For some critical cases, simulation functions are required for GCS software. Reductions of schedule time, risk, and number of required test flights for complex aerospace tasks are a well-recognized benefit of utilizing prior simulation (Johnson & Mishra, 2002). Commercial UAV software is usually used to route craft through waypoints and provide functions such as fail-safes, return-to-home, and route editing. Recently, such software has been released on mobile device platforms, such as tablets and smart phones (Liu et al., 2014).

2.1.5.4 UAV Fields of Application

The market for drones is expected to explode in the next 30 years in the fields of agriculture, energy, public safety/security, e-commerce/delivery, and mobility and transport. It is easy to find examples of the exploitation of drones for these different purposes in the literature. For instance, Lum and colleagues proposed using a drone to acquire multispectral images and the ground difference for precision agriculture (Lum, Mackenzie, Shaw-Feather, Luker, & Dunbabin, 2016). In the field of public safety and security, apart from the countless military applications, new sensors are being adapted for drones, including X-ray cameras, IR cameras, and metal detectors. Applications for e-commerce and delivery are at an early stage of development. The impact of payload weight on battery duration, and consequently on range, remains problematic. However, the delivery of small objects is already a reality. Zipline (2017) describes the transportation of small medicines and blood in Africa using a fixed-wing UAV. In e-commerce, the big technology companies have made some initial proposals, such as Amazon's Prime Air service (Amazon, 2016). The field of mobility and transport is evolving more slowly because of safety constraints and technology limitations, but it is also moving towards possible applications such as air taxis (Stimpson, Cummings, Nneji, & Goodrich, 2017).

UAVs are already being explored as useful tools in multiple civil scenarios. In many cases, UAV-based systems are required for surveillance and reconnaissance, monitoring, mapping and photogrammetry, automatic fault detection, or inventory tasks. This section focuses on inspection missions, for example, of photovoltaic plants, the environment, roads, cell towers, railway lines, mines, and buildings. Drones are starting to be applied in many other inspection scenarios, such as power lines, levees and embankments, confined spaces, ecology, wind turbines, cranes, and real estate. Traditional inspection procedures in most of these cases are costly, time consuming, repetitive, labor-intensive, and technically difficult. The use of UAVs alleviates and improves maintenance and risk prevention processes (Besada et al., 2018).

Matsuoka and colleagues showed that using UAVs to detect different failures of photovoltaic modules is much faster and more effective than traditional methods. The authors measured the deformation of a large-scale solar power plant using images acquired by a nonmetric digital camera on board a micro-UAS (Matsuoka et al., 2012). Arenella and colleagues documented a technique to detect hot spots in photovoltaic panels (defects causing destructive effects) by analyzing sequences of thermal images (Arenella, Greco, Saggese, & Vento, 2017). Some researchers have employed UAVs to inspect a photovoltaic array field, using diverse thermal imaging cameras and a visual camera.

Monitoring environmental gases for risk assessment both indoors (gas leaks, fires, mining applications, etc.) and outdoors (agriculture biomass burning emissions, chemical and biological agent detection studies, etc.) may require long periods of observation and a large number of sensors. UAVs may substantially complement existing ground sensor networks. For this purpose, a UAV has to be equipped with sensors capable of determining volatile chemical concentrations and detecting gas leakages. Kersnovski, Gonzalez, and Morton proposed a UAV with an onboard camera and a carbon dioxide gas sensor capable of performing autonomous gas sensing while simultaneously visually detecting predefined targets placed at locations inside a room. The system transmits the collected data in real time to a GCS for visualization and analysis through a Web interface (Kersnovski, Gonzalez, & Morton, 2017).

Soil pollution monitoring is another application of UAV technology. For example, Capolupo and colleagues proposed a multi-sensor approach to copper detection; the proposed system was able to predict copper accumulation points using a combination of aerial photos taken by drones, micro-rill network modeling, and wetland prediction indices (Capolupo, Pindozzi, Okello, Fiorentino, & Boccia, 2015). UAVs may also be used to inspect contaminated areas, such as fission reactors for leakage detection, storage areas of nuclear sources, or even hazardous scenarios of nuclear disasters. Tang and Shao focused on delivering a system that surveys forests, maps canopy gaps, measures forest canopy height, tracks forest wildfires, and supports intensive forest management (Tang & Shao, 2015). In marine ecology, UAVs may be used, for example, to produce very fine scale maps of fish nursery areas. Some authors have detailed the procedure of aerial photo acquisition (drone and camera settings) and post-processing workflow (i.e., 3D model generation using the structure from motion algorithm and photo-stitching) (Besada et al., 2018).

Road inspection UAV-supported procedures may help to detect early signs of erosion and pavement distress. Branco and Segantine proposed a methodology to automatically obtain information about the conditions of highway asphalt from data collected through remote sensing using a UAV and specific image processing and pattern recognition techniques (Branco & Segantine, 2015).

Railway infrastructure inspection includes camera-based sensing and control methods. In one scenario, the UAV performs infrastructure inspection in close but difficult-to-access areas (such as long bridges or tracks separated from the road by, e.g., a river). A second scenario is oriented to the railway track and records the infrastructure, including tracks, sleepers, points, or cabling. Target detection is carried out using different image descriptors (Speeded-Up Robust Features (SURF), Scale Invariant Feature Transform (SIFT), Features from Accelerated Segment Test (FAST), and Shi-Tomasi), and edge detectors are used for line detection.
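A minimal sketch of such a detection pipeline, using OpenCV's FAST and Shi-Tomasi implementations together with Canny edge detection and a Hough line transform, is shown below; the file name and parameter values are illustrative assumptions.

    import cv2
    import numpy as np

    # "frame.jpg" is a placeholder for one aerial image of the track.
    img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
    assert img is not None, "provide an aerial image to analyze"

    # FAST corner detector (interest points on sleepers, points, cabling).
    fast = cv2.FastFeatureDetector_create(threshold=25)
    keypoints = fast.detect(img, None)

    # Shi-Tomasi "good features to track".
    corners = cv2.goodFeaturesToTrack(img, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)

    # Edge detection followed by a probabilistic Hough transform picks out
    # long straight lines such as rails.
    edges = cv2.Canny(img, 50, 150)
    rails = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)

    print(len(keypoints),
          0 if corners is None else len(corners),
          0 if rails is None else len(rails))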

Power line detection is another area of research. Santos and colleagues presented a vision-based power line detection algorithm and tested it in multiple backgrounds and weather conditions (Santos et al., 2017). In the energy sector, the use of drones for the maintenance of power lines and transmission towers is already widespread. New diagnosis techniques based on drone use are emerging to improve the detection of problems. Electric tower detection, localization, and tracking were studied by Martinez and colleagues; these researchers proposed a combination of classic computer vision and machine learning (ML) techniques (Martinez, Sampedro, Chauhan, & Campoy, 2014). Priest described a cell tower inspection procedure in which an operator using a UAV and a processing device creates a model of the cell site and compares it to models created in subsequent inspections to determine significant differences (Priest, 2017). Finally, 5G advances, secure Internet of Things (IoT), and swarms of UAVs have been combined into an architecture to guarantee service in critical infrastructures (distributed generation plants; energy transmission and distribution networks, such as electricity cables and electrical isolators; and natural gas/liquefied natural gas tanks, pumps, and pipelines) (Zahariadis, Voulkidis, Karkazis, & Trakadas, 2017).

UAVs are also applied to construction management. In one application, the safety inspection of construction sites was addressed by De Melo and colleagues; they used a drone to provide visual assets to verify the safety checklists in two different projects (De Melo, Costa, Alvares, & Irizarry, 2017). Building inspection is another common application; for example, inspecting a rooftop using a UAV to extract information from a damaged area (Besada et al., 2018).

The project iDeepMon (Benecke, 2018) aims at enhancing shaft surveying technologies to create a fully UAV-based automated process, integrated into the overall control process of an autonomous mine. The mining environment poses specific constraints (variable lighting, etc.). Besada and colleagues described hybrid equipment using a helium gas-filled balloon, with remote-controlled quadcopter propellers, powerful LED lighting, rechargeable batteries, remote-controlled cameras, image stabilizers, and radio frequency transmitters for control and image visualization (Besada et al., 2018).

Many drone applications require photogrammetric data capture of complex 3D objects (buildings, bridges, monuments, antennas, etc.). Saleri and colleagues discussed an operational pipeline tested for photogrammetry in an archaeological site. Algorithms such as structure from motion and multi-view stereo image matching facilitate the generation of dense meshed point clouds (Saleri et al., 2013). Cefalu and colleagues detailed an automatic flight mission planning tool; it generates flight lines while aiming at camera configurations that maintain a roughly constant object distance, provide sufficient image overlap, and avoid unnecessary stations, based on a coarse digital surface model and an approximate building outline (Cefalu, Haala, Schmohl, Neumann, & Genz, 2017).
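In the same spirit, the toy planner below generates parallel flight lines over a rectangular area from a camera field of view and a desired side overlap; the geometry and parameter values are illustrative assumptions rather than the cited tool's actual algorithm.

    import math

    def flight_lines(width_m, height_m, altitude_m, fov_deg, side_overlap=0.7):
        """Generate a lawnmower pattern of parallel flight lines whose
        spacing follows from the camera footprint and desired overlap."""
        footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
        spacing = footprint * (1 - side_overlap)  # distance between lines
        n_lines = math.ceil(width_m / spacing) + 1
        lines = []
        for i in range(n_lines):
            x = min(i * spacing, width_m)
            start, end = (x, 0.0), (x, height_m)
            if i % 2:                             # alternate direction
                start, end = end, start
            lines.append((start, end))
        return lines

    for seg in flight_lines(100, 60, altitude_m=40, fov_deg=60):
        print(seg)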

The mission type obviously determines the measurement procedure to be completed (in terms of flight type, sensing payload, and required measurements) (Gonzalez-Jorge, Martinez-Sanchez, Bueno, & Arias, 2017). All these applications need tools to accelerate and partially automate the creation of missions, calculate optimal trajectories, and automatically execute parts of the mission with the least human intervention to achieve cost-effectiveness. Although it is possible to automate the procedure for very specific cases, it is difficult to build a generalizable automated system (Besada et al., 2018).

 