Sensors for Autonomous Vehicles in Infrastructure Inspection Applications
Sensors and Sensing Strategies
Autonomous ground vehicles are important examples of mechatronic systems. Although they show clear promise, they still face numerous challenges in sensing, control, and system integration. This section examines these challenges in the context of off-road autonomous vehicles, automated highway systems, and urban autonomous driving.
4.1.1 Sensors
A sensor detects a quantity in the physical environment and forwards the information for processing. Sensors commonly consist of two components: a sensitive element and a transducer. The sensitive element interacts with the input, and the transducer translates the input into an output signal that can be read by a data acquisition system (Matsson, 2018).
4.1.1.1 Sensor Error
The absolute sensor error is the difference between the sensor output and the true value. The relative sensor error is the absolute error divided by the true value.
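As a small numeric illustration (the readings below are hypothetical), the two error measures can be computed as follows:

```python
# Hypothetical example: a temperature sensor reads 25.3 degC
# when the true temperature is 25.0 degC.
true_value = 25.0
sensor_output = 25.3

absolute_error = sensor_output - true_value   # 0.3 degC
relative_error = absolute_error / true_value  # 0.012, i.e., 1.2%
print(f"absolute: {absolute_error:.2f}, relative: {relative_error:.1%}")
```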
4.1.1.2 Noise
Sensor noise is an unwanted fluctuation in the sensor output signal while the true value is kept constant. The variance of the noise is an important parameter in sensor characterization. White noise is a random signal with equal intensity at all frequencies.
4.1.1.3 Drift
Sensor drift is an unwanted change in sensor output while the true value is kept constant.
4.1.1.4 Resolution
The resolution is the smallest change in the input that the sensor can detect.
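A minimal simulation makes the three characteristics just defined concrete. It assumes white Gaussian noise, a constant drift rate, and uniform quantization at the resolution; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
t = np.arange(0.0, 10.0, 0.01)            # 10 s of samples at 100 Hz
true_value = 1.0                           # constant true input

noise = rng.normal(0.0, 0.05, t.size)      # white noise, standard deviation 0.05
drift = 0.02 * t                           # drift: output creeps up 0.02 units/s
resolution = 0.1                           # smallest change the sensor resolves

raw = true_value + noise + drift
output = np.round(raw / resolution) * resolution   # quantize to the resolution

# The sample variance of the de-drifted signal estimates the noise variance
# (plus a small quantization contribution).
print(np.var(output - drift))
```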
4.1.2 Inertial Sensors
4.1.2.1 Accelerometers
An accelerometer measures its own acceleration relative to an inertial reference frame. The function is comparable to a damped mass on a spring. When the sensor is exposed to an acceleration, the mass will be displaced. The displacement can be measured using the capacitive or piezoresistive effects.
A capacitive accelerometer uses the moving mass as one plate of a capacitor, so the capacitance changes as the mass moves. A piezoresistive accelerometer uses the change in a material’s electrical resistivity when it is deformed (Matsson, 2018).
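For the mass-on-spring model, the spring force balances the inertial force in steady state, so acceleration can be recovered from the measured displacement. A brief sketch, with illustrative values for the spring constant and proof mass:

```python
# Mass-spring accelerometer in steady state: m * a = k * x  =>  a = (k / m) * x
k = 50.0   # spring constant (N/m), illustrative
m = 1e-3   # proof mass (kg), illustrative

def acceleration_from_displacement(x_m: float) -> float:
    """Recover acceleration (m/s^2) from the measured mass displacement (m)."""
    return (k / m) * x_m

print(acceleration_from_displacement(2e-4))  # 0.2 mm displacement -> 10 m/s^2
```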
4.1.2.2 Gyroscopes
A gyroscope measures angular rate relative to an inertial reference frame. Early gyroscopes used a spinning mass supported by gimbals. The conservation of angular momentum keeps the spin axis of the mass fixed when the support is tilted, and the resulting angular difference can be measured (Matsson, 2018).
4.1.2.2.1 Optical Gyroscopes
Optical gyroscopes use the Sagnac effect. If two pulses of light are sent in opposite directions around a stationary circular loop, they travel the same inertial distance and arrive at the end simultaneously. But if the loop is rotating, the light pulse traveling in the same direction as the rotation covers a longer inertial distance and arrives later. Using interferometry, the differential phase shift can be measured and translated into angular velocity. This type of gyroscope is commonly used in aircraft.
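For a fiber-optic gyroscope, the standard Sagnac relation for the phase difference is Δφ = 8πNAΩ/(λc), where N is the number of fiber turns, A the loop area, Ω the rotation rate, and λ the wavelength. A small sketch with illustrative parameters:

```python
import math

c = 2.998e8           # speed of light (m/s)
wavelength = 1.55e-6  # typical telecom wavelength (m)
radius = 0.05         # loop radius (m), illustrative
turns = 1000          # number of fiber turns, illustrative
area = math.pi * radius**2

def sagnac_phase(omega_rad_s: float) -> float:
    """Sagnac phase shift (rad) for rotation rate omega (rad/s)."""
    return 8 * math.pi * turns * area * omega_rad_s / (wavelength * c)

# Earth's rotation rate (~7.29e-5 rad/s) yields a small but measurable shift.
print(sagnac_phase(7.29e-5))
```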
4.1.2.2.2 Vibrating Gyroscopes
Microelectromechanical system (MEMS) gyroscopes are commonly vibrating gyroscopes. This type of gyroscope consists of a vibrating mass mounted on a spring. If the mass oscillates along the x-axis and a rotation about the z-axis is applied, an acceleration along the y-axis is induced. This acceleration is called the Coriolis acceleration and is given by

a_c = 2 v × Ω,

where v is the velocity of the mass, and Ω is the angular rate of rotation.
The angular rate is thus determined from the known velocity of the oscillating mass by measuring the force that induces the Coriolis acceleration.
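As a numeric illustration of this inversion (all device values below are hypothetical):

```python
# Recover angular rate from a vibrating MEMS gyroscope measurement.
# Coriolis acceleration magnitude: a_c = 2 * v * Omega  =>  Omega = a_c / (2 * v)
m = 1e-9           # proof mass (kg), illustrative
v = 0.5            # instantaneous velocity of the oscillating mass (m/s)
f_coriolis = 1e-9  # measured Coriolis force (N)

a_c = f_coriolis / m      # induced acceleration (m/s^2)
omega = a_c / (2.0 * v)   # angular rate (rad/s)
print(omega)              # 1.0 rad/s for these values
```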
4.1.2.3 Rotary Encoders
Rotary encoders can be divided into absolute and incremental encoders. An absolute encoder indicates the angular position of the shaft. The position is given by an encoded disc that rotates together with the shaft; various techniques, for example mechanical or optical, are used to read the disc. An incremental encoder cannot indicate the absolute angular position, but it indicates incremental changes in angular rotation: each increment of rotation produces a pulse in the sensor output (Matsson, 2018).
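For an incremental encoder, pulse counting translates directly into angle and angular rate. A short sketch, assuming an illustrative pulses-per-revolution value:

```python
import math

PPR = 1024  # pulses per revolution, a property of the encoder (illustrative)

def angle_rad(counts: int) -> float:
    """Accumulated rotation angle (rad) from the pulse count."""
    return 2.0 * math.pi * counts / PPR

def angular_velocity(delta_counts: int, delta_t: float) -> float:
    """Mean angular velocity (rad/s) over a sampling interval."""
    return angle_rad(delta_counts) / delta_t

print(angular_velocity(512, 0.1))  # half a revolution in 0.1 s -> ~31.4 rad/s
```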
4.1.3 Absolute Measurements
4.1.3.1 Global Navigation Satellite System (GNSS)
Operational systems with global coverage are the United States’ Global Positioning System (GPS) and Russia’s GLONASS. Several other systems are scheduled to be operational by 2020, for example, Europe’s Galileo and China’s BeiDou-2. Other countries, such as India, Japan, and France, are also developing their own GNSS (Matsson, 2018).
4.1.3.1.1 GPS
GPS is divided into three segments: a space segment, a control segment, and a user segment.
4.1.3.1.2 Space Segment
The space segment originally consisted of 24 satellites divided into six circular orbits, with four satellites in each orbit. Today, there are a total of 31 operational satellites in the GPS constellation. The satellites orbit in medium Earth orbit at an altitude of approximately 20,000 km. The orbits have a 55° inclination from the equator, and the orbital period is about 12 h. The constellation ensures that at least four satellites are visible at any place on Earth at any given time.
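The roughly 12 h period follows from the altitude. A quick consistency check via Kepler's third law, assuming the nominal GPS semi-major axis of about 26,560 km (altitude plus Earth's radius):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter (m^3/s^2)
a = 26_560e3               # GPS semi-major axis (m): ~20,000 km altitude + Earth radius

period = 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)
print(period / 3600.0)     # ~11.97 h, i.e., roughly half a sidereal day
```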
4.1.3.1.3 Control Segment
The control segment is a global network of ground facilities. Its purpose is to control and maintain the system. The control segment consists of monitoring stations, ground antennas, and a master control station. There are 16 monitoring stations and 11 ground antennas spread around the world.
The monitoring stations track the satellites, collect GPS signals, and forward the information to the master control station.
The ground antennas communicate with the satellites via the S-band. They send commands and upload navigation data and program codes.
The master control station is located at Schriever Air Force Base in Colorado, United States. It commands and controls the satellites. It also collects data from the monitoring stations and computes the location of each satellite. The system is monitored to ensure system health and accuracy, and satellites can be repositioned to maintain an optimal constellation.
4.1.3.1.4 User Segment
The user segment consists of the receivers of the GPS signals. The receivers decode the satellite signals and estimate position, velocity, and time.
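Conceptually, the receiver solves a nonlinear least-squares problem: it estimates its position and clock bias from pseudoranges to at least four satellites. The sketch below is a minimal Gauss-Newton illustration with synthetic satellite geometry and a hypothetical solve_position helper, not a production receiver algorithm:

```python
import numpy as np

c = 299_792_458.0  # speed of light (m/s)

def solve_position(sat_pos, pseudoranges, iterations=10):
    """Gauss-Newton estimate of receiver position (m) and clock bias (s)
    from satellite positions (N x 3, m) and pseudoranges (N, m)."""
    x = np.zeros(4)                      # state: [x, y, z, c * clock_bias]
    for _ in range(iterations):
        diff = x[:3] - sat_pos           # satellite-to-receiver vectors
        rho = np.linalg.norm(diff, axis=1)
        residual = pseudoranges - (rho + x[3])
        H = np.hstack([diff / rho[:, None], np.ones((len(rho), 1))])
        dx, *_ = np.linalg.lstsq(H, residual, rcond=None)
        x += dx
    return x[:3], x[3] / c

# Synthetic test: four satellites at GPS-like radii, receiver on Earth's
# surface with a 1 ms clock error (all values made up for illustration).
sats = np.array([[26_560e3, 0, 0], [0, 26_560e3, 0],
                 [0, 0, 26_560e3], [15_334e3, 15_334e3, 15_334e3]])
true_pos = np.array([6_371e3, 0.0, 0.0])
pr = np.linalg.norm(sats - true_pos, axis=1) + c * 1e-3
est_pos, est_bias = solve_position(sats, pr)
print(est_pos, est_bias)   # ~[6371e3, 0, 0] and ~1e-3 s
```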
4.1.3.2 Magnetometers
Magnetometers can measure the local magnetic field using the Hall effect. A Hall-effect magnetometer consists of a thin sheet of semiconducting material. In the absence of a magnetic field, the electrons in the sheet are evenly distributed, and the potential difference across it is zero. When a magnetic field is present, the electrons distribute unevenly, inducing a potential difference. This potential difference is measured and translated into magnetic flux density (Matsson, 2018).
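Under the standard Hall relation V_H = I·B/(n·q·t), the measured voltage maps directly to flux density. A short sketch; the material and geometry values are illustrative:

```python
# Hall effect: V_H = I * B / (n * q * t), solved here for flux density B.
q = 1.602e-19  # elementary charge (C)
n = 1e22       # charge-carrier density (1/m^3), doped semiconductor (illustrative)
t = 1e-5       # sheet thickness (m), illustrative
I = 1e-3       # bias current (A), illustrative

def flux_density(v_hall: float) -> float:
    """Magnetic flux density (T) from the measured Hall voltage (V)."""
    return v_hall * n * q * t / I

print(flux_density(3.1e-6))  # a few microvolts -> ~5e-5 T (an Earth-like field)
```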
4.1.4 Sensing
4.1.4.1 Sensing the Surroundings
In this chapter, we assume the “internal sensing” of a car, i.e., determination of vehicle speed, steering angle, engine and braking torque, etc., is possible, and that the process is similar for all three application domains of concern: off-road autonomous vehicles, automated highway systems, and urban autonomous driving. This section concentrates on external sensing, as this differs in each case.
The sensors used for the comparison and their coverage footprints are given in Figure 4.1a-c (Ozguner & Redmill, 2008).
The set of sensors used on roadway vehicles depends on the infrastructure available. Basic lane detection can be accomplished by vision, assuming clear detection opportunities (Redmill, 1997). However, aids installed on the roadway with different technologies are certainly useful. These include magnetic nails (Zhang, 1997; Tan, Rajamani, & Zhang, 1998; Shladover, 2007) and radar reflective stripes (Redmill & Ozguner, 1999; Farkas, Young, Baertlein, & Ozguner, 2007). Off-road vehicles do not need infrastructure, but they do need more sensor capability, especially ground surface level detection, so as to compensate for terrain features such as bumps and holes (Ozguner & Redmill, 2008).

FIGURE 4.1 Sensor suite and footprints: (a) OSU-ACT, (b) OSU-ION, and (c) OSU Demo’97 cars (Ozguner & Redmill, 2008).
4.1.4.2 Sensor Fusion
There are some distinctions between sensor fusion for highway, urban, and off-road applications (Ozguner & Redmill, 2008): [1]
In general, we can identify two approaches to sensor fusion: a grid, or occupancy map, and track identification.
In a grid map, the sensing architecture/sensor fusion is established by developing a discrete map of the vehicle’s surroundings. All external sensors feed into this map, with sensed obstacles and their associated confidence levels recorded. The map is maintained internally in vehicle-centered world coordinates: it does not rotate with the vehicle, but it does translate with the vehicle’s movement. The sensor fusion algorithm implemented on OSU’s 2005 Defense Advanced Research Projects Agency (DARPA) Grand Challenge vehicle (see Chapter 2) uses such a grid occupancy approach (Redmill, Martin, & Ozguner, 2006).
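A minimal sketch of such a vehicle-centered grid follows. This is an illustrative class, not the OSU implementation; the size, cell dimensions, and confidence handling are assumptions:

```python
import numpy as np

class OccupancyGrid:
    """Vehicle-centered occupancy map that translates with the vehicle
    but never rotates with it (simplified sketch)."""

    def __init__(self, size=200, cell=0.5):
        self.cell = cell                    # cell edge length (m)
        self.grid = np.zeros((size, size))  # obstacle confidence per cell
        self.center = np.zeros(2)           # world position of the grid center

    def translate(self, vehicle_pos):
        """Shift the map by whole cells as the vehicle moves through the world."""
        shift = np.round((vehicle_pos - self.center) / self.cell).astype(int)
        self.grid = np.roll(self.grid, (-shift[0], -shift[1]), axis=(0, 1))
        # Cells wrapped in from the far edge are really unknown territory;
        # a full implementation would clear them here.
        self.center += shift * self.cell

    def record(self, world_xy, confidence):
        """Mark a sensed obstacle (world coordinates) with its confidence."""
        idx = np.round((np.asarray(world_xy) - self.center) / self.cell).astype(int)
        idx += np.array(self.grid.shape) // 2   # grid center = vehicle position
        if np.all((idx >= 0) & (idx < self.grid.shape)):
            i, j = idx
            self.grid[i, j] = max(self.grid[i, j], confidence)
```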
Because traffic situations are highly dynamic, OSU-ACT has moved to an approach in which the sensor fusion algorithm is responsible for clustering and tracking all objects seen by the sensors.
The sensor fusion algorithm first uses information about the position and orientation of the sensors with respect to the vehicle to transform the returned information into a vehicle-centered coordinate system. The primary sensors, the LiDAR (light detection and ranging) suite, provide a cloud of points representing reflections from the surfaces of targets in the world. Once the returns from the LiDARs are in vehicle-centered coordinates, the position and orientation of the vehicle with respect to the world are used to transform the LiDAR returns into world coordinates. The returns are then clustered into groups of points: the clustering algorithm places the laser returns into a disjoint-set data structure using a union-find algorithm, ultimately yielding clusters whose members are no farther than some maximum distance from each other.
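A simplified sketch of this clustering step is shown below. It uses a naive O(n²) pairwise distance check for clarity; the actual system would use a spatially indexed variant, and the distance threshold is an assumed tuning parameter:

```python
import numpy as np

def cluster_points(points, max_dist):
    """Group points (N x 3 array) so that every member of a cluster is within
    max_dist of at least one other member, using a union-find structure."""
    n = len(points)
    parent = list(range(n))

    def find(i):                  # find the root, with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(n):            # naive O(n^2) pairing, fine for a sketch
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= max_dist:
                union(i, j)

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```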
Once the LiDAR returns have been clustered, the clusters are analyzed and those that can be identified as vehicles, based on shape and motion, are classified as such, and their centroids are estimated. All resulting clusters must be tracked using dynamic filters. Vehicle detections that are returned by the vision system or the radar sensors are matched to a LiDAR-generated cluster by looking for a LiDAR cluster within some distance threshold. If no suitable matching cluster is found, the detections may update or initialize a track without a corresponding LiDAR cluster. The output of the sensor fusion algorithm is a list of tracks. Each of the resulting tracks has a position and velocity, and the general size and shape of the point cluster supporting the track is abstracted as a set of linear features (Ozguner & Redmill, 2008).
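A minimal sketch of the nearest-cluster association step described above; the threshold is an assumed per-sensor tuning parameter, and in the full system a matched detection would then feed a dynamic tracking filter:

```python
import numpy as np

def match_detection(detection_xy, cluster_centroids, threshold):
    """Associate a radar/vision detection with the nearest LiDAR cluster
    centroid, or return None so the caller can initialize a new track."""
    if len(cluster_centroids) == 0:
        return None
    d = np.linalg.norm(np.asarray(cluster_centroids) - detection_xy, axis=1)
    best = int(np.argmin(d))
    return best if d[best] <= threshold else None
```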
- [1] For pure highway applications, a very restricted approach may be appropriate, as there are a number of distinct concerns, for example, lane edge location and static or moving obstacles in relevant locations. If we simply fuse data related to specific tasks, however, we will not necessarily get a complete and integrated representation of the world.
• For off-road applications, the software or hardware needs to provide compensation for vibration and other vertical and rolling motions, for example, using the inertial measurement unit (IMU) and sensor data to specifically generate a “ground plane” that can be referenced for sensor validation and fusion. Sensor adjustments are also required to deal with dust, rain, and changing lighting conditions.
• For domains where there are many moving obstacles (i.e., urban applications), we may need to “track” individual obstacles all the time.
• Specific operations (parking, dealing with intersections, entering/exiting highways, etc.) may use totally separate sensing and sensor architectures. This may include information provided by the infrastructure or other vehicles through wireless communication.