Data Acquisition and Intelligent Diagnosis

Data Acquisition Principle and Process for Laser Scanning, Visual Imaging, Infrared Imaging, UV Imaging


  • 5.1.1 Laser Scanning
  • Introduction

Automated restitution methods for object acquisition have gained more and more importance in recent years. Automatic image matching and laser scanning, often called LiDAR (light detection and ranging), have revolutionized 3D data acquisition for both topographic and close-range objects. In contrast to the “classical” manual data acquisition techniques, like terrestrial surveying and analytical photogrammetry, which require manual interpretation to derive a representation of the sensed objects, these new automatic recording methods allow an automated dense sampling of an object’s surface in a short time (Pfeifer & Briese, 2007).

Laser scanning is an emerging data acquisition technology that has remarkably broadened its application field to seriously compete with other surveying techniques. Scanning can be airborne or terrestrial. Terrestrial laser scanning is a reasonable alternative method in many kinds of applications that previously used ground-based surveying or close-range photogrammetry (Lovas, 2010).

Photographic images record passive solar or artificial radiation backscattered by objects in the camera’s field of view (FOV). The backscatter strength is typically resolved in one of the following ways:

i. Spatially, by pixels in the image plane;

ii. Chromatically, by recording in different wavelength bands;

iii. Radiometrically, by quantizing the photo current with typically 8-12 bits.

In contrast, laser scanning achieves spatial resolution by scanning the instantaneous FOV over the entire FOV with the help of mechanical devices, for example, a moving mirror. The backscatter is recorded for one wavelength only, i.e., monochromatically. No passive radiation is used for the measurement; rather, the backscatter of laser energy emitted by the sensor system itself is recorded. The time lag between emission of a laser beam and detection of its backscattered echo is measured as well. With the group velocity of light in the atmosphere, this time difference can be transformed into the range between emitter and detector.
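The range computation described above can be sketched in a few lines: the pulse travels to the target and back, so the range is half the product of the propagation speed and the measured time lag. The speed value below is the vacuum speed of light; in the atmosphere the group velocity is slightly lower.

```python
# Time-of-flight ranging: the laser pulse travels emitter -> target -> detector,
# so the one-way range is half of (propagation speed x measured time lag).
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(time_lag_s: float, group_velocity: float = C_VACUUM) -> float:
    """Range in meters from a measured round-trip time lag in seconds."""
    return 0.5 * group_velocity * time_lag_s

# A ~6.67 microsecond echo corresponds to roughly 1 km of range:
print(round(tof_range(6.67e-6)))  # ~1000 m
```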

Both photographs and the recorded echoes of laser beams are impaired, to a small extent only, by ambient light, i.e., energy not originating from the location of the specific sensed objects but stray light. Photographs record texture and color; laser scanning primarily measures ranges but also monochromatic reflectance. In both cases, however, data are acquired area-wise in a systematic manner. Recording electromagnetic radiation by a mapping that generates an image of object space is the basis for both. In images, measurements can be performed automatically or manually (Pfeifer & Briese, 2007).

In recent years, with the development of sensor fusion techniques, navigation solutions, including inertial measurements, and urban modeling, mobile laser scanning has gained momentum, as can be seen in the sensor manufacturers’ product lists. A significant paradigm change can be observed in geodesy, for example, in direct orientation instead of indirect orientation, in surface detection instead of point measurements, and in complex 3D models instead of simple coordinates.

All scanners are based on the same principle: the scanner emits a laser beam toward the ground/object and computes the distance by measuring the traveling time or the phase difference of the laser beam. The emission rate of cutting-edge sensors is in the 100-200 kHz range. The direction of the beam is determined by different types of rotating or oscillating mirrors that enable the scanning of the area of interest. In airborne and mobile laser scanning, the position of the sensor is given by high accuracy Global Navigation Satellite Systems (GNSSs) and inertial navigation systems (INSs).

Laser scanning is often referred to as LiDAR. The abbreviations ALM (airborne laser mapping) and ALS (airborne laser scanning) are also widely used, while “terrestrial laser scanning” refers to ground-based laser scanning (Lovas, 2010).

Airborne Laser Scanning (ALS)

ALS is an active remote sensing technology that is able to rapidly collect data from huge areas. The resulting dataset can be the basis for digital surface and elevation models. ALS is often coupled with airborne imagery; in this coupling, the point clouds and images can be fused, resulting in enhanced quality 3D products.

The basic principle is as follows: the sensor emits a laser pulse toward the terrain in a predefined direction and receives the reflected laser beam. If the speed of light is known, the distance to the object can be calculated; see Figure 5.1 (Lovas, 2010).

Airborne LiDAR systems are composed of the following subsystems:

  • Laser sensor and computing, data storage unit
  • INS/IMU (inertial measurement unit)
  • GNSS
  • GNSS ground station(s).

The components are shown in Figure 5.2:

Collecting point clouds from airborne platforms always requires that the path of the platform, i.e., its position and angular attitude, be observed continuously. With a pulse repetition rate (PRR) of 100 kHz, 100,000 range and scanner angle measurements are made per second, and for each one, the sensor coordinate system has its own exterior orientation. To increase accuracy, some form of ground control is necessary, requiring methods of strip adjustment.
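The pulse-budget arithmetic above can be sketched roughly as follows; the flying speed and swath width used below are illustrative assumptions, not figures from the text.

```python
# Rough ALS point-budget arithmetic: at a given pulse repetition rate every
# second yields PRR measurements, which are spread across the swath area
# covered per second of flight.
def points_per_second(prr_hz: float) -> float:
    return prr_hz  # one range/angle measurement per emitted pulse

def approx_point_density(prr_hz: float, speed_m_s: float, swath_m: float) -> float:
    """Average points per square meter: pulses per second / area covered per second."""
    return prr_hz / (speed_m_s * swath_m)

print(points_per_second(100_000))                   # 100000 measurements per second
print(approx_point_density(100_000, 60.0, 500.0))   # ~3.3 points per square meter
```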

Data in ALS are acquired strip-wise. A strip length of 20 km is not uncommon, but the strip length cannot be made arbitrarily long because of drift errors in the IMU, which are corrected after flight maneuvers.

FIGURE 5.1 Time-of-flight laser range measurement (Lovas, 2010).

FIGURE 5.2 Principle of airborne LiDAR (Lovas, 2010).

Wider areas are acquired by placing strips next to each other with an overlap to avoid gaps. Larger overlaps can increase accuracy in strip adjustment (see below) or increase the point density (e.g., by a strip overlap of 50%).

Laser scanning from a ground-based or water-based moving platform is similar to ALS, except the laser scanning is not primarily vertical and data acquisition follows the allowed routes (e.g., streets) rather than a systematic scanning of the project area (Pfeifer & Briese, 2007).

LiDAR Work Phases and Data Processing

Based on the particular application or specific requirements of a project, different work flows are executed. However, the main steps are often the same for many applications. A typical LiDAR application has the following work phases:

  • Planning (coverage, point density, flying parameters, etc.)
  • Deployment of GNSS base stations (if needed)
  • Calibration of equipment (e.g., to determine certain misalignments)
  • LiDAR surveying.

LiDAR data processing can be grouped in many ways. Assuming a typical airborne LiDAR project to derive a digital surface model (DSM), for example, for mobile communication companies to support precisely deployed base stations and antennas, the following data processing steps are required:

  • Data fusion: Warping of airborne images onto the DSM, etc.
  • Measurements on the model, advanced feature extraction.

Many of these data processing steps can be automated; certain procedures built into processing software are capable of classifying features, recognizing building roofs, trees, etc.

LiDAR data are usually stored in LAS format, but many manufacturers apply self-developed file formats. The LAS file format is a public file format for the interchange of LiDAR data between vendors and customers. This binary file format is an alternative to proprietary systems or generic ASCII file interchange systems used by many companies (Lovas, 2010).

Intensity Data

Recent LiDAR systems are capable of measuring the signal strength (i.e., intensity) of the reflected laser pulse. Different objects have different reflectivity; therefore, the intensity values can support object recognition and identification.

Note that LiDAR intensity values vary with lighting and weather conditions; therefore, they are seldom used alone, without additional data, for classification (Lovas, 2010).

Terrestrial Laser Scanning

While ALS competes with photogrammetry and interferometric radar (Lovas, 2010), terrestrial scanners come in two forms: so-called window scanners, with a FOV similar to a conventional area camera, and panoramic scanners (Pfeifer & Briese, 2007). Traditional close-range photogrammetry does not have a wide application field and is not considered a competing technology in planning projects (Lovas, 2010).

In the early 2000s, terrestrial laser scanning was used only for specific tasks, for example, surveying and modeling complex systems (e.g., cooling/heating pipes) of a factory. Recently, terrestrial laser scanning has broadened its application area and is used in projects conventionally considered to be in the field of traditional surveying (e.g., small-scale topographic surveys, road construction, in-door surveying of buildings).

The principle of terrestrial laser scanning is very similar to that of airborne surveying. The sensor continuously emits a laser beam towards an object, receives it back, and computes the distance of the object. The beam is directed by rotating or oscillating mirrors (usually the same mirror can operate both in rotating and oscillating mode). The main components of a terrestrial laser scanner are shown in Figure 5.3 (Lovas, 2010).

FIGURE 5.3 Components of terrestrial laser scanner (Lovas, 2010).

The main difference is that the sensor is not moving but is mounted on a tripod or another structure. Therefore, no positioning solution is needed; however, newer scanners can be directly connected to Global Positioning System (GPS) receivers to directly obtain the scanner station location (Lovas, 2010).

Normally, one scan is not enough to collect data covering the entire object of interest. If the scanner is placed inside, occlusions prevent all details being seen from one standpoint. Scanning the outside of an object requires more standpoints to scan the object from all sides. Three observations are made to measure one point on the object surface: the range r and two angles, α, the horizontal angle, and β, the vertical angle. In the sensor coordinate system, the coordinates (x, y, z) of the point are obtained by a conversion from the spherical to the Cartesian coordinate system (with β measured from the horizontal plane):

x = r cos β cos α,  y = r cos β sin α,  z = r sin β
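A minimal sketch of this spherical-to-Cartesian conversion, assuming the vertical angle is measured from the horizontal plane:

```python
import math

def polar_to_cartesian(r: float, alpha: float, beta: float) -> tuple:
    """Convert a scanner observation (range r, horizontal angle alpha,
    vertical angle beta, both in radians, beta measured from the horizontal
    plane) into sensor-frame Cartesian coordinates."""
    x = r * math.cos(beta) * math.cos(alpha)
    y = r * math.cos(beta) * math.sin(alpha)
    z = r * math.sin(beta)
    return (x, y, z)

# A point 10 m away, straight ahead, level with the scanner:
print(polar_to_cartesian(10.0, 0.0, 0.0))  # (10.0, 0.0, 0.0)
```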

If only the object itself is of interest, it is sufficient to determine the relative orientation of the scans. If the object also has to be placed in a superior coordinate system, absolute orientation becomes necessary. If the superior coordinate system is earth-fixed, this is called georeferencing (Pfeifer & Briese, 2007).

Data Processing

The workflow of the terrestrial laser scanning procedure can be summarized as follows (Lovas, 2010):

  • Preparation (planning, preliminary geodetic measurements, etc.)
  • Scanning
  • Registration/georeferencing (if required and if the necessary preparation and scanning were done)
  • Selecting the area of interest (optional)
  • Filtering and converting data
  • Segmenting and classifying data
  • Modeling (triangulation, rendering, fitting geometrical elements onto a point cloud)
  • Measurements on the model
  • Visualization
  • Application-dependent products (e.g., cross sections for architects).

As in most engineering projects, project planning and preparatory work must be emphasized, including creating the geodetic network (if needed), checking field of view, and planning scan station locations. Besides saving costs, optimal scan station locations achieve the required coverage, point density, and visibility of dedicated parts of the object and ensure the desired accuracy.

Scanning usually starts with panorama scanning, i.e., a low-resolution survey of the surrounding area. Then the area to be mapped can be selected on this point cloud or specified by corner coordinates or angular ranges. Specific points (e.g., control points or dedicated points of a structure whose displacement has to be measured) can be marked with specific targets. These are special stickers or objects (usually in the form of a disc, cylinder, or sphere) with extremely high reflectivity. The scanner software is able to recognize the reflectors in the scanned range and provide the coordinates of the centers of the reflectors. Careful planning is crucial when deploying the reflectors. If repeatability is an issue (e.g., in monitoring and quality control projects), the reflector locations have to be marked in such a way that they will remain until the next survey. The protection of reflectors also has to be solved for scans in open areas.

In registration and georeferencing, the resulting point cloud(s) and images have to be transformed into a given coordinate system. In many cases, there is no need to transform the data into a local coordinate system; measurements can be executed in the scanner’s own coordinate system. The images taken by the scanner camera or by a camera mounted on a scanner are usually warped onto the point cloud by the scanner software.

Scanners capture points reflected from all objects in the FOV and range; thus, selecting the area of interest (i.e., points reflected from the object(s) to be mapped) has to be done before modeling and any kinds of measurements.

Before modeling, further preprocessing steps are needed: converting the point cloud into the required format (depending on the processing software’s requirements), filtering outliers, and interpolating points into a predefined grid (if needed). In other words, the goal is a clean dataset that meets the requirements of the further processing steps.
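Outlier filtering of the kind mentioned above can be sketched with a simple rule of thumb; the tolerance factor and the sample ranges below are assumptions for illustration, not a method prescribed by the text.

```python
import statistics

# Sketch of a simple outlier filter: discard points whose value deviates from
# the median by more than k standard deviations (k is an assumed tolerance).
def filter_outliers(values, k=2.0):
    med = statistics.median(values)
    sd = statistics.pstdev(values)
    return [v for v in values if abs(v - med) <= k * sd]

ranges = [10.1, 10.2, 9.9, 10.0, 10.3, 55.0]  # one spurious echo
print(filter_outliers(ranges))  # the 55.0 outlier is removed
```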

In some applications, segmentation and/or classification of certain areas in the point cloud are needed, but in most cases, the entire dataset has to be modeled. Modeling can cover wide ranges of procedures, for example, triangulating surfaces, fitting geometric elements on the point cloud, detecting edges, and creating a vector model.

Some applications, such as engineering surveys, need to derive particular values, for example, displacement between two or more dedicated points or the measurement of certain distances. In some cases, these values can be obtained without modeling, simply executing measurements on the point cloud. Note that there are specific point cloud processing software packages available on the market.

Visualization is more important in laser scanning than in other geodetic procedures; the results must be presented to the customers, users, and decision makers in an easily understandable form.

Specific applications may require specific products, such as cross and longitudinal sections for architectural design purposes, specific distance and volume calculations of artifacts for archaeological surveys, deformation measurements at special parts of structures, and surface material features. These kinds of specific products often require intense consultation with experts in the area.

Note that it is recommended to include a detailed description of the end product of laser scanning in the project contract. The raw point cloud or even a 3D model can be useless for the user if the required measurements, evaluations, or any kinds of assessments cannot be executed. For cross-checking purposes, consultation with independent experts or, at least, gathering information from other projects, is highly recommended (Lovas, 2010).

Mobile Laser Scanning

The most recent application of laser scanning is the mobile mapping application, where the scanner is mounted on a mobile platform, mostly on a passenger car or truck. Mobile laser scanning is used in projects where big areas (or long corridors) are covered on the ground and data are to be acquired for those areas (Lovas, 2010).

Data Processing

The main work phases of processing mobile laser scanned data are as follows:

  • Calculating position and orientation of the sensor platform
  • Georeferencing the point cloud and registering the images
  • Coarsely classifying points (e.g., ground, vegetation, building, and other)
  • Measuring, evaluating, and modeling according to the particular application.

Sensor position/trajectory and orientation estimation is generally supported by Kalman filtering. Georeferencing and registration of point clouds are the main differences between terrestrial and mobile laser scanning. Since an urban environment has areas where no GNSS signal is available (or the signal is less accurate), and an INS provides sufficient accuracy only over a limited range, careful planning of the measurement is needed. These factors have to be considered during the accuracy assessment (Lovas, 2010).
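A toy one-dimensional illustration of the Kalman filtering mentioned above: INS dead reckoning drives the prediction, and an intermittent GNSS fix corrects it. All values and the scalar model are invented for illustration, not parameters of any real navigation system.

```python
# Toy 1-D Kalman filter: blend INS dead reckoning (prediction) with
# intermittent GNSS position fixes (update).
def kalman_step(x, p, ins_velocity, dt, q, gnss_pos=None, r=None):
    # Predict: propagate the position with the INS-measured velocity.
    x = x + ins_velocity * dt
    p = p + q                      # process noise grows the uncertainty
    # Update: if a GNSS fix is available, blend it in by its relative precision.
    if gnss_pos is not None:
        k = p / (p + r)            # Kalman gain
        x = x + k * (gnss_pos - x)
        p = (1.0 - k) * p
    return x, p

x, p = 0.0, 1.0
x, p = kalman_step(x, p, ins_velocity=10.0, dt=1.0, q=0.5)  # GNSS outage
x, p = kalman_step(x, p, ins_velocity=10.0, dt=1.0, q=0.5,
                   gnss_pos=19.0, r=1.0)
print(round(x, 2))  # 19.33: the estimate is pulled toward the GNSS fix
```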

  • 5.1.2 Visual Imaging
  • What Is Visual Information?

Two kinds of information are associated with a visual object (image or video): information about the object, called its metadata, and information contained within the object, called visual features. Metadata are alphanumeric and generally expressible as a schema of a relational or object-oriented database.

Visual features are derived through computational processes—typically image processing, computer vision, and computational geometric routines—executed on the visual object (Gupta & Jain, 1997).

The simplest visual features that can be computed are based on pixel values of raw data, and several early image database systems used pixels as the basis of their data models. These systems can answer such queries as the following (Gupta & Jain, 1997):

  • Find all images for which the 100th to 200th pixels are orange, if orange is defined as having a mean value of (red = 255, green = 130, and blue = 0).
  • Find all images that have about the same color in the central region of the image as this particular one. The “central region” of the image can be specified by a coordinate system, and the expression “about the same color” is usually defined by computing a color distance. A variant of the Euclidean distance is often used to compare two color values.
  • Find all images that are shifted versions of this particular image, in which the maximum allowable shift is D.
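The “about the same color” test in the queries above can be sketched with a Euclidean color distance; the threshold value is an assumption for illustration, not one from the text.

```python
import math

def color_distance(c1, c2):
    """Euclidean distance between two (R, G, B) values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

ORANGE = (255, 130, 0)  # the reference "orange" from the query above

def about_same_color(region_mean_rgb, reference=ORANGE, threshold=30.0):
    # 'threshold' is an illustrative tolerance, not a value from the text.
    return color_distance(region_mean_rgb, reference) <= threshold

print(about_same_color((250, 140, 5)))   # True: close to the reference orange
print(about_same_color((10, 200, 250)))  # False: nowhere near orange
```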

If the user’s requirements are satisfied with this class of queries, data modeling for visual information is almost trivially simple. More realistically, however, a pixel-based model suffers from several drawbacks. First, it is very sensitive to noise; therefore, a couple of noise pixels may be sufficient to cause it to discard a candidate image for the first two queries. Second, translation and rotation invariance are often desirable properties for images. For example, for the third query, if the database contains a 15° rotated version of this image, the rotated version may not be reported by the system. Third, apart from noise, variations in illumination and other imaging conditions affect pixel values drastically, leading to incorrect query results (Gupta & Jain, 1997).

That is not to say pixel-oriented models are without merit. Significant video segmentation results can be obtained by measuring pixel differences over time. For example, an abrupt scene change can be modeled by finding major discontinuities in time plots of cumulative pixel difference over frames (Hampapur, 1995). However, information retrieval based only on pixel values is not very effective (Gupta & Jain, 1997).
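The cumulative pixel-difference idea above can be sketched as follows; the tiny four-pixel frames and the threshold are invented for illustration.

```python
# Sketch of abrupt scene-change detection: sum absolute pixel differences
# between consecutive frames and flag frames where the jump exceeds a threshold.
def frame_difference(f1, f2):
    return sum(abs(a - b) for a, b in zip(f1, f2))

def detect_cuts(frames, threshold):
    cuts = []
    for i in range(1, len(frames)):
        if frame_difference(frames[i - 1], frames[i]) > threshold:
            cuts.append(i)
    return cuts

# Four tiny 4-pixel "frames": a gradual drift, then an abrupt change at frame 2.
frames = [
    [10, 10, 10, 10],
    [12, 11, 10, 9],
    [200, 210, 190, 205],
    [201, 209, 191, 204],
]
print(detect_cuts(frames, threshold=50))  # [2]
```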

Consider a database of aerial images in which the only objects of interest are buildings, ground vehicles, aircraft, roads, and general terrain. Also imagine that a human interpreter draws bounding rectangles for each region in an image in which one or more of these five kinds of objects appear and labels the regions accordingly. Now we have a fairly precise specification of the information contained in the images. That information can be directly modeled by a relational database schema that maintains the location (bounding box) of each object type and a time stamp for each image. With some additional geometric processing added to this relational model, we can answer very complex queries (Gupta & Jain, 1997), such as the following:

  • Is there any location where more than five ground vehicles are close to a building located in the middle of the general terrain?
  • Have there been any changes in the position of the aircraft at this location in the past couple of hours?
  • Which approach roads have been used by ground vehicles over the past few days to come close to the aircraft?
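The first query above could be sketched against such a relational store of labeled regions; the records and the distance threshold below are invented for illustration.

```python
import math

# Each record: (label, center_x, center_y) derived from a bounding box,
# as a human interpreter might store them (all values are invented).
objects = [
    ("building", 50, 50),
    ("ground_vehicle", 52, 48),
    ("ground_vehicle", 55, 53),
    ("ground_vehicle", 47, 51),
    ("ground_vehicle", 49, 45),
    ("ground_vehicle", 53, 55),
    ("ground_vehicle", 54, 47),
    ("aircraft", 300, 300),
]

def vehicles_near_building(records, max_dist=10.0):
    """Count ground vehicles within max_dist of any building."""
    buildings = [(x, y) for label, x, y in records if label == "building"]
    count = 0
    for label, x, y in records:
        if label == "ground_vehicle":
            if any(math.hypot(x - bx, y - by) <= max_dist for bx, by in buildings):
                count += 1
    return count

print(vehicles_near_building(objects) > 5)  # True: six vehicles within range
```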

While these queries are meaningful, the most crucial part of information retrieval, information extraction, is performed by a human using his or her knowledge and experience in aerial image interpretation. The reason this task requires a human is simple: fully automatic interpretation of aerial images is an unsolved research problem. If the human extracts the useful information, we can then use a spatial database system to organize and retrieve it. In a real-life aerial surveillance situation, this approach is unrealistic. For a battlefield application, the territory under surveillance is large enough to need several camera-carrying aircraft. Images from every aircraft, each image several MB in size, stream in at the video rate of 30 frames per second. The high influx of images means error-free interpretation takes a long time; hence, the simple image database scenario we painted is not practical for any time-critical operation (Gupta & Jain, 1997).

What Is Image Processing?

Image processing is a method to convert an image into digital form and perform some operations on it to get an enhanced image or to extract some useful information. It is a type of signal processing in which the input is an image, like a video frame or photograph, and the output may be an image or characteristics associated with that image. An image processing system usually treats images as two-dimensional signals and applies preset signal processing methods.

It is a rapidly growing technology, with applications in many areas of business. Image processing also represents a core research area within engineering and computer science disciplines (Mary, 2011).

Image processing basically includes the following three steps (Mary, 2011):

  • Importing the image with an optical scanner or by digital photography.
  • Analyzing and manipulating the image, including data compression and image enhancement, and spotting patterns that are not visible to the human eye, as in satellite photographs.
  • Outputting an altered image or report based on image analysis.
  • Purpose of Image Processing

The purpose of image processing can be divided into five groups (Mary, 2011):

  • 1. Visualization: Observing objects that are not visible.
  • 2. Image sharpening and restoration: Creating a better image.
  • 3. Image retrieval: Seeking the image of interest.
  • 4. Measurement of pattern: Measuring various objects in an image.
  • 5. Image recognition: Distinguishing the objects in an image.
  • Methods Used for Image Processing

The two types of methods used for image processing are analog and digital image processing.

Analog techniques of image processing can be used for hard copies like printouts and photographs. Image analysts use various fundamentals of interpretation for these visual techniques. Image processing in this case is not confined to the area being studied; the knowledge of the analyst also comes into play. Association is another important tool in image processing using visual techniques. In association, analysts apply a combination of personal knowledge and collateral data to image processing.

Digital processing techniques enable the manipulation of digital images using computers. Raw data from imaging sensors on satellite platforms contain deficiencies. To overcome such flaws and to obtain the original information, the raw data must undergo various phases of processing: preprocessing, enhancement and display, and information extraction (Mary, 2011) (Figure 5.4).

  • Image Processing Applications
  • 1. Intelligent transportation systems: This technique can be used in automatic number plate recognition and traffic sign recognition.
  • 2. Remote sensing: For this application, sensors capture pictures of the earth’s surface from remote sensing satellites or multispectral scanners mounted on an aircraft. These pictures are transmitted to the earth station, where they are processed. They are used in flood control, city planning, resource mobilization, agricultural production monitoring, etc.

FIGURE 5.4 Flow chart showing phases in digital image processing (Mary, 2011).

  • 3. Moving object tracking: This application enables the measurement of motion parameters and the acquisition of a visual record of the moving object. The different approaches to tracking an object are:
    • Motion-based tracking
    • Recognition-based tracking.
  • 4. Defense surveillance: Aerial surveillance methods are used to continuously keep an eye on the land and oceans. This application is also used to locate the types and formations of naval vessels on the ocean surface. An important task is to segment the various objects present in the water-body part of the image. Different parameters, such as length, breadth, area, perimeter, and compactness, are established to classify each object. It is important to recognize the distribution of these objects in all directions to explain all possible formations of the vessels. We can interpret the entire oceanic scenario from the spatial distribution of these objects.
  • 5. Biomedical imaging techniques: Various types of imaging tools, such as X-ray, ultrasound, and computer-aided tomography (CT), are used for medical diagnosis. X-ray, magnetic resonance imaging (MRI), and CT are shown in Figure 5.5 (Mary, 2011).
FIGURE 5.5 Representational image of X-ray, MRI, and CT (Mary, 2011).

Some biomedical imaging applications are the following (Mary, 2011):

  • Heart disease identification: Important diagnostic features, such as size of the heart and its shape, are required to classify heart diseases. To improve the diagnosis of heart diseases, image analysis techniques are applied to radiographic images.
  • Lung disease identification: In X-rays, the regions that appear dark contain air while regions that appear lighter are solid tissues. Bones are more radio opaque than tissues. The ribs, heart, thoracic spine, and diaphragm are clearly seen on an X-ray film.
  • Digital mammograms: These are used to detect breast tumors. Mammograms can be analyzed using image processing techniques, such as segmentation, shape analysis, contrast enhancement, and feature extraction.
  • 6. Automatic visual inspection system: This application improves the quality and productivity of products.
  • Automatic inspection of incandescent lamp filaments: This involves examination of the bulb manufacturing process. Because there is no uniformity in the pitch of the wiring in a lamp, the filament of the bulb becomes fused within a short period of time. In this application, a binary image slice of the filament is created and the silhouette of the filament is fabricated from it. Silhouettes are used to recognize the nonuniformity in the pitch of the lamp’s wiring. This system is used by General Electric Corporation.
  • Automatic surface inspection systems: In metal industries, it is essential to detect surface flaws. For instance, it is essential to detect any kind of aberration on the rolled metal surface in hot or cold rolling mills in a steel plant. Image processing techniques, such as texture identification, edge detection, fractal analysis, etc., are used for detection.
  • Faulty component identification: This application identifies the faulty components in electronic or electromechanical systems. A higher amount of thermal energy is generated by faulty components. Infrared images are produced from the distribution of thermal energy in the assembly. The faulty components can be identified by analyzing the infrared images.
  • 5.1.3 Infrared Imaging
  • Introduction

What we typically think of as “light” is really electromagnetic radiation that our eyes can see. We perceive the world in the colors of the rainbow, red through violet. But these colors of light are actually a very small portion of the electromagnetic spectrum.

Our eyes are capable of seeing only a very narrow region of the electromagnetic spectrum, and we need special instruments to extend our vision beyond the limitations of the unaided eye. As the energy of light changes, so too does its interaction with matter. Materials that are opaque at one wavelength may be transparent at another. A familiar example of this phenomenon is the penetration of soft tissue by X-rays. What is opaque to visible light becomes transparent to reveal the bones within.

Extending human vision with electronic imaging is one of the most powerful techniques available to science and industry, particularly when it enables us to see light in the infrared (IR) portion of the spectrum. IR means “below red,” as IR light has less energy than red light. We typically describe light energy in terms of wavelength, and as the energy of light decreases, its wavelength gets longer. IR light, having less energy than visible light, has a correspondingly longer wavelength. The IR portion of the spectrum has wavelengths ranging from 1 to 15 μm, or about 2 to 30 times longer wavelengths (and 2-30 times less energy) than visible light.
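The "2-30 times less energy" figure follows from photon energy being inversely proportional to wavelength (E = hc/λ); a quick check for the far end of the band stated above:

```python
# Photon energy is inversely proportional to wavelength (E = h*c / wavelength),
# so the energy ratio between two wavelengths is just their inverse ratio.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_m: float) -> float:
    return H * C / wavelength_m

visible = photon_energy_joules(0.5e-6)   # green light, ~0.5 um
far_ir = photon_energy_joules(15e-6)     # far end of the IR band in the text
print(round(visible / far_ir))  # 30: a 15 um photon carries ~30x less energy
```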

IR light is invisible to the unaided eye but can be felt as heat on our skin. Warm objects emit IR light, and the hotter the object, the shorter the wavelength of IR light emitted. This IR “glow” enables rescue workers equipped with longwave IR sensors to locate a lost person in a deep forest in total darkness, for example. IR light can penetrate smoke and fog better than visible light, revealing objects that are normally obscured. It can also be used to detect the presence of excess heat or cold in a piece of machinery or a chemical reaction (Janos Technology, 2019).

What Is Infrared (IR) Imaging?

IR imaging is a technique of capturing the IR light from objects and converting it into visible images interpretable by a human eye.

The IR region spans roughly 0.7-100 μm in the electromagnetic spectrum and can be divided into three bands: the near-IR region from 0.7 to 1.3 μm; the mid-IR region from 1.3 to 3 μm; and the thermal-IR region in the remaining part of the band. While the first two are used in general electronic applications like remote controls and illuminated IR photography, thermal IR is used in thermal imaging. The main difference is that the first two are used in reflective types of applications, while thermal IR is emitted by the object, not reflected from it.

At absolute zero, perfect order is believed to exist in the atomic structure, with no collisions and minimal entropy. Any object above absolute zero exhibits atomic motion and collisions, resulting in radiated thermal energy, most of which falls in the IR band. If this radiation can be detected by some means, objects can be visualized through their radiation patterns without the need for an optical source. This forms the basis of IR imaging (Thakur, 2011).

Types of Infrared Imaging

Illumination-based imaging: Single-lens reflex (SLR) and digital cameras featuring IR night vision often rely on the basic fact that the charge-coupled devices (CCDs) and complementary metal-oxide semiconductor (CMOS) sensors used in them are sensitive to the near-IR region, which comprises the nonthermal part. Thus, a nearby source of IR illumination during the night, or the sun during the day, serves as the primary source of IR radiation, which is then reflected in varying degrees by the object being photographed, producing an IR image.

IR filters block all other forms of visible light from reaching the film. Very interesting in-camera effects with dream-like or lurid coloring, called the Wood Effect after the IR photography pioneer Robert W. Wood, appear because of the reflection of IR rays from foliage. The sources of illumination can be incandescent lamps with an IR filter in front of them, LED illuminators, or laser-type illuminators based on laser diodes (Thakur, 2011) (Figure 5.6).

Thermal imaging: The wider part of the IR band comprises thermal IR, which is emitted from almost everything above absolute zero. Thus, it is important in IR imaging. This class of IR imaging includes the following steps (Thakur, 2011):

  • 1. The IR light emanating from objects in the scene is focused by means of a special lens.
  • 2. A phased array of IR detectors scans the light to create a detailed temperature pattern called a thermogram.

FIGURE 5.6 Illumination-based thermal imaging (Thakur, 2011).

  • 3. This thermogram is converted into electrical impulses which are fed into a dedicated signal processing chip to convert the electrical data into a format suitable for viewing purposes.
  • 4. This information is sent to the display unit where it appears as an image.
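Steps 3 and 4 above amount to converting the thermogram into a displayable image. A minimal sketch of that rescaling is shown below; the linear 8-bit mapping and the sample values are illustrative assumptions, not the actual processing performed by a dedicated signal processing chip.

```python
import numpy as np

def thermogram_to_display(thermogram: np.ndarray) -> np.ndarray:
    """Map a 2-D array of temperatures to an 8-bit grayscale image.

    The coolest pixel maps to 0 and the hottest to 255, ready for a
    display unit (a hypothetical, minimal stand-in for steps 3-4).
    """
    t_min, t_max = float(thermogram.min()), float(thermogram.max())
    span = (t_max - t_min) or 1.0           # avoid division by zero
    scaled = (thermogram - t_min) / span    # normalized to 0.0 .. 1.0
    return (scaled * 255).astype(np.uint8)

# A warm spot on a 20 °C background becomes the brightest region.
frame = np.full((4, 4), 20.0)
frame[1, 1] = 37.0                          # e.g., a person's skin
image = thermogram_to_display(frame)
```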
Data Processing of Infrared Thermography (IRT)

With the recent development of advanced excitation technologies, a new research line is gaining ground. This new research deals with data processing algorithms, which are used not only to improve the level of detection of the IR thermography (IRT) technology but also to characterize the detected defects to automate the inspection process (Ibarra-Castanedo et al., 2004). Important data processing techniques used in IRT include statistical moments, principal component analysis, dynamic thermal tomography (DTT), polynomial fit and derivatives, and pulsed phase thermography (PPT) (Usamentiaga et al., 2014).

Statistical Moments

Data obtained with IRT are a sequence of numerical values. These numerical values can be treated by statistical functions describing certain behaviors to detect significant changes between some values and others. Different statistical moments offer different results (Usamentiaga et al., 2014).

The term moment is used to represent the expected values of the different powers of a random variable (Madruga, Ibarra-Castanedo, Conde, Lopez-Higuera, & Maldague, 2010) and to determine the degree to which data fit a given type of distribution. Skewness is the third standardized statistical moment of a distribution. The mathematical formula used to calculate skewness can be seen in Equation (5.1), where μ is the mean value and σ is the standard deviation of the random variable x. E is the mathematical expectation, defined as Equation (5.2), where n is the number of data points (Usamentiaga et al., 2014).
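In standard notation, the skewness and the sample expectation referenced as Equations (5.1) and (5.2) are commonly written as:

```latex
s = E\!\left[\left(\frac{x-\mu}{\sigma}\right)^{3}\right] \tag{5.1}
\qquad
E[x] = \frac{1}{n}\sum_{i=1}^{n} x_i \tag{5.2}
```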

Using skewness, it is possible to measure the asymmetry of the probability distribution of a real-valued random variable. This method is an appropriate processing technique for IR images; the application of this statistical method is only slightly affected by nonuniform heating or by the shape of the surface of the tested material (Madruga, Ibarra-Castanedo, Conde, Lopez-Higuera, & Maldague, 2008).

Kurtosis is the fourth standardized statistical moment of a distribution. It is generally defined as a measure that reflects the degree to which a distribution has a peak shape (Albendea, Madruga, Cobo, & Lopez-Higuera, 2010). In particular, kurtosis provides information about the height of the distribution in relation to the value of the standard deviation. Mathematically, it is defined as Equation (5.3) (Usamentiaga et al., 2014):
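In the same notation as Equations (5.1) and (5.2), the kurtosis of Equation (5.3) is the corresponding fourth-order standardized moment:

```latex
k = E\!\left[\left(\frac{x-\mu}{\sigma}\right)^{4}\right] \tag{5.3}
```

(Some authors subtract 3 from this quantity, the "excess kurtosis," so that a normal distribution has value zero.)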

The temperature distribution of a defect in an image has a kurtosis value that differs from an area without defects, depending on the thermal diffusivity of the defect area. Therefore, it is possible to estimate the kurtosis values for every pixel in the image sequence and to obtain a unique image showing these values: a kurtogram. The kurtogram gives an indication of the location of defects on the subsurface and their thermal diffusion (Madruga, Ibarra-Castanedo, Conde, Maldague, & Lopez-Higuera, 2009).

Principal Component Analysis

Principal component analysis (PCA) is a statistical technique to synthesize information. Its objectives are, first, to reduce the number of variables in a dataset, losing the least amount of information possible, and, second, to highlight the differences and similarities in data (Usamentiaga et al., 2014).

Smith (2002) describes the steps for applying this method to a set of data, and Cramer and Winfree (2005) show the effectiveness of this method for reducing thermographic data.

Processing based on PCA uses a set of statistical orthogonal functions, known as empirical orthogonal functions (EOFs), to decompose the thermal sequence of the surface temperature variation of a specimen, obtained after a pulsed active thermography test, into its principal components. In this way, data can be reduced without deleting useful information. These principal components are obtained from the singular value decomposition (SVD) of the temporal thermal data matrix. The method is called principal component thermography (PCT) (Usamentiaga et al., 2014).
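The PCT decomposition described above can be sketched with NumPy's SVD. The frame count, image size, and random stand-in data below are illustrative assumptions, not a real thermal sequence.

```python
import numpy as np

# Sketch of principal component thermography (PCT): decompose a thermal
# sequence of nt frames of size (ny, nx) into empirical orthogonal
# functions (EOFs) via singular value decomposition.
rng = np.random.default_rng(0)
nt, ny, nx = 20, 8, 8
sequence = rng.normal(size=(nt, ny, nx))   # stand-in for recorded frames

# Arrange the sequence as an nt x (ny*nx) matrix and centre each pixel's
# time history before decomposing.
A = sequence.reshape(nt, ny * nx)
A = A - A.mean(axis=0)
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Rows of Vt, reshaped to image size, are the spatial EOFs; the leading
# ones capture most of the variance, so the data can be truncated there
# without deleting useful information.
eofs = Vt.reshape(-1, ny, nx)
explained = (S**2) / np.sum(S**2)          # variance fraction per component
```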

Dynamic Thermal Tomography

DTT means “layering” a test specimen into different layers corresponding to different depths to observe the distribution of its thermal properties. This technique is based on the analysis of the surface temperature evolution after applying an external thermal excitation (Swiderski, 2008).

This algorithm is applied to a thermal image sequence where the evolution of the temperature is observed over time. To utilize this technique, the time evolution of each pixel in the image is fitted with polynomials of different orders. While low-order polynomials describe the behavior of the areas without defects, higher-order polynomials describe the variations of the defects. The temperature differential ΔT for each pixel is expressed as Equation (5.4) (Usamentiaga et al., 2014):
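As a hedged illustration of the polynomial-fit idea (not a reproduction of Equation (5.4)), the sketch below fits the logarithm of a pixel's cooling curve with made-up decay data: an ideal defect-free decay after pulsed heating follows T ∝ 1/√t, a straight line of slope -1/2 in log-log coordinates, while a defect pixel deviates from that line.

```python
import numpy as np

# Synthetic cooling curves (illustrative assumptions, not measured data).
t = np.linspace(0.1, 2.0, 50)                  # time after excitation (s)
sound = 5.0 / np.sqrt(t)                        # ideal defect-free decay
defect = sound + 0.8 * np.exp(-(t - 1.0)**2)    # extra heat over a defect

# Low-order (here linear) fits in log-log coordinates: the sound area
# reproduces the -1/2 slope exactly; the defect pixel does not, which is
# what flags it for a higher-order fit.
slope_sound = np.polyfit(np.log(t), np.log(sound), 1)[0]
slope_defect = np.polyfit(np.log(t), np.log(defect), 1)[0]
```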

The DTT algorithm returns two different images: the maxigram, where the maximum values of ΔT are observed, and the timegram, which indicates the time at which these maximum values take place (Usamentiaga et al., 2014).

Pulsed Phase Thermography

PPT is based on the phase calculation of a sequence of images, in which the time history of each pixel describes the thermal propagation of an external energy excitation (Maldague, Galmiche, & Ziadi, 2002; Ibarra-Castanedo & Maldague, 2004). PPT transforms the image sequence into the frequency domain using the discrete Fourier transform (DFT), as seen in Equation (5.5), where i is the imaginary number, n is the frequency increment, and Re_n and Im_n are the real and imaginary parts of the DFT (Usamentiaga et al., 2014):

Finally, the phase of each pixel is defined as Equation (5.6) (Usamentiaga et al., 2014):
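In the PPT formulation of the cited literature, Equations (5.5) and (5.6) are commonly written as follows, where Δt is the sampling interval and N the number of frames:

```latex
F_n = \Delta t \sum_{k=0}^{N-1} T(k\,\Delta t)\, e^{-i\,2\pi nk/N}
    = \mathrm{Re}_n + i\,\mathrm{Im}_n \tag{5.5}
\qquad
\phi_n = \tan^{-1}\!\left(\frac{\mathrm{Im}_n}{\mathrm{Re}_n}\right) \tag{5.6}
```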

This technique combines the advantages of modulated and pulsed IRT (Maldague & Marinetti, 1996).

Applications

Originally developed to augment military capabilities, IR imaging has found use in numerous industrial and medical applications and is a seasoned tool in military, astronomy, and remote sensing. It offers an alternative to security officials who no longer have to frisk suspects manually for dangerous weapons. Meteorology departments have been able to develop eyes that can observe the earth’s atmosphere around the clock without depending on the sun for illumination. In addition, IR measurements of cloud temperatures offer insights into the height and moisture content of the cloud cover and into climatic changes (Thakur, 2011) (Figure 5.7).

The Spitzer Space Telescope, launched a few years ago to take deep space images, is based on the IR band. Near-IR cameras using active illumination from an IR source have been used in photography and are becoming a standard add-on feature in modern digital cameras. They also comprise the generation-0 of night vision systems. IR photography remains a popular form of photography, as it produces images with surreal colors and details otherwise invisible to the human eye.

Thermal IR has been the driving technology behind the first-, second-, third-, and fourth-generation night vision devices. It helps us see when there is no source of light, even penetrating layers of fog or smoke. Firefighting departments have endorsed thermal vision gear as indispensable for any firefighter to locate survivors beneath rubble.

In the medical field, IR imaging has been successfully deployed for oncology, respiratory problems, vascular disorders, skeletal diseases and tissue viability, cancer detection, etc., under the common name DITI (for digital IR thermal imaging). Forward-looking IR or FLIR helped detect the spread of H1N1 swine flu in airports in 2009.

IR imaging is an indispensable tool in industry for nondestructive testing of faults in materials, weld verification, fault detection, etc. A noncontact testing method is more desirable than any contact testing procedure (Thakur, 2011).

5.1.4 Ultraviolet (UV) Imaging

UV imaging has a wide variety of scientific, industrial, and medical applications, for instance, in forensics (Krauss & Warlen, 1985), industrial fault inspection (Chen, Wang, & Yu, 2008), astronomy, skin condition monitoring (Fulton, 1997), and remote sensing (McElhoe & Conner, 1986; Smekens, Burton, & Clarke, 2015). To date, scientific-grade UV cameras, which have elevated quantum efficiencies in this spectral region, have been applied in this context. However, these systems are relatively expensive (typical units cost thousands of dollars) and can be power intensive, since they may incorporate thermoelectric cooling. Although these units may provide high signal-to-noise ratios, a lower price point solution could expedite more widespread implementation of UV imaging (Wilkes et al., 2016).

It is important to distinguish between reflected-UV imaging and UV-fluorescence imaging. Although both use UV lighting, they are entirely different. Reflected-UV imaging starts by illuminating the item being inspected with the emission of a UV LED, lamp, or laser diode; the UV light is reflected off the item and is then captured by the camera. The wavelength of the UV light is not converted or shifted in this process.

While UV-fluorescence imaging also requires illuminating a surface with UV light, the fluorescent material absorbs the UV light, exciting electrons that release the absorbed energy as light at a longer wavelength. The light emitted during this process is usually in the visible range, and in industrial applications, it will usually be blue light. In this type of reaction, light energy in will always exceed light energy out.

UV imaging inspection isn’t used often in machine vision. However, as UV-sensitive cameras and UV-emitting light sources, particularly LED lighting, have become widely available and less costly, new applications are emerging. Monochromatic UV sources, such as lasers and LEDs, are desirable in machine vision applications because when paired with appropriate bandpass filters, camera optics don’t need to be achromatic, significantly lowering cost.

Images formed with monochromatic illumination are always sharper than images made with broadband UV sources, and resolving capability naturally increases as the wavelength used to image the item being inspected is shortened. With UV illumination, smaller features can often be formed and detected more easily and accurately. This is why monochromatic UV (excimer) lasers and optical imaging are used in producing integrated circuits.
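The link between wavelength and the smallest resolvable feature can be illustrated with the Abbe diffraction limit, d = λ/(2·NA). This standard optics relation is not from the cited source, and the numerical aperture below is an assumed example value.

```python
# Abbe diffraction limit: the smallest resolvable feature shrinks in
# direct proportion to the illumination wavelength.
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable feature size d = lambda / (2 * NA), in nm."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Same lens (assumed NA = 0.9), different illumination:
visible = abbe_limit_nm(550.0, 0.9)   # green visible light
deep_uv = abbe_limit_nm(248.0, 0.9)   # KrF excimer laser line
```

Moving from green light to the 248-nm excimer line more than halves the resolvable feature size, which is why photolithography pushes toward shorter UV wavelengths.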

The UV band is broad, spanning a wavelength range from 10 nm (below this are X-ray wavelengths) to 400 nm (above this are visible wavelengths). A system’s cameras, optics, filtering, and illumination must be carefully selected according to the UV range being imaged; because of internal camera filtering and the optical glasses used, most visibly optimized CCDs, cameras, and lens systems block all of the deep-UV (DUV) and most of the near-UV spectrum.

The near-UV range, between 290 and 400 nm, is most commonly used in industrial imaging applications. This range is typically subdivided into UV-A (320-400 nm) and UV-B (290-320 nm) radiation. Standard optical glasses absorb light and cannot be used for imaging in the region below 290 nm, known as the UV-C or DUV portion of the spectrum. Instead, lenses incorporating fused silica, fused quartz, or calcium fluoride are designed for these applications. Below 180-190 nm, air absorbs UV light. This UV portion is often referred to as the vacuum UV (VUV), since imaging can only take place in a very high vacuum or nitrogen environment (MIDOPT) (Figure 5.8).
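The sub-band boundaries above can be collected into a small lookup. The function name is illustrative, and the exact cut points (here 190/290/320/400 nm, following the text) vary somewhat between sources.

```python
# Classify a wavelength into the UV sub-bands described in the text
# (boundaries in nm; cut points are approximate and source-dependent).
def uv_band(wavelength_nm: float) -> str:
    if wavelength_nm < 10 or wavelength_nm >= 400:
        return "outside UV"
    if wavelength_nm < 190:
        return "VUV"        # vacuum UV: air itself absorbs
    if wavelength_nm < 290:
        return "UV-C/DUV"   # standard optical glass absorbs
    if wavelength_nm < 320:
        return "UV-B"
    return "UV-A"

band_365 = uv_band(365.0)   # a common UV LED wavelength: UV-A
```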

Because UV wavelengths are shorter and easily scattered, some of the most common applications for true UV imaging include detecting scratches and digs on polished or highly specular surfaces. When dark field illumination is used to enhance the scattering effect, scratches that aren’t apparent in a visible


FIGURE 5.8 Example of UV imaging (MIDOPT).

image can become easier to image in UV. UV photolithography processes are used in the production of computer chips. Patterns are optically imaged onto a silicon wafer covered with a film of UV light-sensitive material (photoresist). The photoresist is then further processed to create the actual electronic circuits on the silicon.

Another application involving reflected-UV light is detecting surface contamination. Since UV light tends to be absorbed by organic materials, traces of oil or grease can sometimes be detected on surfaces, particularly in the DUV. Petroleum-based products can also appear differently in UV light, and this can be useful in identifying the nature and source of oil spills. It is sometimes possible under UV illumination to distinguish different paints or finishes if repairs have been made to antiques or other valuable objects (MIDOPT).

Ultraviolet (UV) Imaging Sensors

As interest in digital reflected-UV imaging has grown, a small number of camera manufacturers and lens designers have specifically addressed the demands of this market and have produced off-the-shelf UV cameras based on silicon CCDs. UV sensitivity is often overlooked by CCD and CMOS camera manufacturers, who usually publish spectral response curves for their sensors that stop at 400nm, the edge of human vision and the beginning of the near-UV region of the spectrum. This omission has sometimes made it difficult for scientists and engineers to select a suitable camera for reflected-UV imaging applications. In fact, many commercial visible-light CCD and CMOS cameras have UV-blocking layers incorporated into the optical path to prevent undesirable chromatic aberrations in the image, making them virtually useless for UV imaging.

Most of the emerging breed of UV-specific cameras are based on thinned CCD arrays and are packaged for the machine vision and industrial inspection market. The thinning process removes silicon material that prevents UV radiation from reaching the active layer in the detectors. This thinning process shortens the cut-on wavelength of the sensor to as low as ~200nm. Some newer CCDs are being built with ITO (indium tin oxide) instead of polysilicon gates. The ITO material is more transparent in the near-UV band and allows shorter wavelengths of light to be detected, as with thinning, but with a lower manufacturing cost.

As mentioned earlier, silicon CCD cameras are more sensitive in the visible and near-IR bands than in the UV band, even with thinning, and the UV imaging system designer must carefully control the spectrum of light that reaches the sensor. Camera filters which pass near-UV light while blocking visible and near-IR are always required, unless the illumination itself is purely UV.

Another method for enhancing a silicon sensor’s UV response is a wave-shifting coating such as Metachrome. These fluorescent materials are applied directly onto the CCD surface during manufacture. Ultraviolet A (UVA) light that would normally be absorbed in the silicon before generating carriers is converted to visible light, which is then easily detected by the CCD.

UV image converters and image intensifiers are also commercially available and are especially useful when the UV signal level is very low. These devices convert UV photons into electrons using an electrically charged photocathode on the front of an evacuated image tube. The photoelectrons are converted to visible light via a green phosphor on the tube’s anode and can then be directly viewed by the operator. Relay lenses can be used to reimage the viewing screen onto a CCD or CMOS camera for applications demanding a video signal (Richards, 2006a).

Ultraviolet (UV) Imaging Applications

As with IR imaging, the applications for reflected-UV imaging are diverse. As more UV cameras become available to commercial customers, the list of practical applications will certainly grow. One way to look at UV imaging is that it is all about absorption. Many common materials (especially those based on organic molecules) strongly absorb near-UV light due to electronic transitions. Changes or modifications to the surface of the material can affect this UV absorption, making the changes easier to detect. In contrast, near-IR imaging applications are often all about transmission. Many materials that are opaque in the visible band are actually quite transparent in the near-IR band. These materials include ink, paint, fabric dye, silicon wafers, thin paper, and plastic. Many practical near-IR applications, thus, require that something be rendered transparent. The near-UV and near-IR bands seem to be complementary in nature in terms of imaging applications.

Some of the most interesting applications for reflected-UV imaging are the following (Richards, 2006a):

  • • Imaging surface texture not apparent to visible-light imaging.
  • • Detecting changes in painted or coated surfaces due to variances in UV reflectance.
  • • Imaging UV lasers, LEDs, and other UV light sources.
  • • Detecting sun damage, bite marks, and bruises on skin.
  • • Evaluating the efficacy of sunblock and the uniformity of its application to skin.
  • • Detecting trace evidence not apparent to fluorescence imaging, IR, or visible light.
  • • Detecting both natural and manmade white camouflage in snowy conditions.
  • • Visualizing markings on flowers and butterflies that are only visible in the near-UV band.
  • • Visualizing repairs, cracks, and damage to teeth.
Ultraviolet (UV) Imaging Opens New Applications

Industrial machine vision has traditionally centered on visible-light imaging cameras and visible-light illumination. The simplest machine-vision applications are replacements for human workers, who see in the portion of the electromagnetic spectrum in wavelengths between 400 and 750 nm. A human-replacement machine-vision system might consist of a monochrome video camera combined with a software algorithm to detect the presence or absence of the cap on a tube of toothpaste and the degree to which it has been tightened. For a system like this, the lighting can be provided by simple tungsten lamps.
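The cap-presence check described above can be sketched as a simple region-of-interest intensity test. The ROI location, threshold, and synthetic frames below are assumptions for illustration, not a production algorithm.

```python
import numpy as np

# Toy human-replacement check: is the toothpaste cap present? Compare the
# mean intensity of a fixed region of interest (ROI) against a threshold.
def cap_present(frame: np.ndarray,
                roi=(slice(0, 4), slice(0, 4)),
                threshold: float = 100.0) -> bool:
    """True if the ROI is dark enough to contain an (opaque) cap."""
    return bool(frame[roi].mean() < threshold)

bright = np.full((8, 8), 200.0)   # empty tube mouth: bright background
capped = bright.copy()
capped[0:4, 0:4] = 30.0           # dark cap occupies the ROI
```

Real systems add calibration, lighting control, and pose tolerance, but the core decision in such go/no-go inspections is often this simple.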

The machine-vision industry has done very well with this sort of application, using both color and monochrome video cameras that closely match the spectral response of the human eye. Yet there is a great deal of light in the electromagnetic spectrum that the human eye cannot see. This invisible light often carries significant amounts of interesting information (Richards, 2006b).

Reflected-UV Imaging

UV imaging has begun to emerge as an inspection modality for some industrial processes. Relative to X-ray or IR machine vision, UV machine vision is still in its infancy, but the field is growing, as commercial UV hardware drops in price and increases in diversity. New applications are emerging, as more users integrate off-the-shelf UV cameras into production environments and experiment with them.

UV light interacts with materials in a unique way, enabling features and characteristics to be observed that are difficult to detect by other methods. UV light tends to be strongly absorbed by many materials, making it possible to visualize the surface topology of an object without the light penetrating into the interior. Because of its short wavelength, it tends to be scattered by surface features that are not apparent at longer wavelengths. Thus, even smaller features can be resolved or detected via UV light scattered off them.

It is important to distinguish between reflected-UV imaging and UV-fluorescence imaging. They are different techniques with different characteristics, but because they both involve UV light, they are often confused. Reflected-UV imaging starts with the illumination of a surface with UV light. The UV light is reflected or scattered and is then imaged by a camera sensitive in the UV band. The wavelength of the UV light is not shifted during the process.

UV-fluorescence imaging also starts with active illumination of a surface with UV light, but the detected signal is in the visible or IR band. The fluorescent material absorbs the UV excitation, then reradiates at a longer wavelength. The emitted fluorescence is not reflected light; it tends to be a diffuse emission.

The UV band is broad, spanning the range of wavelengths from the start of the X-ray band at 10 nm to the edge of human visual sensitivity at 400 nm. There are two main classes of industrial UV imaging applications, involving two different bands of the UV spectrum, and the cameras, optics, and illumination must be selected accordingly. As discussed above, the band of the spectrum between 300 and 400 nm is commonly known as the near-UV band. It is divided into the UV-A and UV-B sub-bands. Below 300 nm, standard optical glass becomes very absorbing. This region of the spectrum is known as the DUV band, or alternatively, the UV-C band. Machine-vision systems in the DUV band generally operate around 250-280 nm.

One of the most common applications for reflected-UV imaging is the detection of scratches in a surface. The shorter UV wavelengths tend to scatter more strongly off surface features than the visible or near-IR bands. So, for example, scratches not apparent in a visible image may be seen only with great difficulty, and only when visible light strikes the surface at a very oblique angle. In contrast, in a UV image taken at 365 nm, the scratches can be seen quite easily (see Figure 5.9) (Richards, 2006b).

As a result, UV imaging enables automated systems to detect scratches and digs on optical surfaces such as lenses or windows. In the semiconductor industry, photolithography requires inspection of photomasks with very fine lines and features to find defects that may be submicron in size. Confocal microscopes operating in the DUV band at 248 or 266 nm (laser wavelengths that can be generated by krypton fluoride and frequency-quadrupled Nd:YAG lasers, respectively) can be used to image these features with much greater clarity than in the visible band and can be used to find tiny defects in the silicon wafer starting material. Detection of these defects early in the production process can greatly improve yields and reduce waste.

Other reflected-UV applications involve the detection of small amounts of surface contamination. Since UV light tends to be absorbed by organic materials, traces of oil or grease are sometimes detectable on many surfaces, particularly in the DUV band (see Figure 5.10). It is also possible to distinguish new paint from old in some situations, even when the two types of painted surfaces look identical in the visible band (see Figure 5.11) (Richards, 2006b).


FIGURE 5.9 CD jewel case is imaged in both visible (a) and 365-nm UV lighting (b). Scratches are not apparent in the visible image but are clear in the UV image (Richards, 2006b).


FIGURE 5.10 Images of a metal cabinet show brown painted trim containing an oil stain. The stain shows as a darkened area using 365-nm UV imaging (b) but is not apparent in the visible image (a) (Richards, 2006b).


FIGURE 5.11 (a) With UV imaging, new paint can be distinguished from old in some situations, such as a white Toyota Prius that has had the driver’s side fender replaced after an accident. (b) The new paint is relatively unoxidized and has a UV-inhibiting clearcoat, so the fender looks darker in the 320-400-nm UV band relative to the older paint on the rest of the car. UV machine vision can ensure that clearcoat is uniformly applied (Richards, 2006b).

Reflected-UV Imaging Applications in Forensics

It is well known that UV light has properties that make it a very powerful investigative tool for forensics, particularly because it makes many substances fluoresce. Less well known is the power of reflected-UV imaging to reveal hidden evidence. It does this for several reasons (Richards, 2010):

  • • Absorption: UV light is highly absorbed by many commonly encountered organic materials, yet is reflected by many inorganic materials like stone and metal. If these organic materials are on a surface with higher UV reflectance, the substances will often stand out more strongly than in visible-light or near-IR images. The reverse is true as well—traces of inorganic materials like salt stand out on a dark organic surface like a wooden table.
  • • Lack of penetration: UV light does not penetrate even very thin layers of materials, making surface topology more apparent, since normally translucent surfaces appear opaque. The high energy of UV photons makes them interact strongly with the electrons in atoms and molecules. Many materials look very dark when imaged with UV light.
  • • Highly scattered UV waves: UV light waves have a short wavelength, so they are scattered much more readily by small surface imperfections on a smooth surface than either visible or near-IR light. Scratches and dust are much more apparent; therefore, the optics industry uses UV imaging to inspect lens surfaces, for example. Some of the texture imaging can be accomplished by raking-illuminated visible-light photography, though UV has advantages over raking light.

These three properties of the interaction between materials and UV light make reflected-UV imaging very useful for certain applications in forensic imaging. Three primary applications that are well documented in the literature are (Richards, 2010):

  • • Imaging of bite marks and other pattern injuries on skin.
  • • Imaging of shoeprints on surfaces where visible-light contrast is low.
  • • Imaging of latent fingerprints.

The latter application requires an imaging system that works in the shortwave UV band, unless the fingerprints are made while the fingers are coated with a substance that absorbs near-UV light, like sunscreen.

Two other forensic applications that are less well documented are the following (Richards, 2010):

  • • Imaging traces of certain substances on certain classes of surfaces.
  • • Imaging changes in surface texture on smooth surfaces caused by physical contact.

Forensic investigators tend to image what they know is already there, because of the difficulties inherent in reflected-UV imaging with film. Thus, these last two applications have historically received very little attention, because the presence of the anomaly may only be apparent in the UV band. Unless the photographer has a means of scanning the scene with a UV imaging scope or video camera, he or she might never know to photograph in a certain area of a crime scene with reflected UV to discover invisible forensic evidence.

In some cases, traces of materials and changes in surface texture can be imaged with raking light illumination or by imaging the surface at a highly oblique angle. This is not always possible due to geometric constraints. In some cases, UV imaging works better than raking light imaging, especially in situations where the surface anomaly is subtle (Richards, 2010).

  • [1] Georeferencing: transformation of the point cloud (usually from WGS84) to the local coordinate system.
  • Noise removal: filtration of points not reflected from the surface (e.g., reflected from a bird or below the ground) because of multipath reflection or faulty time measurement.
  • Coarse classification: classification of ground points, above-ground points, water, etc.; also point density adjustment and interpolation.
  • Modeling: DSM/digital elevation model (DEM) generation, feature extraction (segmentation and/or classification).