Laser Detection And Ranging System Monitoring

Radio detection and ranging (RADAR) is the process of transmitting, receiving, detecting and processing an electromagnetic wave reflected from a target. RADAR techniques have expanded into many modern application areas and have also moved into the optical portion of the electromagnetic spectrum. Using lasers as optical transmission sources, a specific category of optical RADAR systems, called laser radar or laser detection and ranging (LADAR), has been proposed (Al-Temeemy and Spencer, 2014). LADAR systems are three-dimensional (3D) spatial measurement systems, and their power lies in the inherent 3D nature of the data they produce (Al-Temeemy and Spencer, 2015a). They are an attractive alternative to RADAR systems because, through the use of optical laser wavelengths (which are shorter than RADAR wavelengths), they provide very high-resolution three-dimensional images. In addition, the velocity of light allows LADAR systems to take numerous measurements per second (Al-Temeemy, 2017). LADAR images are created by scanning a scene with laser beams; the scanning angles and the return time of these beams are used to calculate a 3D range image, which in turn represents the spatial location of the intersection of the laser beam with the scanned scene (Al-Temeemy, 2017). LADAR systems have diverse applications (Al-Temeemy and Spencer, 2015a), including quality control, surveying, mapping, terrain characterisation, safety monitoring, disaster reconnaissance, etc. (Al-Temeemy and Spencer, 2015b). A LADAR prototype system with its laser ranging and scanning unit is shown in Figure 15.1 (Al-Temeemy and Spencer, 2011).
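The range calculation itself is straightforward: the round-trip time of the laser pulse gives the range, and the scan angles orient that range in space. The short sketch below is an illustrative assumption rather than any particular system's implementation, with hypothetical function and parameter names.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def ladar_point(round_trip_time_s, azimuth_rad, elevation_rad):
    """Return the (x, y, z) intersection of the laser beam with the scene."""
    r = 0.5 * C * round_trip_time_s  # one-way range from the round-trip time
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return x, y, z

# Example: a ~66.7 ns round trip at boresight corresponds to roughly 10 m straight ahead.
print(ladar_point(66.7e-9, 0.0, 0.0))
```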


FIGURE 15.1 LADAR prototype scanning model car.

Different methods have been developed for processing the incoming information from LADAR monitoring systems. Some are based on shape histograms, which describe LADAR images as histograms of point fractions. These methods generally suffer in the presence of noise and require a high number of features, which increases the recognition time and requires more memory to store the features (Al-Temeemy and Spencer, 2014).

Others use robust descriptors for LADAR images, such as moment invariants; their general disadvantage is noise sensitivity, which limits and restricts their applications (Al-Temeemy and Spencer, 2015a). Furthermore, surface properties such as normals and regional shape have been used with LADAR images. Surface normals are sensitive to noise, while the regional shape extraction approach is robust to noise but computationally expensive, requires a large amount of space to store the features and has low discriminating capability (Al-Temeemy and Spencer, 2015b).

Advanced chromatic methods have been used because of their simplicity, high discrimination capability, noise insensitivity and affine (rotation, translation and scaling) invariance. These chromatic methods are based on extracting a robust feature that is relatively unaffected by the noise which usually disturbs LADAR measurements. This feature is the silhouette image of the 3D LADAR data from its perspective view (Al-Temeemy and Spencer, 2014, 2015a,b). As an example, Figure 15.2 illustrates this feature extraction. Figure 15.2a shows an image of a fighter jet being addressed by a LADAR system. Figure 15.2b shows the point cloud of this fighter from a rotated view, and Figure 15.2c shows it from the LADAR perspective. Figure 15.2d shows the resultant silhouette of the fighter. While the range data (Figure 15.2b) show significant noise, the LADAR perspective view (Figure 15.2c) and its resultant silhouette (Figure 15.2d) are relatively smooth. This smoothness comes from the high pointing accuracy of the scanning unit in comparison to the distortion of the range measurement (Al-Temeemy and Spencer, 2015b).
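A minimal sketch of the silhouette idea is given below, under the assumption that each valid LADAR return carries its integer (row, column) scan indices; the silhouette is then simply the occupancy of the scan grid as seen from the sensor, so it depends on the pointing angles rather than on the noisy range values. The names used are illustrative only, not the authors' implementation.

```python
import numpy as np

def silhouette_from_scan(rows, cols, n_rows, n_cols):
    """rows/cols: integer scan indices of the valid returns; returns a binary silhouette image."""
    sil = np.zeros((n_rows, n_cols), dtype=np.uint8)
    sil[rows, cols] = 1
    return sil

# Example: three returns down the middle column of a 5x5 scan grid.
print(silhouette_from_scan(np.array([1, 2, 3]), np.array([2, 2, 2]), 5, 5))
```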

The resultant silhouette image is then processed by extracting the normalised projections (which are rotation invariant) using the Radon transform and then processing these projections with a special type of chromatic processor called shift- and scale-invariant spatial chromatic processors. The processors in this configuration adapt their locations and widths to these projections, which produces chromatic values invariant to rotation, translation and scale effects (Al-Temeemy and Spencer, 2010, 2014, 2015a,b), as shown in Figures 14.3 and 14.4 (Chapter 14).
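The sketch below is a hedged illustration of this chain rather than the published processors: a simple axis sum of the silhouette stands in for one Radon-transform projection, and three Gaussian-shaped processors (R, G, B) are centred and scaled over the occupied span of that projection before their outputs are converted to hue, lightness and saturation. Function names and the exact processor profiles are assumptions.

```python
import colorsys
import numpy as np

def radon_like_projection(silhouette, axis=0):
    """Axis sum of the binary silhouette, standing in for one Radon projection."""
    return silhouette.sum(axis=axis).astype(float)

def adaptive_chromatic_values(p):
    """Shift- and scale-invariant chromatic (hue, lightness, saturation) of a 1D projection."""
    nz = np.flatnonzero(p)                    # assumes a non-empty projection
    lo, hi = nz[0], nz[-1]
    centres = np.linspace(lo, hi, 3)          # R, G, B processor centres track the occupied span
    width = max((hi - lo) / 2.0, 1.0)         # processor widths scale with that span
    x = np.arange(p.size)
    R, G, B = (float(np.sum(p * np.exp(-0.5 * ((x - c) / width) ** 2))) for c in centres)
    m = max(R, G, B)
    return colorsys.rgb_to_hls(R / m, G / m, B / m)
```

Because the processor locations and widths are derived from the projection itself, shifting or rescaling the silhouette leaves the resulting chromatic values essentially unchanged, which is the invariance property exploited here.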


FIGURE 15.2 Silhouette image generation from distorted LADAR data, (a) Fighter jet model, (b) scan data from rotated view, (c) perspective view, (d) resultant silhouette image.

Methods for normalising images of a displaced, magnified and rotated object have been described in Chapter 14 (Figures 14.3 and 14.4). These advanced chromatic methods have been evaluated with simulated LADAR data using special software called a LADAR simulator (Al-Temeemy, 2017). This simulator models each stage from the laser source to the data generation and so is able to generate simulated LADAR data by scanning 3D computer-aided design (CAD) models with different artefacts such as noise, resolution, view, scaling, rotation and translation (Al-Temeemy and Spencer, 2015b).
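By way of illustration only (the simulator's actual interface is not reproduced here), the kind of artefact injection involved might look like the following, where an ideal point cloud scanned from a CAD model is scaled, rotated, translated and given additive range noise along each line of sight so that descriptor invariance can be tested. Parameter names are assumptions.

```python
import numpy as np

def perturb_scan(points, range_noise_std=0.02, scale=1.0, yaw_rad=0.0, shift=(0.0, 0.0, 0.0)):
    """Apply scaling, rotation about z, translation and additive range noise to an Nx3 point cloud."""
    rng = np.random.default_rng(0)
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    p = scale * points @ Rz.T + np.asarray(shift)
    # Range noise acts along each beam's line of sight (here taken radially from the sensor origin).
    r = np.linalg.norm(p, axis=1, keepdims=True)
    p = p + p / np.maximum(r, 1e-9) * rng.normal(0.0, range_noise_std, (len(p), 1))
    return p
```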

The simulation results show high discrimination capability for the advanced chromatic methods over the moment invariants, which were also used to benchmark the results. The results also show constant performance under scaling, rotation and noise effects, which demonstrates the effectiveness of these methods and their robustness to noise (Al-Temeemy and Spencer, 2015b). Advanced chromatic methods were also evaluated with real LADAR data. The experimental results show similar general behaviour to that obtained with the simulated data, which demonstrates the ability of the approach to process real LADAR data and to provide recognition rates higher than those of traditional techniques such as the invariant moment descriptor (Al-Temeemy and Spencer, 2015b).

These advanced chromatic methods are simple, and their discrimination capability can be easily extended by either increasing the number of spatial chromatic processors or using additional normalised projections. The small number of chromatic features for these methods means that they require less storage space and processing time (Al-Temeemy and Spencer, 2010, 2014, 2015a,b).

Multispectral Domiciliary Healthcare Monitoring

There is an increasing need for independent living at home by the elderly population (Al-Temeemy, 2018, 2019). To alleviate the operational and financial difficulties which result from this and to provide comfortable living conditions, new monitoring systems are needed. Several methods have been proposed. One approach has been a system using a combination of visible and infrared light for chromatically addressing a living accommodation (Jones et al., 2008; Al-Temeemy, 2018, 2019). The approach is versatile and has evolved in two major ways. The first was to produce a cost-effective and convenient-to-install wireless system, whilst the second sought to provide more sophisticated adaptation to overcome the impact of excessively noisy environments and to improve the activity recognition of vulnerable people.

The basic system can be used in one of two modes. The first mode is based upon monitoring changes in the optical chromaticity of various locations in a room and combining this with changes in infrared signals. The second mode is a more sophisticated adaptation based upon processing multispectral data (visible and infrared parts of the spectrum) spatially and temporally to identify the human silhouette, which provides more detailed monitoring.

Wireless Chromatic System

The basis of the monitoring system was a combination of polychromatic visible light with three passive infrared (PIR) sensors for chromatic monitoring of an enclosed living environment. The optical monitor produced a two-dimensional optical image of the living environment. In parallel, this environment was addressed by three infrared signals which covered three overlapping floor areas, producing seven distinguishable areas. Such systems have been successfully installed and tested in care homes for the elderly in the United Kingdom (Driver and Busfield, 2005).
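The seven distinguishable areas follow directly from the three overlapping infrared coverage zones: each non-empty on/off combination of the three PIR sensors labels a distinct region (2^3 - 1 = 7). The trivial sketch below simply enumerates these combinations.

```python
from itertools import product

# Every non-empty on/off combination of the three PIR sensors labels a distinct floor region.
zones = [combo for combo in product((0, 1), repeat=3) if any(combo)]
for code in zones:
    print("PIR pattern", code)
print(len(zones), "distinguishable areas")  # prints 7
```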

The recent system hardware setup consists of a sensing head and a personal computer that analyses the incoming sensing head information from the visible and infrared parts of the spectrum (Smith, 2019). The sensing head comprises a visible-band image sensor, three passive infrared sensors and a microcontroller (for collecting and transferring the monitoring data wirelessly), together with the required electronics (Al-Temeemy, 2019). The sensing head of the monitoring system, with its internal structure, is shown in Figure 15.3.

Recent developments of the system involved the optical output signals being processed locally with optical chromaticity, whilst the PIR output signals were also processed locally but via spatial chromaticity. The processing was undertaken within the small monitoring unit shown in Figure 15.3. The optical and spatial chromaticities were then processed in combination to provide information not only about movement but also particular location conditions (e.g., closed/opened doors, television on/off etc.) and human activities. The values of the basic chromatic parameters were transferred wirelessly to a central control hub, the procedure having reduced the amount of data needing to be transferred compared with the previous hardwired system (Jones et al., 2008; Smith, 2019). As a result of the chromatic data compression, a central processing unit was developed which was capable of supporting 16 sensor head units at different ceiling locations in different rooms, all operating in parallel and continuously 24/7.
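As a hedged illustration of the chromatic data compression (not the deployed firmware), the sketch below reduces a whole image region to three chromatic parameters via the standard RGB-to-HLS transformation, so that only three numbers per region need to be transferred wirelessly instead of the raw frame.

```python
import colorsys
import numpy as np

def region_chromaticity(region_rgb):
    """region_rgb: HxWx3 array with values in [0, 255] -> (hue, lightness, saturation)."""
    r, g, b = region_rgb.reshape(-1, 3).mean(axis=0) / 255.0
    return colorsys.rgb_to_hls(r, g, b)

# Example: a 4x4 mostly-red patch collapses to just three numbers for transmission.
patch = np.zeros((4, 4, 3))
patch[..., 0] = 200.0
print(region_chromaticity(patch))
```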


FIGURE 15.3 Sensing head and internal structure of the monitoring system.

Advanced Chromatic Information Processing

The advanced monitoring of vulnerable people involves using a chromatic monitoring system (Section 5.2.3.1) to acquire optical and infrared signals, followed by the use of two new monitoring stages to recognise people’s activity. The first stage identifies the silhouette image of the person to be monitored, while the second stage processes this image to recognise the living activity. Extracting and identifying the silhouette image is achieved by using a foreground detector such as ViBe (Al-Temeemy, 2018). Identifying silhouettes in noisy environments with illumination changes and dynamic backgrounds is difficult (Al-Temeemy, 2019). However, with the present system, both sensor types (visible and IR) respond when the human moves across a specific location such as that shown in Figure 15.4 (top). Thus, the developed system generates the spatial-temporal probability of detection from the PIR sensors and then correlates this with the foreground output generated from the visible sensor output (Figure 15.4 [top]) using the ViBe detector. The result of the correlation is then


FIGURE 15.4 Human silhouette identification with a combination of optical and IR sensing.

processed by spatial chromatic processors (Figure 15.4 [bottom left]) to locate the silhouette region that corresponds to the monitored person and extract it from the other regions (Figure 15.4 [bottom]).
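The fusion step can be pictured with the following hedged sketch (not the published algorithm): the foreground mask from a ViBe-style detector is weighted by a spatial-temporal probability map derived from whichever PIR zone fired, and only the foreground pixels supported by that infrared evidence are retained as the silhouette of the monitored person. The names and the thresholding rule are assumptions.

```python
import numpy as np

def fuse_foreground_with_pir(foreground_mask, pir_probability_map, min_support=0.5):
    """foreground_mask, pir_probability_map: HxW arrays; returns the person's binary silhouette."""
    support = foreground_mask.astype(float) * pir_probability_map
    return (support >= min_support).astype(np.uint8)
```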

The identified silhouette image is then processed by different types of invariant spatial chromatic processors to recognise the living activity. Experimental data sets have been used to evaluate the performance of the chromatic processing and silhouette identification methods. The results show that the use of chromatic methodology can efficiently deal with events that disturb the monitoring systems. They also show better performance in comparison to traditional methods when describing daily living activity (Al-Temeemy, 2018, 2019).

Chromatic Monitoring of Groups of Zebrafish (Danio rerio)

Fish are finding increased use as model species within a variety of biomedical and neurobiological contexts. Zebrafish (Danio rerio) are estimated to account for 50% of the total number of fish used (UK Home Office Report, 2019). The desirability of zebrafish in experimentation has recently increased due to their rapid development, reproductive success and high genetic homology to humans (80%-85%) (Thomson et al., 2019). Researchers are required to prevent negative states such as pain when using experimental animals, in order to optimise their welfare. This requires a convenient method to detect abnormal behaviour, which is difficult to achieve because identifying abnormal behaviour in one or a few fish within a larger group is challenging, and thus only information on individual zebrafish exists (Thomson et al., 2019).

Analysis of video frames from an electronic camera has provided a means of developing intelligent chromatic software (the chromatic fish analyser; CFA) to monitor the overall average behaviour in a group of zebrafish, some with their fins clipped and subjected to pain-relieving drugs. Chromatic fish analysis involved addressing a tank containing the fish with an inclined video camera, as shown in Figure 15.5.

The CFA involves calculating the activity of the fish from the video frames, chromatically analysing the activity images and quantifying the results using chromatic maps (Figure 15.6).

The three analysis procedures were as follows:

• A typical video image of zebrafish in a tank is shown in Figure 15.6. An image of the fish activity is obtained from such input video frames by finding the absolute difference between successive recorded video frames and then applying a hard-thresholding technique (based on the selected threshold value) to enhance the resultant difference (Thomson et al., 2019).


FIGURE 15.5 Chromatic monitoring of zebrafish tanks using USB video camera.


FIGURE 15.6 Block diagram for chromatic monitoring of zebrafish.

• Chromatic addressing involves applying two sets of spatial chromatic processors to each resultant activity image (one deployed vertically and the other horizontally; Figure 15.6). The spatial response of these processors is chosen to provide uniform sensitivity across the entire video frame (Al-Temeemy, 2018, 2019).

• The processors’ outputs for each image are transformed into horizontal and vertical chromatic parameters of hue, saturation and lightness, which reflect the behaviour of the group of zebrafish for that image. Hue indicates the dominant location and height of the group, saturation the spread of the group and lightness the activity level (Thomson et al., 2019). A minimal sketch of these three processing steps is given after this list.
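The sketch below is an assumed, compact rendering of these three steps rather than the CFA source code: successive greyscale frames are differenced and hard-thresholded into an activity image, two orthogonal sets of triangular spatial processors (whose responses sum to a uniform sensitivity along each axis) address that image, and the processor outputs are converted to hue, lightness and saturation for the horizontal and vertical dimensions. All function names and processor profiles are illustrative assumptions.

```python
import colorsys
import numpy as np

def activity_image(frame_prev, frame_next, threshold=25):
    """Absolute difference of successive greyscale frames, hard-thresholded to a binary activity image."""
    diff = np.abs(frame_next.astype(int) - frame_prev.astype(int))
    return (diff > threshold).astype(float)

def axis_chromaticity(profile):
    """Address one axis profile with three triangular processors (R, G, B) whose
    responses sum to one everywhere (uniform sensitivity), then return (hue, lightness, saturation)."""
    x = np.linspace(0.0, 1.0, profile.size)
    R = np.sum(profile * np.clip(1.0 - 2.0 * x, 0.0, 1.0))                # start of the axis
    G = np.sum(profile * np.clip(1.0 - np.abs(2.0 * x - 1.0), 0.0, 1.0))  # centre of the axis
    B = np.sum(profile * np.clip(2.0 * x - 1.0, 0.0, 1.0))                # end of the axis
    m = max(R, G, B, 1e-9)
    return colorsys.rgb_to_hls(R / m, G / m, B / m)

def cfa_parameters(frame_prev, frame_next):
    """Horizontal and vertical (hue, lightness, saturation) describing the group for one frame pair."""
    act = activity_image(frame_prev, frame_next)
    horizontal = axis_chromaticity(act.sum(axis=0))  # activity profile along the horizontal axis
    vertical = axis_chromaticity(act.sum(axis=1))    # activity profile along the vertical axis
    return horizontal, vertical
```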

The 3D representations of the chromatic parameters for each image frame are shown in Figure 15.7 for H(vertical): H(horizontal): S as clouds of points.


FIGURE 15.7 Example of 3D representation of chromatic dominant location H along vertical (v) versus dominant location along horizontal dimension (h) versus overall spread S.

The chromatic parameter results show that changes in the vertical hue indicated that all fin-clipped zebrafish were closer to the bottom of the tank relative to pretreatment, and their position remained lower for the rest of the experiment; this was not observed in control groups and was alleviated in those zebrafish treated with lidocaine. Saturation (clustering) and lightness alterations indicated that zebrafish groups had reduced activity after receiving the fin clip. Lidocaine was effective in preventing the behavioural changes after treatment. These results demonstrate that the chromatic fish analyser is powerful enough to identify significant changes in behaviour taken directly from monitoring data (Thomson et al., 2019).

Overview and Summary

Examples of the deployment of multidimensional monitoring in three-dimensional space have been considered. Three different applications are described: LADAR-based monitoring, care of the elderly monitoring and fish tank monitoring. In each case, the chromatic approach has provided an efficient, cost-effective and convenient-to-use approach with good levels of performance compared with other monitoring methods. The method is flexible and has potential for extrapolation to other space domain monitoring applications.

References

Al-Temeemy, A. A. (2017) The development of a 3D LADAR simulator based on a fast target impulse response generation approach. 3D Research, Springer, 8, 31. ISSN 2092-6731, doi: 10.1007/s13319-017-0142-y.

Al-Temeemy, A. A. (2018) Human region segmentation and description methods for domiciliary healthcare monitoring using chromatic methodology. J Electron Imaging, SPIE, 27(2), 023005, doi: 10.1117/1.JEI.27.2.023005.

Al-Temeemy, A. A. (2019) Multispectral imaging: Monitoring vulnerable people. Optik - International Journal for Light and Electron Optics, Elsevier, 180, 469-483, doi: 10.1016/j.ijleo.2018.11.042.

Al-Temeemy, A. A. and Spencer, J. W. (2010) Invariant Spatial Chromatic Processors for Region Image Description. IEEE International Conference on Imaging Systems and Techniques (IST), Thessaloniki, Greece, 421-425.

Al-Temeemy, A. A. and Spencer, J. W. (2011) Three-Dimensional LADAR Imaging System using AR-4000LV Laser Range-Finder. Proceedings of SPIE - Optical Design and Engineering IV, Marseille, France, 8167, 816721.

Al-Temeemy, A. A. and Spencer, J. W. (2014) Laser radar invariant spatial chromatic image descriptor. SPIE Optical Engineering, 53(12), 123109.

Al-Temeemy, A. A. and Spencer, J. W. (2015a) Invariant chromatic descriptor for LADAR data processing. Machine Vision and Applications, Springer, 26(5), doi: 10.1007/s00138-015-0675-0.

Al-Temeemy, A. A. and Spencer, J. W. (2015b) Chromatic methodology for laser detection and ranging (LADAR) image description. Optik - International Journal for Light and Electron Optics, Elsevier, 126(23), 3894-3900.

Driver, S. and Busfield, R. (2005) Evaluation Project Report: Merton Intelligent Monitoring System (MIMS). Roehampton University, UK.

Home Office Report (2019) Annual Statistics of Scientific Procedures on Living Animals, Great Britain 2018. ISBN 978-1-5286-1336-1.

Jones, G. R., Deakin, A. G., and Spencer, J. W. (2008) Chromatic Monitoring of Complex Conditions. CRC Press.

Smith, D. H. (2019) CIMS Internal Report.

Thomson, J. S., Al-Temeemy, A. A., Isted, H., Spencer, J. W., and Sneddon, L. U. (2019) Assessment of behaviour in groups of zebrafish (Danio rerio) using an intelligent software monitoring tool, the chromatic fish analyser. J Neurosci Methods, Elsevier, 328, 108433, doi: 10.1016/j.jneumeth.2019.108433.

 