Autonomous Inspection and Maintenance with Artificial Intelligence Infiltration

Artificial Intelligence Techniques Used in AVs

10.1.1 Artificial Intelligence and Autonomous Vehicles

Artificial intelligence (AI) has taken the automotive industry by storm, driving the development of Level-4 and Level-5 autonomous vehicles (AVs). Why has AI become so popular now, even though it has been around since the 1950s? Simply put, the reason for this explosion of AI is the enormous amount of data available today. With the help of connected devices and services, we are able to collect data in every industry, thus fueling the AI revolution. Nvidia unveiled its first AI computer in October 2017 to enable deep learning, computer vision, and parallel computing algorithms. AI has become an essential component of automated driving technology, and it is important to know how it works in autonomous and connected vehicles (Gadam, 2018).

What Is Artificial Intelligence?

John McCarthy, a computer scientist, coined the term “Artificial Intelligence” in 1955. AI is defined as the ability of a computer program or machine to think, learn, and make decisions. In general use, the term means a machine which mimics human cognition. With AI, we are getting computer programs and machines to do what humans do. We are feeding these programs and machines with a massive amount of data that are analyzed and processed to ultimately think logically and perform human actions. The process of automating repetitive human tasks is just the tip of the AI iceberg; medical diagnostics equipment and autonomous cars have implemented AI with the objective of saving human lives (Gadam, 2018).

The roots of AI go back to classical philosophers and their attempts to map and describe how humans process information and manipulate symbols in their environment. As technology continued to advance, engineers and philosophers were able to apply programmable digital computers to complex mathematical reasoning. The ultimate goal of AI is to get machines to do what humans do. While computer programs can develop patterns of thought similar to humans', they require massive amounts of data that are analyzed and processed through sophisticated algorithms. The process of analyzing data helps AI mimic a logical thought process and perform human actions. AI engineers look to automate repetitive human tasks, and this ambitious goal can be applied to many industries across the world (Giarratana, 2018).

How Does AI Work in Autonomous Vehicles?

AI is a popular buzzword today, but how does it actually work in AVs?

Let us first look at the human perspective of driving a car with the use of sensory functions, such as vision and sound, to watch the road and the other cars on the road. When we stop at a red light or wait for a pedestrian to cross the road, we are using our memory to make quick decisions. The years of driving experience habituate us to look for the little things that we encounter often on the roads—it could be a better route to the office or just a big bump in the road.

We are building AVs that drive themselves, but we want them to drive like human drivers do. That means we need to provide these vehicles with the sensory functions, cognitive functions (memory, logical thinking, decision-making, and learning) and executive capabilities that humans use to drive vehicles. The automotive industry is continuously evolving to achieve exactly this.

According to Gartner, 250 million cars will soon be connected with each other and the infrastructure around them through various V2X (vehicle-to-everything communication) systems. As the amount of information being fed into IVI (in-vehicle infotainment) units or telematics systems grows, vehicles will be able to capture and share not only internal system status and location data but also the changes in their surroundings, all in real time. AVs are being fitted with cameras, sensors, and communication systems to enable them to generate massive amounts of data which, when applied with AI, enable them to see, hear, think, and make decisions just like human drivers do (Gadam, 2018).
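The sense-think-act loop described above can be sketched in a few lines of code. The following is a minimal, purely illustrative Python sketch; every name in it (`SensorFrame`, `plan_action`, the object labels) is hypothetical and stands in for the far richer perception and planning stacks real AVs use:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """One snapshot of perceived surroundings (hypothetical schema)."""
    camera_objects: list      # e.g. ["pedestrian", "red_light"]
    speed_mps: float          # current vehicle speed in m/s

def plan_action(frame: SensorFrame) -> str:
    """Map perceived surroundings to a driving decision,
    mimicking the memory-based judgments a human driver makes."""
    if "pedestrian" in frame.camera_objects:
        return "stop"                      # yield to pedestrians
    if "red_light" in frame.camera_objects:
        return "stop"                      # obey traffic signals
    if frame.speed_mps < 13.9:             # below roughly 50 km/h
        return "accelerate"
    return "cruise"

# One tick of the loop: sense -> decide -> act
frame = SensorFrame(camera_objects=["red_light"], speed_mps=8.0)
print(plan_action(frame))  # -> stop
```

In a real vehicle, the "sense" step fuses cameras, radar, lidar, and V2X messages, and the "decide" step is a learned model rather than hand-written rules; the sketch only shows the shape of the loop.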

10.1.2 Autonomous Vehicles and Embedded Artificial Intelligence

Artificial Driving Intelligence: Context of Autonomous Vehicle Decisions

AVs offer the opportunity to link the benefits of the latest sensory technologies with those of AI to make driving decisions which mitigate many risks associated with human driving decisions. Indeed, the focus on the AI driving of AVs gives rise to two contrasting formulations of machine decisionality and its impacts on society as a whole: risk and risk mitigation. Some argue that AVs eliminate decisionality problems and hence mitigate risk associated with the human frailties of fatigue, misperception, and intoxication, along with the problematic decisions humans often make in the context of driving. This safety argument identifies the welfare benefits of machine decisions and endorses claims that AVs should be supported by policy. Conversely, others highlight the risks of new errors. Overall, there is a need to define and disseminate the benefits of AV decisional intelligence to avoid underutilization of the technology due to misplaced risk perceptions (Floridi et al., 2018).

Autonomous Vehicle Literature Space

AVs offer many significant societal benefits: enhancing the mobility of those who lack it, transforming urban spaces and supporting the environment, and radically improving safety and saving lives. However, since the opportunities of any substantive technology also carry both embedded and new forms of risk, any actualization of potential AV benefits also necessitates mitigation of the risks involved. Moreover, AV risk mitigation cannot be undertaken by governance regimes alone but must be a multi-stakeholder phenomenon. In this instance, traditional government and new governance models are simply outpaced, as is evident throughout the current era of digital innovation (Marchant, 2011) and highlighted by the US National Highway Traffic Safety Administration's (NHTSA) comment (2016) that "the speed with which HAVs are advancing, combined with the complexity and novelty of these innovations, threatens to outpace the Agency's conventional regulatory processes and capabilities." For these reasons, intelligence technologies can only be responded to by a shared risk mitigation process wherein numerous actors cooperate. As such, the conceptualization and framing of technology in terms of meaning, benefits, and risks will ultimately determine how stakeholders engage with the technology (Cunneen, Mullins, & Murphy, 2019).

The key consideration of AV risk mitigation discussed across the literature concerns assessment of an AV’s capacity to make driving decisions. Any research which illuminates the decisionality phenomenon of AVs contributes to the multi-stakeholder risk mitigation process and promotes access to AV societal benefits. Moreover, analysis of the scope of AV decisions in terms of both benefits (risk mitigation) and potential limitations (new forms of risk) supports the dynamics of new governance relations which are both top-down and bottom-up.

Furthermore while AVs arguably afford opportunities to minimize and potentially eliminate the many risks associated with human driving, future benefits cannot be realized unless accurate and effective anticipatory risk governance research is undertaken today. A broad and immensely complex decisional context is inherent to AV technologies, such as how different governance actors and policy writers understand the decisional capacity and societal impact of AV decisions. Research should also consider the diverse ethical interpretations of AV decisions, including the need to control ethical decisions as a predetermined configuration of action or the calculation of metrics such as values and risk. Some research repeats the many abstract questions surrounding machine ethics, while other work considers meaning, conceptual confusions, and limited decisional capacity. Issues of the technical scalability of AV decisional capacity are of significance, as well, as are issues of legality and governance whose interpretation relies on an understanding and anticipation of the impacts of AV, particularly in terms of societal risk (Cunneen, Mullins, & Murphy, 2019).

Trappl (2016) underscores the need to consider the important conceptual differences between human and machine contexts of moral decisionality in AVs, while Bringsjord and Sen (2016) highlight the potential confusion surrounding the differing contexts of AVs' intelligence and ethical capacities. They also point out the need to support actors in reaching more accurate and informed choices in terms of AV policy and regulation. Millar (2016) proposes the need to investigate ethical AV decision-making, while Goodall (2016) shifts the emphasis from ethics to risk. Others, such as Coeckelbergh (2016), elucidate the importance of the changes in relations between agents and actors that socially embedded technologies bring about, which is most evident in the consideration of key legal and ethical concepts by way of humans' changing phenomenological relations with AVs. However distinct these approaches may be, they are united in their attempts to fathom the decisional relations of AI and its applied forms, such as AVs.

Understanding the role of AV decisionality is a complex challenge which requires careful elucidation, as the basic function of AV requires the driving intelligence to make decisions affecting human welfare and life (Lin, 2016). In fact, an AV will typically make thousands of such decisions on every trip, and global deployment will translate into millions of such decisions per day. Accordingly, it is imperative to explore the many facets of the AV decisional spectrum, not merely in terms of awareness of the limitations of AV decisionality but also in terms of the key contexts wherein different actors confuse or misunderstand the meaning of AV decisions (Cunneen, Mullins, & Murphy, 2019).

Framing Artificial Intelligence and Autonomous Decisions

The technological paradigm of AVs has generated some technological disorientation, especially with respect to the decisional capacity of embodied AI products. A progression of conceptual meaning and conceptual framing begins with the research phase and culminates with how the media and society engage with the concepts relating to the technology. However, as development of innovation depends on the key metrics of governance, the media, and public perception, there is a need for closer scrutiny of how initial framing plays out in the public arena. The literature on risk amplification speaks to this issue and points to the need for debates which set a positive and inclusive tone (Pidgeon, Kasperson, & Slovic, 2003). This is true of the more general phenomenon of risk amplification and of more discrete phenomena, such as risk dread.

Risk amplification and the fear of new and emerging technology are well documented in the literature and suggest the care required in the initial conceptual framing (Frewer, Miles, & Marsh, 2002). This aspect is taken up by Johnson and Verdicchio (2017), who maintain the need to "argue for a reframing of AI discourse that avoids the pitfalls of confusion about autonomy and instead frames AI research as what it is: the design of computational artifacts that are able to achieve a goal without having their course of action fully specified by a human programmer." While their critical approach points to the challenges of framing embodied AI products such as AVs, they are among a minority who address the question.

In addition to autonomy, there are further related complex challenges specific to the framing of AI and AI decisionality. Effective ontological domains are required for individual concepts (Franklin & Ferkin, 2006), so there is a need to anticipate conceptual challenges in the initial framing and ontologies of AI products (Cunneen, Mullins, Murphy, & Gaines, 2018). This is essentially a call for temporal considerations to be captured in the concepts employed, as this field is highly dynamic, and the configuration of actors and their anticipated roles are liable to change over time (Cunneen, Mullins, & Murphy, 2019).

In short, there is a need to anticipate the societal, ethical, and legal (SEL) impacts of AVs' decisional capacity. A critical analysis suggests there has been a failure to engage at the necessary meta-level and construct informed, accurate conceptual frameworks of AV decisional capacity, as well as a failure to consider in detail the important differences between how society and users understand human and machine decision-making. In fact, the core question of the SEL impact of AVs is yoked to the meaning framework of machine driving decisions and human driving decisions. This underlines the necessity to interrogate the conceptual framing of AV driving decisions. Without accurate SEL impact analysis, the challenges of uncertainty and risk will hinder informed research, development, and societal perception (Renn, 2008: xv). And without accurate metrics of the SEL impact, systems of governance cannot provide the mechanisms which balance the need to support innovation with the duty to assess potential risks and protect society from harms. All innovation warrants a process of analysis by which to accurately frame the legal and general principles of associated societal rights to safety, freedom, equality, privacy, and welfare.

Broadly stated, some researchers focus on AV safety; others tackle the ethical challenges inherent in using AI. Both types of analysis frame AVs by centering the decisional capacity of vehicular driving intelligence, but they offer very different matrices of the range of decisions AI must carry out to safely traverse the human road network. One claims it is a superior driving decision capacity that will save lives; the other insists it presents a risk of limited decisional capacity which could inadvertently pose significant ethical problems (Lin, 2016). Each interpretation begins with the focus on decisions but frames the decision capacity differently, and each anticipates very different accounts of the potential SEL impacts of AV decisions and governance.

Of course, diverse perspectives and interpretations are an integral aspect of developing research and knowledge contexts, but as multiple agents and actors engage with the different frameworks around AVs, the potential for inaccurate framing feeding into systems of governance is a significant concern. We have two very different accounts of decisional capacity, possible SEL impacts, and proposed governance of AVs, and they frame the decisional capacity in dramatically opposing ways. Proper analysis clarifies the AV decision domain; yet if we are to judge by the two principal framing values, the safety argument and the ethical challenge, the AV decisional framework presents a technological medium that remains conceptually obscure (Cunneen, Mullins, & Murphy, 2019).

10.1.3 Drones and Robots Are Taking over Maintenance Inspections ... and That's Not a Bad Thing

Smart devices are taking over. From drones to robot vacuums to driverless cars, we’re not only seeing them in household settings but also in industrial settings. The recent surge of interest and use is largely due to new advances in AI, which increases their autonomy, capabilities, and usefulness.

As smart devices get better and better, they are increasingly used to automate dangerous and laborious jobs, often completing the work more efficiently and with much higher precision than humans can.

Just one example of this is the use of service robots carrying out asset inspections. Let’s take a closer look at the benefits, what we can expect to see in the near future, and what it means for the technicians who used to do these jobs (Edge 4 Industry, 2018).

Robots and Drones for Asset Inspections: The Benefits

For many organizations, human-mediated asset inspections can be high cost, risky, and time consuming. Though we’re still in the early days of deploying drones and robots to alleviate the challenges of traditional asset inspections, many are looking forward to the day when deploying drones and robots is the norm. Using smart devices to conduct asset inspections has numerous benefits over traditional methods (Edge 4 Industry, 2018):

1. Increase safety and decrease risks to maintenance technicians.

Drones reach places that are dangerous for human workers, such as tall structures and hazardous areas (e.g., areas with radiation or high-voltage lines). As such, they are much safer when it comes to inspecting refineries, mine areas, and pipelines. Likewise, drones can operate under adverse weather and physical conditions, such as wind, waves, and radiation, which are among the most common sources of safety risks for human workers in field service and enterprise maintenance. Of course, there are limitations to what they can withstand, but in many cases, they can surmount challenges that could be perilous or, at the very least, a significant nuisance to human workers. They can capture images and video of assets in difficult or dangerous areas that are hardly accessible to human workers. For example, engineers have successfully guided robots into the destroyed Fukushima nuclear power plant, where radiation could be lethal to human workers. Overall, one of the leading benefits of using AI-powered devices to conduct asset inspections is that by putting the drones and robots at risk, human workers stay safer and healthier.

2. Provide unprecedented richness and accuracy in data collection of an asset’s condition.

Modern drones are able to capture high resolution images and video from the assets they inspect. The rich visualizations they provide can give context and clarity regarding the assets’ conditions. For instance, they can capture images of damage and defects from multiple angles. These images can help maintenance engineers plan and execute optimal service strategies.

3. Versatile and flexible enough for a wide range of maintenance inspection tasks.

Many different types of UAVs are commercially available, including drones of many sizes that can fly in different ranges and can operate autonomously for varying amounts of time. Now more than ever before, plant owners and field service engineers have options to choose from so that they can select and deploy the UAV that is most suitable for the inspection and service tasks at their organization.

Before choosing a drone (and any applicable attachments, add-ons, and tools), enterprise maintenance professionals will need to consider their requirements, including:

  • Flying altitude
  • Quality of images
  • Maximum flight time
  • Data transmission rates.
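Weighing candidate UAVs against requirements like these can be reduced to a simple scoring exercise. The sketch below is hypothetical: the drone names, spec values, and the idea of counting satisfied minimum requirements are invented for illustration, not drawn from any vendor's selection tool:

```python
# Minimum requirements for the inspection task (illustrative values).
requirements = {"altitude_m": 120, "image_mp": 20, "flight_min": 30, "mbps": 50}

# Published specs of two hypothetical candidate drones.
candidates = {
    "drone_a": {"altitude_m": 150, "image_mp": 20, "flight_min": 25, "mbps": 60},
    "drone_b": {"altitude_m": 120, "image_mp": 48, "flight_min": 40, "mbps": 55},
}

def meets(spec: dict, req: dict) -> int:
    """Count how many minimum requirements a drone's spec satisfies."""
    return sum(spec[k] >= v for k, v in req.items())

# Pick the candidate satisfying the most requirements.
best = max(candidates, key=lambda name: meets(candidates[name], requirements))
print(best)  # -> drone_b
```

A real procurement decision would of course weight the criteria (e.g., flight time may matter more than image resolution for long pipelines) rather than count them equally.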

This flexibility of the available drones on the market means that there are now machines capable of performing inspections where only a few short years ago no such technology existed.

4. Make data structuring and information sharing easier.

Simply put, drones, together with their related attachments and software, simplify the collection, organization, accessibility, sorting, sharing, processing, and interpretation of data. UAVs collect and process data in digital formats, making it much easier to store and organize the data and to produce usable reports about the inspections. Furthermore, once those data have been collected, using the right tools to make organizing, manipulating, reviewing, and sharing the data easy can have benefits across maintenance stakeholders, including plant owners, maintenance and automation solution providers, maintenance engineers/technicians, and field workers.

5. Decrease maintenance equipment downtime.

Drone-based inspections can often be performed without a need to shut down systems or equipment. In some cases, machines, tools, buildings, and other systems can continue to operate while being inspected by a drone. This is not always the case with manual inspections, where some systems must be shut down in order to avoid equipment damage and injuries to human workers. Thus, UAVs lead to better OEE (overall equipment effectiveness) and do not disrupt services or production tasks.

6. Decrease enterprise maintenance labor and insurance costs.

There are many different ways in which UAVs contribute to cost reductions in enterprise maintenance environments. First, they reduce insurance costs for inspectors and field service personnel, who no longer engage in many dangerous tasks. Second, the actual reduction of injuries reduces costs associated with absences and healthcare. Third, enterprises save on costs for renting the equipment that supports manual inspections, such as ladders and aerial lifts.

Finally, significant cost savings stem from the fact that inspections are performed faster, i.e., more inspections can be concluded in a given timeframe.

7. Repurpose maintenance inspection UAVs for other uses.

Not only do drones make it easier and more effective to collect and analyze asset data and conduct maintenance inspections; in some circumstances, organizations may also be able to leverage them for supplemental purposes. For instance, a drone might be used to capture breathtaking aerial views of facilities, buildings, plants, and their surrounding property, which can be used in marketing, from social media posts to blog articles and website updates. Finding additional ways to use an inspection drone can increase the return on investment (ROI).

Examples of Robots and Drones in Enterprise Maintenance

The vision of using drones and robots in inspection tasks is already materializing. During recent years, drones have been deployed in many different industries not only for asset inspections but also for security and surveillance. Most of the deployments can be found in industries such as utilities and power generation, insurance, oil and gas, and building construction and facility management.

Take, for example, the Deloitte Maximo Center of Excellence. It has integrated an autonomous drone as part of its Enterprise Asset Management solutions. The drone solution leverages other IBM products as well, such as Watson and Bluemix, which manage UAV data integration as part of the asset management application.

As another example, aviation companies like EasyJet and Thomas Cook Airlines are planning to deploy UAVs to inspect their aircraft and other assets. Their strategy includes the possibility of launching a UAV every time an aircraft approaches a gate, as a means of monitoring potential damage. A more advanced deployment example is Eelume’s swimming robots, deployed for subsea inspection and light intervention.

We’re just now seeing the beginning of what’s to come with drones, robots, and UAVs. Their use will rapidly become more popular and commonplace because of their ability to decrease costs and keep human workers safe. Moreover, drones, robots, and other smart objects are becoming more versatile and useful as their functionalities and intelligence continue to be improved.

For instance, UAV vendors are working towards releasing cognitive drones which will be able to intelligently tune the rate of their data collection depending on the context of the inspection. In particular, cognitive drones will be able to collect more images of damaged parts by adapting their operation whenever they identify one.
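The adaptive data-collection idea can be sketched as a simple policy: raise the image-capture rate whenever an onboard damage detector's confidence crosses a threshold. All names and rate values below are hypothetical, and the damage score stands in for the output of a real defect-detection model:

```python
BASE_RATE_HZ = 1.0     # routine survey: one image per second (assumed)
DETAIL_RATE_HZ = 10.0  # burst rate over a suspected defect (assumed)

def capture_rate(damage_score: float, threshold: float = 0.7) -> float:
    """Return the image-capture rate given a detector
    confidence score in [0, 1] for the part in view."""
    return DETAIL_RATE_HZ if damage_score >= threshold else BASE_RATE_HZ

# A corroded weld scoring 0.9 triggers the detailed burst;
# an intact surface scoring 0.2 keeps the routine rate.
print(capture_rate(0.9))   # -> 10.0
print(capture_rate(0.2))   # -> 1.0
```

Real cognitive drones would likely adapt more than the capture rate, e.g., hovering closer or changing viewing angle, but the thresholded-policy shape is the same.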

In the future, it’s likely that we will see inspections and maintenance tasks carried out by voice-guided robots. We’ll also see a greater number of actuator robots that will complete routine field inspections, while human workers lead the way in safer, supervisory roles (Edge 4 Industry, 2018).
