Semantics-Based Inputs

Making sense of human language has been a goal of AI researchers since the 1950s. This field, natural language processing (NLP), includes applications such as speech recognition, text analysis, translation, and other language-related tasks. There are two basic approaches: statistical and semantic NLP. I’ve already discussed statistical NLP. Semantic NLP is based on the structure and meaning of language. It is often difficult to develop, requiring the creation of “knowledge graphs” that map the relationships between different words and terms. However, it is often employed in language applications that require determining intent and sustaining back-and-forth conversation.
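To make the idea concrete, a knowledge graph can be thought of as a set of typed links between terms that a semantic system traverses to infer meaning. The following is a minimal sketch, not any production system; the terms, relation names, and lookup functions are all illustrative assumptions.

```python
# Hypothetical miniature "knowledge graph": each term maps to its
# typed relationships with other terms (illustrative data only).
knowledge_graph = {
    "ibuprofen": {"is_a": ["NSAID"], "treats": ["pain", "fever"]},
    "NSAID": {"is_a": ["medication"]},
    "pain": {"is_a": ["symptom"]},
}

def related(term, relation):
    """Return the terms linked to `term` by `relation`, if any."""
    return knowledge_graph.get(term, {}).get(relation, [])

def is_a(term, category):
    """Walk 'is_a' edges recursively to test category membership,
    the kind of inference a semantic NLP system uses to type a word."""
    parents = related(term, "is_a")
    return category in parents or any(is_a(p, category) for p in parents)

print(is_a("ibuprofen", "medication"))  # True: ibuprofen -> NSAID -> medication
print(related("ibuprofen", "treats"))   # ['pain', 'fever']
```

Even this toy graph shows why such systems are labor-intensive to build: every edge encodes a judgment that a human had to supply.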

In healthcare, the dominant applications of NLP involve the creation, understanding, and classification of clinical documentation and published research. NLP systems can analyze unstructured clinical notes on patients, prepare reports (e.g., on radiology examinations), transcribe patient interactions, and power conversational AI. For example, IPsoft’s Amelia, an intelligent agent system employing semantic AI, is used by healthcare providers to make appointments, screen patients, and handle other patient communication tasks.

Logic-Based Inputs

Logic-based AI, typically expressed in the form of rules, has a long history. Rules were the basis of the first healthcare AI systems in the 1970s. Expert systems based on collections of rules were the dominant technology for AI in the 1980s, and were widely used commercially in that and later periods. In healthcare, they have been widely employed for “clinical decision support” purposes over the last couple of decades, and are still in broad use today.5 Many electronic health record (EHR) providers furnish a set of rules with their systems today.

Expert systems require human experts and knowledge engineers to construct a series of rules in a particular knowledge domain. They work well up to a certain level of complexity. However, when the number of rules is large (typically several thousand) and the rules begin to conflict with each other, they tend to break down. And if the knowledge domain changes, changing the rules can be difficult and time-consuming. They are slowly being replaced in healthcare by more precise approaches based on data and machine learning algorithms. However, logic has a distinct advantage over the other two types of inputs I have described: it is easily interpretable by humans. Some argue that interpretability is sufficiently important to the development of AI that rule-based systems—perhaps in combination with other types of AI—could make a comeback in the future.8
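The rule-conflict problem described above is easy to see in miniature. The sketch below is a hypothetical forward-chaining matcher with invented rules and facts, not a real clinical system; it shows both how expert knowledge is encoded as if-then rules and how multiple rules can fire at once on the same facts.

```python
# Hypothetical rule base: each rule fires when all of its conditions
# are present in the known facts (illustrative clinical content only).
rules = [
    {"if": {"fever", "cough"}, "then": "suspect respiratory infection"},
    {"if": {"fever", "rash"}, "then": "suspect viral exanthem"},
    {"if": {"fever"}, "then": "recommend antipyretic"},
]

def fire_rules(facts):
    """Return the conclusion of every rule whose conditions all hold.
    With thousands of rules, several can fire simultaneously -- and if
    their conclusions disagree, the system needs a conflict-resolution
    strategy, which is where large rule bases tend to break down."""
    return [r["then"] for r in rules if r["if"] <= facts]

print(fire_rules({"fever", "cough"}))
# ['suspect respiratory infection', 'recommend antipyretic']
```

Note also the maintenance cost the text mentions: changing the domain means auditing every rule whose conditions might now overlap with the new ones.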

Surgical robots, initially approved in the United States in 2000, rely in part on programming logic to operate. They provide “superpowers” to surgeons, improving their ability to see, create precise and minimally invasive incisions, stitch wounds, and so forth.9 However, important decisions, and the initiation of actions, are still made by human surgeons. Common procedures using robotic surgery include gynecologic surgery, prostate surgery, and head and neck surgery.

Robotic process automation (RPA), which uses logical rules to make decisions, performs structured digital tasks for administrative purposes (i.e., those involving information systems) as if it were a human user. Compared to other forms of AI, it is inexpensive, easy to program, and transparent in its actions. RPA doesn’t really involve robots, just computer programs on servers. It relies on a combination of business rules, workflow, and “presentation layer” integration with information systems to act like a semi-intelligent user of those systems. In healthcare, RPA is used for repetitive tasks such as obtaining prior authorization, updating patient records, or billing. Combined with other technologies such as image recognition, it can extract data from, for example, faxed images in order to input it into transactional systems.10
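The “business rules” at the heart of RPA are typically simple, transparent decision logic over structured fields. The sketch below is a hypothetical example of the kind of routing rule an RPA bot might apply to a prior-authorization request; the field names, procedure codes, and cost threshold are all invented for illustration.

```python
# Hypothetical RPA-style business rule for routing a prior-authorization
# request: structured fields in, a transparent routing decision out.
AUTO_APPROVABLE_CODES = {"70450", "72148"}  # hypothetical procedure codes
COST_LIMIT = 500                            # hypothetical threshold

def route_prior_auth(request):
    """Auto-approve low-cost requests for pre-cleared procedures;
    send everything else to a human reviewer."""
    if (request["procedure_code"] in AUTO_APPROVABLE_CODES
            and request["estimated_cost"] <= COST_LIMIT):
        return "auto-approve"
    return "route to reviewer"

print(route_prior_auth({"procedure_code": "70450", "estimated_cost": 300}))
# auto-approve
print(route_prior_auth({"procedure_code": "99999", "estimated_cost": 300}))
# route to reviewer
```

The transparency the text credits RPA with is visible here: every decision traces to a named rule a human can read and audit.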

I’ve described these technologies as based on individual types of inputs, but increasingly they are being combined and integrated. Robots are getting machine learning-based AI “brains,” and deep learning-based image recognition is being integrated with rule-based RPA. Perhaps, in the future, these technologies will be so intermingled that composite solutions will be the norm.

In the next section of the chapter, I’ll describe three major areas of healthcare where AI is being applied, in order of how widely known the applications are. I’ll discuss diagnosis and treatment applications of AI, administrative applications, and applications to improve patient engagement and adherence.
