This book’s scope and results

This book’s central thesis is that AI’s essence, i.e., what distinguishes it from any other technological form, is its unconscious roots. This does not merely mean that AI exerts a particular impact on the psychological dimension of humans. All technologies make an impact because they interact with the human environment and modify it in different ways. Instead, I argue that AI not only exerts a psychological impact but, above all, has psychological roots. As I said above, the central concept of my research is that of projective identification, which will be analyzed from the perspective of different psychoanalysts (Klein, Bion, Ogden, and Winnicott). Based on this concept, I speak of “emotional programming,” which is this book’s original thesis. The expression may shock many people (especially engineers, I imagine), but I believe that it is justified. One of Freud’s fundamental teachings is that our emotions and fantasies are rooted in the past and define the backdrop of our present experiences. In any experience, we project emotional and imaginary content built from our past experiences. This also applies to technology and AI. As Turkle (1984, 34) says, “When you program it, it becomes your creature.” To use Bion’s metaphor, emotions and fantasies comprise the “theater” in which all other aspects—thoughts, desires, will, life and work projects, technologies—are called on “to play a role.” This book aims to apply this metaphor to AI. A special type of unconscious projection takes place in AI that deeply characterizes the digital world’s identity, social and otherwise. AI is not an “out-there” phenomenon. On the contrary, “the digital is both around us and inside us,” and this implies that “robotics and AI become raw materials for the production and performance of the self” (Elliott 2018, 22). The construction of self passes through software and AI, and this process leaves deep traces in these technologies too.

This book is divided into five chapters. In Chapter 1, I define my approach to psychoanalysis and AI. The reader will not find a discussion about machine “consciousness.” My approach is phenomenological and behavioral in nature. I do not presuppose rigid definitions of intelligence or consciousness as being applicable to humans and machines. AI will always remain a simulation of human abilities and attributes. In the literature, this is called the “machine behavior” approach.

In Chapter 2, I pose two questions: Is psychoanalysis of artifacts possible? Does technique play a role in the formation of the unconscious? In doing so, I propose a reinterpretation of some fundamental Lacanian concepts through Latour’s actor-network theory. I argue that this reinterpretation provides solid arguments in favor of applying psychoanalysis to AI.

In Chapter 3, I delve into questions regarding the algorithmic unconscious, i.e., AI’s unconscious roots. My thesis is that the human need for intelligent machines is rooted in the unconscious mechanism of projective identification, i.e., a form of emotional and imaginary exchange. Projective identification is an unconscious process of the imagination in which the ego projects parts of itself (qualities, abilities, or body parts) onto another person. This process is a form of unconscious communication: the projecting ego asks the person who receives the projected content to accept and contain it. I apply this dynamic to the sphere of artifacts. I do not wish to develop a “theory of affects.” Instead, I argue that unconscious projective identification is a useful concept for analyzing and better understanding AI systems’ behavior. Therefore, analyzing the projective identification processes that take place among groups of programmers and designers can be an important tool for understanding why AI systems behave in one way and not another. I assert that projective identification is a form of emotional programming that precedes all other forms of programming in AI. I see this as an original point that opens up new research possibilities in relation to the study of AI’s emotional and affective dimensions: emotional programming should be understood not as an effect of the technical and engineering work, but as its condition. I clarify this point in Section 3.4, in which I explain what this emotional programming is and how it works in AI.

In Chapter 4, I analyze four concrete phenomena (errors, noise information, algorithmic bias, and AI sleeping) that I consider to be some of the most relevant expressions of the algorithmic unconscious, namely, the ways by which the work of projective identification appears, or “comes out,” in AI. This serves the purpose of introducing the main result of the book: the topic of the algorithmic unconscious, which is a theoretical model for studying AI systems’ behavior. In the appendix of Chapter 4, I further expand my investigation by distinguishing another, even deeper, sense of the algorithmic unconscious, that is, the set of large software systems: a collection of billions of lines of code produced by thousands of programmers. These billions of lines of code that manage every activity in our lives exceed individual minds and consciousness.

In Chapter 5, I advance a new line of research. I claim that neuropsychoanalysis and the affective neurosciences can provide a new paradigm for AI research. So far, research in AI has always focused on the activities of the cerebral cortex (language, logic, memory, cognition, etc.). Now, the time has come to conceptualize and develop an AI of the subcortical. My hypothesis is that an artificial general intelligence (AGI) inspired by neuropsychoanalysis and affective neuroscience must be based on the simulation of the basic affective states of the human being. Here, I mainly refer to the work of Solms and Panksepp.

The algorithmic unconscious hypothesis explains the originality and complexity of AI compared to any other technological form. Starting from this hypothesis, I construct a topic that can be a useful theoretical model for analyzing AI systems, their behavior, and especially their biases. Most noteworthy is the fact that this hypothesis can provide us with a new perspective on AI and intelligence. If we desire to create truly intelligent machines, we must create machines that are also “emotional,” i.e., machines with the ability to welcome and deal with our emotions while possessing their own emotions as well. We cannot keep emotions, technology, and logic separate. Finally, it is not my goal to provide definitive answers. My intention in writing this book (for an overview, see Figure 1.2) is to ask questions and open lines of inquiry; I do not claim to have defined a doctrine within these pages.

Figure 1.2 The main topics of the book, and their relations.

 