Lovers, Rebels, and Blade Runners: An ANT Model for the Post-Human World

Mary Reisel

'Am I alive or dead?'

'We don’t have to think like that anymore, we’re together now.'

Solaris [1]

SUMMARY: 1. Freedom Fighters, Criminals and In-between: Human Identity in Flux. - 2. On Real, Artificial, and Biosynthetic Pleasures. - 3. ANT: Can There Be a Real Paradigm Shift? - 4. Babies of Love, Humans in Vitro: The World of Blade Runner. - 5. Love and Terrorism: The Illusion of Romance. - 6. Conclusions: There Cannot Be a Clockwork Orange Solution!

Freedom Fighters, Criminals and In-between: Human Identity in Flux

Was Prometheus a dangerous criminal, a heroic freedom fighter, or a wild rebel for a cause? From the beginning of human history, the progress of civilization has been embedded in a compulsion for knowledge followed by a severe punishment once that knowledge was obtained. Ever since the mythological tale of the theft of fire, one of the few myths that recur in most world mythologies, the evolution of humanity and the advance of civilization have been subjected to judgment and retribution, and knowledge has been bound together with warnings and threats that aimed to stop human progress. From Adam’s seduction to eat the forbidden apple through Milton’s Paradise Lost and up to Goethe’s Faust, legendary heroes seeking knowledge and freedom have met sorrow, suffering, and pain. The passion to learn, Adam’s original sin, is seen as the devil’s work rather than as God’s way of leading to progress and empowerment. Throughout human history, knowledge has been supervised and controlled by hegemonic authorities for purposes of social domination and control, not out of fear of eternal damnation, even though images of the devil were a useful tool for maintaining the status quo through potential superhuman punishments imposed from above.

The current wave of fear over the fast development of super-intelligent robots, and over their threat to put an end to humanity as we know it, is only the most recent manifestation of this perennial theme of knowledge and punishment.[2] This time, new and presumably intelligent artificial machines are not here to control only fire, as the harnessing of electricity did in the nineteenth century, but to take over our souls and minds: the source of knowledge, the root of what makes us human, and the core of our identities. The growing panic in the media contrasts with the evidence that intelligence of this kind, a form of consciousness and self-awareness, is not something that can easily be copied in a machine, even one that is advanced and can make logical decisions independently. Consciousness is bound up with subjective experience, a defined personality, and the changing states of mind of an individual, all shaped by a unique psychology and by unconscious processes, whereas the AC (artificial consciousness) that exists today can fulfill only logical, targeted tasks that do not require consciousness. The potential of AI (artificial intelligence) to develop consciousness and self-awareness is highly debatable, both philosophically and technically, and touches many disciplinary fields and studies, such as the importance of material embodiment, the replication of imagination, and the emotional impact of memory and trauma, all of which remain hypothetical (Padney, 2018). Yet the seemingly intelligent machines have joined the line of fear perpetuated by traditional myths and Frankenstein characters. Thus, the surge of smart and independent AI is regarded not as leading to a bright future of opportunities but as a dead-end road leading to the singularity: the doomsday date when machines will surpass human intelligence. Once again humanity is facing the devil’s seduction, with its dangers and threats of extermination.

The latest wave of AI is radically different from previous ones in the pace of its development and in the levels of disruption it is causing to employment, business, socio-political organization, and private life. This has not happened before, though the current state of affairs has been gestating for a long time. The first wave of AI, in the 1950s, brought us the idea of an artificial brain and the first computers; the machines of the time consisted of both analog and digital circuits but could carry out mainly basic mathematical functions. But what lay along the road was already coming into view: Alan Turing, in his famous 1950 paper ‘Computing Machinery and Intelligence’, proposed the Turing Test as the symbolic line beyond which machine intelligence and human intelligence would become indistinguishable. The Turing Test still remains the critical turning point in the future of human-machine relations, and there is an ongoing debate about when, or whether, humanity will reach this stage. The symbolic implication of crossing the Turing Test lies essentially in the loss of humanity’s ability to distinguish humans from machines in certain contexts, and in what that loss might mean. This will be the turning point of the singularity. Herbert Simon, considered one of the founding fathers of AI, was certain that machines would be able to do all human work and perform complicated tasks that required intellectual decision-making. He anticipated the development of AI with accuracy, laid the foundations of research on decision-making processes, and even went so far as to reverse the process and explain human intuition through machine intelligence and computer programming, showing that intuition is pattern recognition grounded in computation rather than a mystical emotion (Pomerol & Adam, 2006; Franz, 2003). Many of his basic theories and key assumptions from the 1960s have come true in our age and are still used in research and development. However, computers in that period could not yet memorize commands and save them for long periods of time, and running a machine for a month was extremely expensive.

In the mid-sixties, scientists at the Massachusetts Institute of Technology (‘MIT’) developed ELIZA, a natural language processing program able to engage in simple conversations using a basic vocabulary and minimal rules of syntax. The program drew its vocabulary and sentences from the set rules of turn-taking conversation in psychotherapy and seemed able to pose the right questions and answers in its turn. Initially the program appeared to pass the Turing Test, but in fact it was limited to settings that mimicked a psychotherapy session, in which the therapist [ELIZA] mirrored an interlocutor’s utterances in highly standardized ways. That remained basically the situation until the mid-seventies, as computer technology penetrated the domains of work and daily life much more slowly than techno-optimists like Herbert Simon, Irving John Good, and other authors of the first AI wave had predicted.[3] Independent, self-sufficient AI was still just an idea, and robots were industrial machines that mimicked human bodily movements such as lifting and positioning, only with greater accuracy, strength, and repeatability than humans. These devices labored side by side with humans, sometimes in collaboration and sometimes in competition, as could be seen on factory assembly lines. In retrospect, these were the first bodies that would later evolve into the container of the artificial mind. But the fear of the mechanical iron body was already developing, and not only in the economy and workspaces.
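The conversational trick described above, mirroring an interlocutor’s utterances through keyword matching and fixed response templates, can be sketched in a few lines. The rules and pronoun swaps below are illustrative stand-ins for the idea, not Weizenbaum’s actual DOCTOR script:

```python
import re

# Illustrative keyword -> response-template rules in the spirit of ELIZA's
# DOCTOR script; these simplified patterns are examples, not the originals.
RULES = [
    (re.compile(r"\bi need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no keyword matches

# First-person/second-person swaps so the reply mirrors the speaker's words.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap pronouns word by word: 'my job' -> 'your job'."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching template, filled with the mirrored fragment."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return DEFAULT
```

A session then looks like a therapist turning statements back into questions: `respond("I am sad about my job.")` yields ‘How long have you been sad about your job?’, while anything unmatched falls through to a stock prompt, which is why the illusion collapsed outside the narrow psychotherapy framing.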

The second wave of AI progress, in the 1980s, was characterized by several important breakthroughs: the first simple neural networks, new types of memory, and the expansion of computing power into smaller packages. Later, in the 1990s, this stage of development gave rise to new devices that could handle much more complex algorithms. But in the 1980s investments and expectations were very high while the development of AI was too slow and unsatisfactory; consequently, research funding dried up and the second wave of AI closed down before the end of the decade.

The current boom of AI was sparked in the 1990s by a number of innovative technologies, such as advanced neural networks and evolutionary algorithms, that changed the industry very quickly. New tools and new forms of memory opened the door to smaller, cheaper, and more accessible computers that could be put in command of crucial decision-making tasks like administering medical treatments and piloting airliners. Phenomena that only a few decades earlier had been in the realm of science fiction became daily reality, raising moral and ethical issues regarding the future of employment and work in the post-human age, the meaning of embodiment in a virtual existence, and the possibility of creating babies artificially through genetic engineering, which would inevitably lead to the collapse of the nuclear family and to new forms of social organization. This chain reaction of changes will also lead to new formations of political institutions and to legal definitions of ‘human’ and ‘artificial’ that are already in flux.

However, this state of flux provides an opportunity to reexamine basic values that were taken for granted and to envision new socio-political structures that will no doubt be different from the ones we have been used to. As technology advances and AI tools penetrate our private lives, public spaces, and business environments, definitions and values that used to be segregated are becoming connected to each other in complicated threads of cause and effect. Each thread we pull will inevitably lead to others. I propose that the first thread to pull is the key question at the heart of the controversial dilemma facing humanity’s relationship with intelligent AI: the distinction between those who have the right to be defined as ‘humans’ and those who do not. The right to be human grants a multitude of additional benefits, including human rights, citizenship, recourse to a system of justice, religious acceptance, the ability to marry and have a family, and the right to live and die naturally and freely within a social community. However, bearing in mind that we are already producing artificial body parts, brain chips, and genetically designed embryos in vitro, where can we draw the line between the real human and the non-real one? A person with an artificial heart is human, but what will happen once it is possible to have an artificial brain, or to transfer the contents of our brains into an external artificial device? Should we draw lines on the basis of specific prosthetic enhancements, or on patterns of behavior? On decision-making processes, or on memories and expressions of desires and emotions? How will we discriminate right from wrong, good from evil, punishment from reward, or even life from nonlife once the lines between humans and a variety of Others are thoroughly blurred?

The boundaries between right and wrong, between terrorists and freedom fighters, as the Prometheus myth demonstrates, have always been in the eyes of the beholder and are therefore a challenge to determine objectively. It is impossible to reach a unified definition of terrorism, and there are endless debates about the concept itself, since acts of terrorism can result from ideologies that sometimes ‘can be justified’ (Teichman, 1989: 505). It is considered one of the most elusive actions in human society, one that ‘is easy to recognize but difficult to define’ (Prabha, 2008: 125).

Therefore all black-and-white distinctions are established by politico-legal institutions and depend on the authority that defines them for the sake of specific social groups deemed to deserve higher protection than others. And in virtual societies and spaces, on what grounds will the Others, those outside the privileged groups, be excluded or included? Will traditional distinguishing categories like race, gender, and age cease to exist, or will a new category be added in order to define the entity called ‘robot’?

  • [1] Solaris, 2002, final scene. Director: Steven Soderbergh.
  • [2] In another article, I present the sources of the three major monotheistic religions (the Bible, the New Testament, and the Quran) that forbid the creation of images in the shape of God and therefore regard the coming of the robot and of self-conscious AI as a sin against God that will be paid for with severe retributions, not only for the sinner but for the entire community (Reisel, 2019). Elon Musk, Nick Bostrom, James Barrat, and the late Stephen Hawking are only a few among the well-known dark prophets of AI who warn of the potential end of humanity that the singularity could cause. James Barrat even titled his book on the history of AI Our Final Invention.
  • [3] I.J. Good was a mathematician who worked as a cryptologist together with Alan Turing. He became one of the most important advisors on AI and supercomputers and even served as a consultant for Stanley Kubrick’s film 2001: A Space Odyssey.