From Artificial Intelligence to Empowered Human Intelligence

Almost seven decades ago, the computing pioneer Alan Turing suggested that digital computers, programmed with rules and facts about the world, might exhibit intelligent behaviour (Dreyfus, 1992). The field later called ‘artificial intelligence’ was about to be born, and many attempts, promises, failures, and successes followed (Bostrom, 2017/2014). Bostrom (2017/2014), in his influential book on machine intelligence, defines ‘superintelligence’ as ‘any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest’ (p. 26). He clarifies this with the example of the chess program Deep Fritz, which for him is not a superintelligence because it is smart only within the narrow domain of chess. Therefore, for a machine to have general intelligence it needs the capacity to learn, to deal efficiently with uncertainty by calculating probabilities, to extract useful concepts from ‘sensory input’ and from its own internal states, and to leverage ‘acquired concepts into flexible combinatorial representations for use in logical and intuitive reasoning’ (p. 27). Interestingly, he emphasizes that ‘machines are currently far inferior to humans in general intelligence’ (Bostrom, 2017/2014, p. 26).

In fact, a key part of his book speculates about the possible paths by which superintelligence might be achieved in the future.

Conceptual Problems with the Term ‘Artificial Intelligence’

Dreyfus (1992, 2007) is less optimistic, rejecting the assumption that developing artificial intelligence is ‘just a matter of time’. For Dreyfus, the core assumption of the field, namely that human beings produce intelligence using facts and rules, seems to have compromised the entire artificial intelligence (AI) research programme. The main problem is that of representing significance and relevance, rooted in the assumption that the mind assigns value to a world conceived as a set of meaningless facts. The key difficulty is that attributing functions to cold facts cannot capture the meaningful organization of the everyday world. Beyond the difficulty of storing myriads of facts about the world, the main problem is knowing which facts are relevant in a given situation (see Dreyfus, 1992): when something in the world changes, how does the program determine which of its represented facts can be assumed to have stayed the same, and which might have to be updated? One answer offered by AI researchers was to use a frame, a structure of essential features and default assignments. But a system of frames does not belong to the analysed situation, so in order to identify the possibly relevant facts in the current situation one would need a frame for recognizing that situation, and so forth. Therefore, there is an infinite regress of frames for recognizing relevant frames for recognizing relevant facts.
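To make the regress concrete, the following is a minimal sketch in Python. Everything in it is a hypothetical illustration invented for this purpose, not a reconstruction of Dreyfus's argument or of any actual AI system: the `Frame` class, the `select_frame` function, and the kitchen/office examples are all assumptions. The point it illustrates is that choosing which frame fits the current situation is itself a recognition problem, which would seem to call for yet another frame.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A Minsky-style frame: a named structure of essential
    features with default assignments (all names hypothetical)."""
    name: str
    defaults: dict = field(default_factory=dict)

# Illustrative frames for recognizing everyday situations.
kitchen = Frame("kitchen", {"has_stove": True, "has_sink": True})
office = Frame("office", {"has_desk": True, "has_computer": True})

def select_frame(observed: dict, candidates: list) -> Frame:
    """Pick the frame whose defaults best match the observed facts.

    Note the regress: deciding which observed features are relevant
    to the match presupposes a higher-level frame for recognizing
    'frame-selection situations', and so on without end.
    """
    def score(frame: Frame) -> int:
        return sum(observed.get(k) == v for k, v in frame.defaults.items())
    return max(candidates, key=score)

situation = {"has_stove": True, "has_sink": True, "has_computer": False}
print(select_frame(situation, [kitchen, office]).name)  # -> kitchen
```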

The early ‘Good Old-Fashioned Artificial Intelligence’ systems developed by Herbert Simon and colleagues, based on the physical symbol system approach, got trapped in this infinite regress (Bostrom, 2017/2014; Dreyfus, 1992). More recent approaches such as neural network modelling tried to avoid it by supplying sufficient examples of inputs associated with one particular output, so that the network would associate further new inputs with the ‘same’ output. But there remains the problem that what qualifies as the ‘same’ must be defined by the programmer, who has thereby determined, by means of the architecture of the net, that certain possible generalizations will never be found. In daily life, a large part of human intelligence consists in generalizing in ways that are appropriate to a context. If the programmer restricts the net to a predefined type of correct responses, the net will exhibit the intelligence built into it for that context but will lack the intelligence that would enable it to adapt to other contexts.
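Again as a purely hypothetical sketch, the following toy perceptron shows how the programmer's choice of input encoding fixes in advance which generalizations are possible. Because the `encode` function (an invented example) represents items by size alone, the trained net must treat a small red item and a small blue item as the ‘same’; no training within this architecture could recover the colour distinction if a new context demanded it.

```python
def encode(item: dict) -> list:
    # The programmer's design decision: only 'size' is represented.
    # 'colour' is invisible to the net from the outset.
    return [item["size"], 1.0]  # feature plus a constant bias input

def perceptron_train(data, labels, epochs=20):
    """Classic perceptron learning rule on the encoded inputs."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            w = [wi + (y - pred) * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Training set: large items labelled 1, small items labelled 0.
items = [{"size": 5, "colour": "red"}, {"size": 1, "colour": "red"},
         {"size": 6, "colour": "blue"}, {"size": 2, "colour": "blue"}]
labels = [1, 0, 1, 0]
w = perceptron_train([encode(i) for i in items], labels)

# Differently coloured items of the same size get identical outputs:
# the generalization 'respond to colour' was excluded by the encoding.
print(predict(w, encode({"size": 2, "colour": "blue"})))  # -> 0
print(predict(w, encode({"size": 2, "colour": "red"})))   # -> 0
```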

 