Reverse Engineering the Mind
Many organisations are now developing robots and mobile phones that ‘understand’ their owners, and ‘machine consciousness’ programmes which will have commercial applications and medical benefits. Professor Igor Aleksander of Imperial College London has been developing artificial intelligence for 40 years, and here he guides us through the challenges of replicating a conscious mind.
The concept of artificial intelligence (AI) has been around for more than 50 years. In that time there have been many claims for AI, but unfortunately, many were ahead of any actual achievements. Perhaps a part of the problem was the difficulty computer researchers had in understanding just what was meant by ‘intelligence’. Only recently have computer researchers focused on what it is to have a ‘mind’ rather than what it is for a machine to behave intelligently.
Researchers are pursuing various design methods and philosophies in attempts to replicate the relationship of mind to body and to reproduce the mechanisms that make an organism conscious. This form of reverse engineering aims to provide operational models of the mind in medicine as well as provide ‘minds’ for a wide variety of machines – from planetary robots and mobile phones that understand their owners to conscious navigation systems.
Machines with a sense of self and an ability to achieve daily tasks autonomously are largely beyond the reach of the AI programmer, who has to foresee too many contingencies. In this millennium, though, researchers are designing machines that try to get to the heart of what it is to be conscious. My laboratory at Imperial College and others are trying to create machines with a sense of an autonomous self. This might turn out to be a mistake, but this departure from classical AI has already proved to be instructive and carries the promise of overcoming some of the previous limitations of AI.
Defining the mind
Our first challenge is one of language. Matters of mind and brain are notorious for containing words that lack definition. Here is a rather personal interpretation of some of these mind words:
- Being conscious is being in a state that supports thought.
- Thought falls into at least two major categories: perceptual and imaginative.
Firstly, being conscious is a product of the brain. Living brains do not always support thought: sleep and anaesthesia are examples of states in which the brain is active but not thinking. Dreaming, of course, is a strange type of thinking that occurs during sleep. Thought also involves ‘emotion’, which is not a failing but an aid to making choices, and ‘mind’ therefore refers to all that one is capable of thinking.
Secondly, perceptual thought is the immediate sensation of appearing to be in the middle of an out-there world. Imaginative thought can be in the past or the future, remembering who one is and where one has been, then deciding what to do next and the probable consequences of this. Imaginative thought can contain things never experienced, and it can be evoked by language, as in storytelling, or just by arbitrary emergence.
Reverse engineering
Engineers usually use the term ‘reverse engineering’ to describe the process of taking something apart to see how it works and trying to emulate and improve its function. In computation, we use the term reverse engineering in cases where neurological and anatomical data of living mechanisms form the basis for the design of a machine. In reverse engineering the mind, the task assumes a level of apparent surrealism as the initial data is neither neurological nor anatomical, but introspective. I, as the reverse engineer, have nothing but my sensation of being conscious on which to base design decisions.
Designers approach reverse engineering in different ways. What unites them is the desire to clarify what it means for a mechanism to be conscious. What differentiates them is whether they are only concerned with purposeful behaviour that appears to be the result of a mental state (the functional stance) or whether they ask how machines like the brain are capable of generating a detailed mental state that corresponds to the sensations we experience internally (the material stance).
Technologically, the functional work relates more closely to conventional computation and AI styles of creating rules to achieve a desired outward behaviour by whatever computation is necessary, without reference to how the brain does it. The material designs are closer to neural network models of the brain, where dense networks can provide the detailed representational abilities needed. Network dynamics and their emergent properties – stability, reconstruction, sensory knowledge representation – then become important parameters. Some models fall between the two extremes, drawing on useful aspects of each method.
There is considerable shared hope that this effort will yield improved machinery, achieving an as yet unattained performance. For example, it might be possible to design exploratory robots which understand their mission and are aware of their environment and of their own selves within it. Current robots rely heavily either on a programmer who anticipates a large number of contingencies or on human intervention from Earth. Other applications of this approach are systems that go beyond smart behaviour, requiring robots with understanding of, and sensitivity to, the nature of their environment or the needs of their users.
A ‘sensitive’ machine
A good example of a machine that requires understanding and sensitivity is the Intelligent Distribution Agent (IDA) designed by Stan Franklin of the University of Memphis. Dr Franklin’s IDA was designed to replace human operators in the task of billeting seamen. The communication link between a seaman seeking a new billet and IDA is email. The IDA receives information about current postings, and about the seaman’s skills and desires for a new location. It then attempts to match these to the current state of available billets, perhaps through several cycles of interaction to achieve a result. The key feature is that seamen using the system should not feel that there has been a change from the human billeters to a machine, in terms of the sensitivity and concern with which their case is handled.
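The article describes IDA’s matching cycle only in outline. Purely as an illustration, the skill-and-preference matching that drives one cycle of the email negotiation might look something like the following sketch; the Sailor and Billet classes and the scoring rule are invented for this example and are not Franklin’s code.

```python
from dataclasses import dataclass

@dataclass
class Billet:
    location: str
    required_skills: set

@dataclass
class Sailor:
    skills: set
    preferred_locations: list

def score_match(sailor: Sailor, billet: Billet) -> float:
    """Crude illustrative score: fraction of required skills covered, plus a location bonus."""
    skill_fit = len(sailor.skills & billet.required_skills) / max(len(billet.required_skills), 1)
    location_bonus = 0.5 if billet.location in sailor.preferred_locations else 0.0
    return skill_fit + location_bonus

def suggest_billets(sailor: Sailor, available: list, top_n: int = 3) -> list:
    """Rank available billets; in a system like IDA this would feed one cycle of email negotiation."""
    return sorted(available, key=lambda b: score_match(sailor, b), reverse=True)[:top_n]

# Toy data: one sailor, two open billets.
sailor = Sailor(skills={"sonar", "navigation"}, preferred_locations=["Florida"])
billets = [Billet("Florida", {"sonar"}), Billet("Norfolk", {"sonar", "navigation", "diving"})]
print([b.location for b in suggest_billets(sailor, billets)])   # ['Florida', 'Norfolk']
```

In the real system the interesting part is not the scoring but the negotiation around it, which is where the model of consciousness described next comes in.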
But why should such a system be said to have a mind rather than being the product of crafty programming? There are two reasons. First, it replaces a ‘caring’ human being, and ‘caring’ requires the conscious understanding of one’s needs. The second reason has to do with the design process. Franklin, in wondering how to implement the conscious caring organism, used a well-established psychological model of consciousness developed by Dr Bernard Baars of the Neurosciences Institute in San Diego. The focus of this model is a competitive arrangement in which many partial thoughts that come from a variety of memory mechanisms compete (see Figure 1). The winner enters a consciousness area, and its content is broadcast to address the memories afresh, generating a new set of ‘thoughtlets’ for competition. The sequence of the states of the consciousness area represents a developing thought.
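As a rough illustration of the competition-and-broadcast cycle just described, the sketch below runs a toy Global Workspace loop: invented ‘memory’ functions propose thoughtlets, the emotionally strongest one enters the consciousness area, and its content is broadcast back until a strong, stable concept emerges. It is a caricature of the Baars/Franklin model, not their implementation.

```python
import random

def global_workspace_cycle(memories, threshold=0.9, max_cycles=20):
    """Toy competition-and-broadcast loop: each memory proposes a 'thoughtlet'
    (content, emotional strength); the strongest enters the consciousness area
    and is broadcast back, until a concept is strong and stable enough to act on."""
    broadcast = ""
    for _ in range(max_cycles):
        candidates = [memory(broadcast) for memory in memories]   # thoughtlets compete
        content, strength = max(candidates, key=lambda c: c[1])   # strongest wins
        broadcast = content                                       # winner is broadcast back
        if strength >= threshold:
            return broadcast                                      # strong enough to 'utter'
    return broadcast

# Invented 'memories' reacting to the current broadcast (purely illustrative).
memories = [
    lambda b: ("suggest a Florida posting", 0.95 if "skills match" in b else 0.4),
    lambda b: ("skills match an engineering billet", random.uniform(0.5, 0.8)),
]
print(global_workspace_cycle(memories))
```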
The problem of phenomenology
The IDA is a clear example of a functional model for the simple reason that the conscious area of the model contains encoded forms that ‘represent’ perceptions and memories of the world. For example, ‘Florida’ might be represented just by the symbol ‘FL’. Dr Franklin describes this as a system with no ‘phenomenology’, meaning that if the machine suggests Florida as a posting it cannot at the same time develop a description of what it is like to be in Florida as a conscious state, unless this information is specially programmed in.

Meanwhile, at the Nokia Research Laboratories, Dr Pentti Haikonen has developed a detailed architecture that relies heavily on the ability of recursive neural networks (networks with feedback) to store and retrieve states. Based very roughly on the operation of a brain cell, an artificial neuron is a device which receives input signals and ‘learns’ to output an appropriate response.
Recursive networks have stable states by virtue of the fact that neurons receive signals not only from external sources such as vision or audition, but also from other neurons in the same network. So, say that a network has learned to represent the image of a cat: this image can be sustained, as each neuron will output its feature of the cat image (ear, tail, whiskers) in response to other neurons outputting cat features. Together, these features represent a cat, but not quite in a phenomenological way – more as a coded collection of appropriate features. This means that such a network can store knowledge states, and if the net is given only some of the features of an object ‘it knows’, it will recall the complete set.
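This pattern-completion behaviour is characteristic of Hopfield-style recurrent networks. The sketch below illustrates the principle in a few lines of NumPy: a net trained on two feature patterns recalls the whole ‘cat’ pattern when cued with a single feature. The feature vectors are invented and the sketch is an illustration of the general technique, not Haikonen’s architecture.

```python
import numpy as np

def train_hopfield(patterns: np.ndarray) -> np.ndarray:
    """Hebbian weights for a recurrent net: each stored pattern reinforces co-active features."""
    n = patterns.shape[1]
    weights = np.zeros((n, n))
    for p in patterns:
        weights += np.outer(p, p)
    np.fill_diagonal(weights, 0)          # no self-feedback
    return weights / len(patterns)

def recall(weights: np.ndarray, cue: np.ndarray, steps: int = 10) -> np.ndarray:
    """Iterate the feedback dynamics until the state settles on a stored pattern."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(weights @ state)  # each neuron responds to the others' outputs
        state[state == 0] = 1
    return state

# Features: [ears, tail, whiskers, wings, beak]; +1 = present, -1 = absent.
cat  = np.array([ 1,  1,  1, -1, -1])
bird = np.array([-1,  1, -1,  1,  1])
W = train_hopfield(np.array([cat, bird]))

partial_cat = np.array([1, 0, 0, 0, 0])   # only 'ears' is seen
print(recall(W, partial_cat))             # settles back to the full cat pattern
```

The same feedback dynamics also show why depictive accuracy degrades as more states are stored: as patterns are added, their contributions to the weights interfere with one another.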
My own approach has sought to address the phenomenology problem head-on and to identify mechanisms which, through the action of neurons, real or simulated, can represent the world with ‘depictive’ accuracy. This is intended to concord with the sensations that are felt introspectively when we report a sensation. The model of being conscious stems from five features of consciousness, which appear important through introspection. Dubbed ‘axioms’ – because they are intuited but not proven – these are:
- Perception of oneself in an ‘out-there’ world
- Imagination of past events and fiction
- Inner and outer attention
- Volition and planning
- Emotion
This is not an exhaustive list, but is felt to be necessary for a modelling study. In the belief that consciousness is the name given to a composition of the above sensations, the methodology seeks a variety of mechanistic models, each of which can support a depiction of at least one of these basic sensations.
Perception necessitates a neural network that can accurately register – that is, depict – the content of a current perceptual sensation. ‘Out-there-ness’, particularly in vision, is ensured through the mediation of the muscles – eye movement, convergence, head movement and body movement – which all create signals that integrate with sensory signals to produce depictions of being an entity in an out-there world. Indexing cells in this way is called ‘locking’.
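A toy, one-dimensional illustration of locking, under the assumption that the gaze signal can simply be added to the retinal position: the same out-there location then indexes the same depictive cell whatever the eye is doing. The coordinate convention is an assumption made for this sketch, not Aleksander’s implementation.

```python
import numpy as np

WORLD_CELLS = 21                            # depictive cells covering a 1-D 'out-there' field

def lock(retinal_position: int, gaze_direction: int) -> int:
    """Combine where a feature falls on the retina with where the eye points,
    so the indexed cell stays fixed to the world as the eye moves."""
    return retinal_position + gaze_direction

def depict(features: list, gaze_direction: int) -> np.ndarray:
    """Fire the world-indexed cells corresponding to the retinal features."""
    field = np.zeros(WORLD_CELLS)
    for r in features:
        field[lock(r, gaze_direction) % WORLD_CELLS] = 1.0
    return field

# The same object seen with two different eye positions lands on the same cells.
print(np.array_equal(depict([2, 3], gaze_direction=5),
                     depict([-1, 0], gaze_direction=8)))   # True
```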
Imagination requires classical mechanisms of recursion in neural networks. That is, memory of an experienced state creates a stored set of states in a neural net, or set of neural modules, with feedback. This is experienced as a less accurate version of the original because the depictive power of recursive networks weakens as the network learns a significant number of states.
Mechanisms that lead to sensations of volition, planning and emotions have been shown to emerge from the interaction of neural modules that are involved in imagination, in which state sequences constitute ‘what if’ plans, and particular modules that non-depictively (i.e. unconsciously) evaluate emotions associated with predicted outcomes of planned events. The engineering upshot of this approach is that it is possible to envisage a ‘kernel’ architecture that illustrates the meshing together of the mechanistic support of the five axioms (see Figure 2).
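Very roughly, this meshing of imagination, emotion and action can be sketched as follows: imagined ‘what if’ plans are scored by a non-depictive emotional evaluation, and the highest-valued plan drives action. The plans and emotional values below are invented placeholders; the sketch stands in for, rather than reproduces, the neural mechanisms of Figure 2.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    steps: list               # a 'what if' sequence of imagined states
    predicted_outcome: str

def emotional_value(outcome: str) -> float:
    """Non-depictive (unconscious) evaluation of a predicted outcome.
    The values here are invented placeholders."""
    valence = {"reach charger": 1.0, "stay idle": 0.1, "fall down stairs": -1.0}
    return valence.get(outcome, 0.0)

def volition(plans: list) -> Plan:
    """'Volition' in this sketch: select the plan whose imagined outcome feels best."""
    return max(plans, key=lambda p: emotional_value(p.predicted_outcome))

plans = [
    Plan(["turn left", "cross room"], "reach charger"),
    Plan(["do nothing"], "stay idle"),
    Plan(["go forward"], "fall down stairs"),
]
print(volition(plans).steps)   # ['turn left', 'cross room']
```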
This kernel architecture has been used in a variety of applications, ranging from the assessment of distortions of visual consciousness in sufferers of Parkinson’s Disease (to identify the possibility of a brain-wide spread of the neural correlates of ‘self’), to models of visual awareness that explain inattention and change-blindness, and to a possible mechanism for volition as mediated by emotion.
Figure 3 (a) With eyes closed, the imagination module is dominant, imagining an arbitrary image. (b) With eyes open, perception dominates and what is seen is reconstructed in the awareness area.
Figure 3 is a screen shot of a simulation of a system that first imagines a face chosen at random and then switches to looking at and recognising another, showing awareness of what the entire model is doing (where each point is the firing of a neuron).
Investment outcomes
The US Navy and FedEx, the shipping and supply chain management company, currently fund the IDA programme, and Nokia Research Laboratories is investing in a machine consciousness programme. In the UK, Professor Owen Holland of Essex University and Professor Tomasz Troscianko of Bristol University have an EPSRC grant of £500,000 to investigate the design of conscious robots. Many telecommunication and microelectronics companies also regularly attend and contribute to the popular workshops on machine consciousness.
So what are the likely outcomes of this investment? Clearly much of the research is fundamental. We desperately need models of human minds in the treatment of mental deficits. Medicine can call on excellent engineering models of the body side of the mind-body divide, but extending this to the mind is a much-desired outcome. The interest in telecommunications and robotics comes from the need to create entities that understand and collaborate with human users in new ways. Robots that replace astronauts on dangerous planetary missions, autopilots that support the consciousness of the pilot, and other systems with minds may become distinct possibilities. If a robot should be your chauffeur in the future, would you rather it were conscious or not?
Biography – Professor Igor Aleksander
Igor Aleksander FREng is Emeritus Professor and Leverhulme Fellow in Neural Systems Engineering in the Department of Electrical and Electronic Engineering at Imperial College, London. He has researched artificial intelligence and neural networks for 40 years and since 1990 has concentrated on modelling the brain to reveal the mechanisms of inner sensations. In 2000, he was awarded the Outstanding Achievement Medal for Informatics by the Institution of Electrical Engineers (IEE).
Figure 1 The Baars Global Workspace Model as used in an Intelligent Distribution Agent: The external input – as decoded into concepts – addresses the various memories that compete for attention. The winning notion (emotionally strongest) enters tentatively into consciousness and is broadcast further to address the memories. This iterative process continues until an emotionally strong and stable concept enters consciousness, at which point its emotional value exceeds a threshold and the system utters a suggestion. The sequence of concepts that occurs in the conscious area is the process of thinking.
Diagram by Simon Roulestone
Figure 2 A minimal architecture with axiomatic/depictive properties. The perceptual module directly depicts sensory input and can be influenced by bodily input such as pain and hunger. The memory module implements nonperceptual thought for planning and recall of experience. The memory and perceptual modules overlap in awareness as they are both locked to either current or remembered world events. The emotion module evaluates the ‘thoughts’ in the memory module and the action module causes the best plan to reach the actions of the organism.
Diagram by Simon Roulestone
Figure 3 Depictive model of the visual system: note that the system monitors where the major activity is taking place and comments on whether it is aware of seeing or imagining (where each point is the firing of a neuron)
Further reference
The World in My Mind, My Mind in the World: Key Mechanisms of Consciousness in People, Animals and Machines, Igor Aleksander (Academic Press, 2005)