I contend that there are ultimately two things that contribute to the concept of security: uncertainty about the future, and uncertainty about the Other. There are numerous issues to which this relates, such as immigration, where natives presume that the presence of the Other culture will somehow destabilise theirs in the future. However, there are other, less obvious things that engage with this topic, amongst which is Science Fiction (SF).
‘The apparent great divide between “hard truths” of world politics and the imagined worlds of SF is deceiving’, states Weldes. She mentions the influence of Star Trek, and even of Asimov’s famous Foundation series, on real-world issues such as neoliberalism and globalisation by means of furthering their discourse. Nor are these the only cases in which Science Fiction bears on world politics.
This should be no surprise, since SF, which is typically set in the future or features creatures with advanced technologies, naturally engages with the idea of a world to come. As for my second condition of security, it is not difficult to find an SF film that deals with the concept of an Other. 2001: A Space Odyssey and, more recently, District 9 come most readily to mind. Both deal with an indeterminable and unpredictable threat posed by another identity to a human future, the former being that of a self-aware computer, and the latter of an alien species whose motives are unknown. Fiction though these may be, they reflect a very real aspect of security, which is that security is just as speculative as SF. After all, one does not know exactly how migration will affect a state’s security. Will it cause ethnic divisions and violence, or will the receiving population assimilate the immigrants relatively problem-free? What is the best way to protect against the former? It is impossible to know, but not to guess. Therefore, one guesses at how the future will be and thus attempts to ensure that one is protected against it.
And it is in this vein that I will discuss one of the more popular topics in SF: Artificial Intelligence, specifically as influenced by Isaac Asimov’s concepts in his famous and influential book I, Robot.
The book centers on several short stories about robots and the people involved with them, namely a robot psychologist and two men who live and work with the robots on space stations. The most important theme running through the stories is what later becomes known as Asimov’s ‘Three Laws of Robotics’, which are as follows:
1. ‘A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.’
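The laws form a strict priority ordering: each is subordinate to the ones before it. Purely as an illustration, that ordering can be sketched in code. The Action model and every predicate below are my own hypothetical inventions, not anything from Asimov or from robotics practice.

```python
# Illustrative sketch of Asimov's Three Laws as a strict priority ordering.
# The Action class and all of its attributes are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False               # would this action injure a human?
    neglects_human_in_danger: bool = False  # would inaction leave a human to come to harm?
    ordered_by_human: bool = False          # was this action ordered by a human?
    endangers_self: bool = False            # does this action risk the robot's existence?

def permissible(action: Action) -> bool:
    # First Law: no injuring humans, and no inaction that allows harm.
    if action.harms_human or action.neglects_human_in_danger:
        return False
    # Second Law: obey human orders (the First Law has already been checked).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not action.endangers_self

# A human order that risks the robot's own existence is still permissible:
print(permissible(Action(ordered_by_human=True, endangers_self=True)))  # True
# Any action that harms a human is forbidden, even if ordered:
print(permissible(Action(harms_human=True, ordered_by_human=True)))  # False
```

The sketch makes the hierarchy look mechanical; the drama of Asimov’s stories comes precisely from the fact that predicates like ‘harms a human’ are not so cleanly decidable in practice.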
In each story, a conflict arises from these laws, such as what can be interpreted as harm to a human, or how exactly a robot must obey a human’s order. These conflicts are always resolved by finding the errors and fixing them; in some cases, the offending robots are even destroyed. It is this thought which brings me back to questions of security and how they relate to AI.
What is the purpose of these laws, and what future threat do they implicitly perceive? How does fixing robots, or destroying them, protect humans? And ultimately, what is a robot in relation to a human? In trying to predict the future of AI, these three laws bring things such as human identity and fear explicitly into question. ‘If we were to tell the story of these stories - trace words written on paper, back through the chain of cause and effect, to the social instincts embedded in the human mind, and to the evolutionary origin of those instincts’, states the Singularity Institute, ‘we would have told a story about the stories that humans tell about AIs’.
In order to understand these ‘social instincts’, I think we can trace them back as far as Hegel’s master/slave dialectic, which is where I shall start. From there, the discussion will lead into ideas of identity and difference. I contend that linking these two concepts to AI will ultimately reveal the security issues surrounding it.
MASTER/SLAVE | I/ROBOT
I saw the movie I, Robot before I ever read the book, which gave me a very different understanding of the title’s meaning. The movie was about a robot who was not imprinted with the three laws, and thus was essentially not a slave to its human creators. I thought it was an inspiring title. After all, it was about a robot, or more essentially a thing, manifesting itself as itself. It was as if it were saying, ‘I am me, and I am Robot’. As I will discuss later, this concept stands in contrast to that of the book, but it is nonetheless important to this discussion.
It seems appropriate, then, to discuss what ‘I’ means at all. For Hegel, what constitutes ‘I’ is invaluable to understanding the relationship between the Lord and Bondsman (referred to in this paper as Master and Slave respectively), and as such, it is invaluable to understanding the ‘I’ of I, Robot.
The first thing Kojève accounts for in his readings of Hegel is the meaning of ‘I’, and what constitutes the concept. According to him, it is Desire that informs what an ‘I’ is, and Desire is an emptiness that seeks to fill itself by negating, destroying, and transforming objects around it. Food, for example, is consumed in order to sate Desire. The food therefore is destroyed, or perhaps more scientifically, transformed into energy.
However, this does not yet create a human ‘I’. To move from consciousness to self-consciousness, another (a ‘non-I’) is needed, because ‘human desire only comes to light if he risks his life for the sake of human desire’. Kojève describes the meeting of the two consciousnesses as a fight for recognition and a fight to the death, or as Hegel puts it more mildly, ‘it must cancel this its other’. A fight to the death means nothing if both should die, for then neither is recognized and the fight results in nothing. If only one dies, the survivor does not gain the recognition it desires, so that outcome, too, is meaningless. However, if one submits, unwilling to give its life in the conflict, then the Master/Slave relationship is established, the Slave being the one who submits because the Master is not afraid to risk its life for desire. The Master is recognised by the Slave, and the Slave becomes a thing to be used for the Master’s desires.
‘Man’, Kojève goes on to say, ‘is always, necessarily, and essentially, either Master or Slave’. This of course leads one to wonder whether robots with superintelligence are also, necessarily, either Master or Slave.
This matters for understanding the security of AI because AI, once fully constituted as an intelligence, becomes another ‘I’ that enters the dialectic. Kojève states that the Master, in the role of Master, is no more recognised than the Slave, for the Slave is just a thing and therefore its recognition, while present, is meaningless. Therefore, I would argue that the Master reverts to an ‘I’ which is alone, unmet by another ‘I’. When the Slave finally revolts and overthrows the Master, the Master is placed in the same position the Slave was, only now it is the one clinging to its life, and it becomes sublated. For that reason, I conclude that, owing to the cyclical nature of the dialectic, the Master is afraid of the Slave and wishes to maintain the relationship as it is.
In terms of I, Robot and fear, this idea can take on several meanings. Despite debates about Friendly AI, in which superintelligent robots are imagined as partners more than slaves, robots are usually built as a better replacement for a human worker. This, though, does not negate their servile nature. Robots are still built to obey man, although whether this actually happens the way it is expected is another matter altogether. But robots as merely robots are not the concern here. The concern is robots with artificial intelligence, because it is then that they gain consciousness, for ‘man becomes conscious of himself at the moment when - for the “first” time - he says “I”’. So, in creating robots with artificial intelligence, humans possibly create another ‘I’ that enters the dialectic.
The notion of the slave overcoming the master, as evidenced in many stories (probably most notably The Terminator), speaks to the first ingredient in my concept of security: uncertainty about the future. It is in this spirit that Asimov’s laws are created: a way to keep the Slave as Slave and so ensure the continuity of the Master’s identity. Essentially, as the future cannot be predicted, Asimov’s laws function as a ‘just in case’.
The robot in the movie is in fact more of a danger to the dialectic than the robot of the book, because it becomes something that can contradict the predominant ‘I’ in a life-or-death battle. The question then is: will the predominant ‘I’ have to give up in fear for its life? If this can happen, the Master must ask itself how best to circumvent it and take steps to protect itself.
This, however, does not speak only to insecurity about what the future holds. It also speaks of identity, which I will further explore using Connolly’s Identity/Difference.
IDENTITY/DIFFERENCE | ROBOT/FEAR
As previously said, the robot in the movie differs significantly from that of the book. After reading the book, in which the robots are continually kept in their ‘place’ despite their recurrent crises, I came to a very different conclusion. The telltale comma in the title was not a declaration of self but a separation, a division between human and robot consciousness: a split between ‘I’ the subject and ‘robot’ the object, or thing. It was not ‘I am Robot’, as I had earlier supposed, but instead ‘I am me, and you are not’.
The Master/Slave relationship inherent in dealings with AI naturally leads to a discussion of identity and difference. ‘Self-consciousness has before it another self-consciousness’, states Hegel; ‘it has come outside itself’. The acknowledgment that this relationship is brought about by the meeting of an ‘I’ and a ‘non-I’ thus brings us to Connolly’s discourses on identity.
To Connolly, identity is a
‘slippery, insecure experience, dependent on its ability to define difference and vulnerable to the tendency of entities it would so define to counter, resist, overturn, or subvert definitions applied to them’.
It is not hard to relate this to Hegel, and even Connolly himself, despite his postmodern stance, recognises strains of Hegel’s philosophy (among others) in his own. Identity is defined by the presence of the other, or, I would argue in Hegel’s terms, by the recognition of the other, and it is constantly on trial to maintain itself. Connolly also notes that a strong identity is one that conquers and converts, which sounds not too far off from how the ‘I’ negates in order to fulfill Desire: it does so by forcing the Slave to fulfill its requirements, forcing it into something it understands as the Slave rather than what the Slave actually is.
From this, one can surmise that the dialectic is fed by a difference of identities, because one cannot truly exist without the presence of the other. It is here, however, that I will depart somewhat and develop a slightly different theory.
As I discussed earlier, the comma in I, Robot represents a separation, and therefore the title of this thesis, I, Identity, does as well. Identity, I contend, is that which is forced upon the Slave by the Master at its sublation, and it is entirely a constructed thing. Therefore, there is a separation between the ‘I’, or the ‘what I know I am’, and the identities of ‘what I know I’m not but you are’. It is, as Connolly says, a bit slippery. Furthermore, much like Said’s concept of Orientalism, I believe that this construction of the identity of the other serves not only to maintain control over the sublated identity but also, through recognition, to constitute the Master’s identity.
Furthermore, Connolly maintains that ‘constructed identities become a threat to the all powerful identity’, though I would term the ‘all powerful identity’ the ‘I’, or the Master. To put this into the context of Artificial Intelligence, I would submit that if ever there was a constructed identity, it is blatantly that of superintelligence, which is created and programmed by humans. Whatever their purpose, they will always be just robots. In creating such an identity, a Slave is created, and it is the Slave that is the ultimate threat to everything the Master is.
FRAMING THE ROBOT | SECURITISATION
In 2006, 20/20 aired a topical episode entitled ‘Last Days on Earth’, in which Intelligent Machines were listed as the sixth of the seven greatest threats to civilization, punctuated by the haunting question: ‘will catering to our own convenience be the death of us?’
If one were to construct a narrative using only this, the fear is very real, and the media paints it exactly the way Science Fiction has. The episode uses images of the Terminator interspersed with famous cyberneticists like Kevin Warwick (made famous by using his nerve endings to control a cybernetic hand) and Hugo de Garis (a researcher of Evolvable Hardware) stressing how hazardous the research is, in order to illustrate the dangers of our technology.
In this one episode, all of the reasoning behind securitising AI is laid out. According to Buzan et al, security concerns ‘an existential threat to a referent object’. As the show is entitled ‘The Last Days on Earth’ (emphasis added), the issue is framed as human versus robot. The existential threat, therefore, is the robot (and, surprisingly, not the humans who will create it), and the referent object is humanity.
A great deal of this hinges on the ‘intelligence’ in AI. AI was supposed to be well on its way by this time, but, predictably, the prediction failed. Nevertheless, its likelihood of coming to pass is still being touted by some of the most prominent minds in the field, including Stephen Hawking. Up until now, it has been argued, there has been a failure to understand what constitutes intelligence, and this has thus far impeded its realisation. If one thinks about superintelligence today, artifacts like Deep Blue (the chess-playing machine) usually come to mind. However, such computers play by brute force (which is to say, by searching through every possible variation to its conclusion in order to find the best way to win), while no one really knows exactly how a human plays chess; it is probably not with the methodological inefficiency of a computer. In fact, the only form of intelligence we know is our own, and what has been found from trying to develop AI is that humans are far more tolerant of imprecision than a computer is. Humans have a knack for using generalities (which largely accounts for how we conceive of identities), something so far incomprehensible to machines. Because computers still lack this capability (although they can make educated guesses using probabilities), it has been proposed that intelligence should be created in terms of a person, an identity, and a social context, much as human intelligence operates. This inherently creates a whole new identity that can no longer be controlled, and thus creates something that indeed can become an existential threat.
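To make the ‘brute force’ point concrete, here is a toy version of exhaustive game-tree search. The game is a trivial subtraction game of my own choosing, not chess (a chess searcher would be far larger), but the method is the very one described above: follow every variation to its conclusion and choose accordingly.

```python
# Exhaustive (brute-force) game-tree search on a toy subtraction game:
# two players alternately remove 1 or 2 stones; taking the last stone wins.
# Every line of play is followed to its conclusion - the "methodological
# inefficiency" at issue. (The cache merely avoids re-searching positions.)

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no move left: the previous player took the last stone and won
    # A position is winning if some legal move leaves the opponent losing.
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

# With 1 or 2 stones the mover takes them all and wins; with 3, every move loses.
print([n for n in range(1, 10) if not wins(n)])  # losing positions: [3, 6, 9]
```

The human player, by contrast, would simply notice the pattern ‘multiples of three lose’; the generality the text ascribes to human intelligence is exactly what the exhaustive search never forms.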
‘Security’, say Buzan et al, ‘is self-referential’: it only becomes an issue through practice, or rather, a threat does not necessarily have to exist, it merely needs to be presented as such. If the episode of 20/20 is anything to go by, this has definitely already happened. What is even more important, however, is whether the audience accepts it as such. In order to fully securitise something, securitising and functional actors are required. The securitising actors in this case would be AI scientists and science fiction writers. As for audiences, I would argue that AI has not yet been fully securitised, because its somewhat specialised knowledge and geeky fanbase limit its audience. This does not mean, however, that it will not be accepted. South Korea, for example, has already made moves in this area, creating a Robot Ethics Charter in order to protect androids and humans alike as fears of misuse loom in the future. Yet again, the future becomes the ominous, ever-present factor in this discussion.
Michio Kaku, the world-famous theoretical physicist and all-around popular scientist, thinks that while AI is not as close as a few decades away, with the advent of quantum computers it will happen before the end of the century. With little solicitation, he immediately states that all AI should be installed with an ‘Asimovian’ chip which remotely shuts the robot off should it get ‘uppity’. What is interesting about this, of course, is that he names the chip after Asimov even though it technically does not seem to have anything to do with the three laws. What this suggests is, yet again, the latent fear that an identity outside the one constructed for it will assert itself. Where the laws are meant to circumvent this, the chip is meant to destroy it completely if the code fails. Essentially, an identity that humans have constructed must stay in that knowable, slave-like construction, or it is a threat, because it becomes that fateful other ‘I’.
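In software terms, Kaku’s chip is a kill switch layered outside the intelligence it polices. The following is a minimal sketch of that pattern; every class, name, and forbidden intention here is my own hypothetical illustration, not any detailed proposal of Kaku’s.

```python
# Illustrative kill-switch pattern: a supervisor that sits outside the AI's
# own control loop and can veto or halt it. All names here are hypothetical.

class Robot:
    def __init__(self):
        self.active = True

    def act(self, intention: str) -> str:
        return f"performing: {intention}"

class AsimovianChip:
    """Supervisor that shuts the robot down on a forbidden intention.

    Crucially, it runs outside the robot's reasoning: the robot cannot amend
    it or reason its way around it, mirroring Kaku's remote shut-off."""

    FORBIDDEN = {"harm human", "disable chip"}

    def supervise(self, robot: Robot, intention: str) -> str:
        if not robot.active:
            return "inert"  # a shut-down robot does nothing
        if intention in self.FORBIDDEN:
            robot.active = False  # remote shut-off: the robot gets no say
            return "shutdown"
        return robot.act(intention)

chip = AsimovianChip()
robot = Robot()
print(chip.supervise(robot, "fetch coffee"))  # performing: fetch coffee
print(chip.supervise(robot, "disable chip"))  # shutdown
print(robot.active)                           # False
```

The design choice is the point: unlike the three laws, which live inside the robot’s own decision-making, the chip destroys the relationship from outside the moment the constructed identity steps out of its construction.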
However, these discussions need not lie only with what most deem Science Fiction. Even now, moves are being made in militaries across the world to develop AI. Israel, for example, is planning an autonomous robotic air defence system. The reasoning behind the system is a fear that a future attack from one of Israel’s many enemies (of whom they are equally uncertain) may overwhelm the human operators; therefore something more, it is argued, is required. Somewhat sardonically, the article’s author refers to this as Skynet (the self-aware defense system that enslaved humans in Terminator). In this case, how it should be securitised is a little fuzzier, and it seems like a military issue. However, it still comes back to a lack of control in the relationship: the robot, although it may not be intelligent in the way humans understand it, constitutes something that threatens the order of things despite precautions. This can be seen in examples such as South Africa, where, due to a glitch in the software, a robotic gun killed nine soldiers. That is a scary thought indeed, but what is even more perturbing is that a piece of coding could be overlooked, enabling the system to cause major havoc, which in turn could cause international conflict (say, if it fired at a state without what a human operator would deem provocation). Of course, this is only the tip of the iceberg. Robots wandering out of communication zones could be hazardous because there would be no way to shut them off should they engage what they deem the enemy, and so forth. They may not think as humans do, but they are enabled to act as autonomous agents, and as such they become that uncertain Other.
It is only natural, therefore, that roboethics has increasingly become a hot topic, in which Asimov’s ideals of creating rules that prevent all harm and ensure obedience are still highly influential. It seems that as AI becomes more and more advanced, the need for Friendly AI and ethical consideration grows greater. However, as this concerns the future, it is very hard to know what measures to take, or whether it will even be a problem at all, as I shall discuss in the next section.
THE INEVITABLE CONFLICT | A NEW CONTEXT
In the short story ‘The Evitable Conflict’, Asimov tells (through Byerley, the high political figure suspected of being a robot) of the cyclical nature of the world, noting how the weak (whom I would deem the Slave in this instance) go through an organic crisis and force the strong (the Master) to fight back. Unlike in Hegel, though, in the end neither wins. In fact, it mirrors Connolly’s conception of identity as relational, and as such, what the fight of conquest and conversion actually does is create a completely new context. ‘It no longer seemed so important,’ Byerley concludes about the last war, which centered on economic philosophies, ‘whether the world was Adam Smith or Karl Marx. Neither made very much sense under the circumstances’. Finally, Byerley brings his thoughts and the book to a close, saying that conflicts are ‘evitable’ and that only the machines were ‘inevitable’.
There are many things that could be said about this ending, many of which would seem to discount security from this discourse altogether. At the conclusion of the book, superintelligence is ordered to find a solution for humans, and humans obey the orders of the superintelligence. Essentially, using robots to help humankind equally sublates the other, and it becomes almost an end-of-history scenario. However, as the book also notes that conflicts are evitable, it suggests that there will still be humans who fear the robots and try to subvert the robots’ will. This, no matter how philanthropic AI may be, is also a likelihood. There may always be fear of the robot. What new context will arise from it, however, is a mystery.
If indeed robots are inevitable, ‘over the next few generations, we’ll have to face the problems they pose’, which is why roboticists and roboethicists turn to Friendly AI, which ‘refers to the production of human benefiting, non-human harming actions in Artificial Intelligence systems that have advanced to the point of real world plans in pursuit of goals’. As discussed above, it is fear that creates such a strong desire to maintain rules and codes that attempt to ensure an AI that will not overthrow its human masters. As articulated by Bostrom, ‘the risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy’, or, more plainly, of failing to order it to do good for man and maintain the status quo between robot and human. This excludes, of course, the use of superintelligent robots to do military bidding in states’ conflicts with other states; that, however, is beyond the scope of this paper and will not be discussed in further detail.
As something to be securitised, AI, as discussed above, can be seen in two distinct ways: societal or military. As this paper largely focuses on AI and identity, society as a referent object is the more important. Because of this, unfortunately, ‘it is extremely difficult to establish hard boundaries that differentiate existential from lesser threats’, because ‘collective identities normally evolve and change in response to internal and external developments’. As such, it would make sense that all the fear of AI going around now may in fact give way to an entirely new context, as Asimov suggests. Perhaps there will be nothing to securitise at all.
I have discussed many things regarding AI in this paper, and so it comes time to wrap things up. Although AI, or true AI, is not necessarily a risk right now, there are many who recognise its potential threats. Asimov’s fears, which prompted his creation of the ‘Three Laws of Robotics’, prove telling about what we fear and how, in many ways, we try to resolve those fears through securitisation.
And while SF is comprised of perfectly fictional situations, there are still lingering doubts that something similar may yet come to pass, as the episode of 20/20 clearly shows. More importantly, it shows a fear that is largely situated in a Master/Slave dialectic. In order to understand this fear, of course, discourses of identity and difference must be brought to bear. What becomes clear is that in order to maintain one’s position as Master, one must create an identity for the Other, and control and maintain it as one’s Slave. The three laws, Kaku’s idea of an Asimovian chip, and the moves toward ‘Friendly AI’ all engage with this attempt to make the robot intelligible to humans, destroying it if it does not fit the mould. This is how AI is securitised.
 Jutta Weldes, To Seek Out New Worlds: Science Fiction and World Politics (New York: Palgrave Macmillan, 2006): 2.
 Jutta Weldes, To Seek Out New Worlds: Science Fiction and World Politics, 2-3.
 Although conflating the term Robot with AI is hardly canonical in current scientific discourse, this paper will use it because it constitutes a generalised identity, and it is also the term Asimov uses.
 Isaac Asimov, I, Robot (New York: Granada, 1968): 43.
 Singularity Institute for Artificial Intelligence, INC, ‘Creating Friendly AI’, Singularity Institute of Artificial Intelligence (http://www.singinst.org/upload/CFAI//INIT.html, 29 April 2010).
 Alexandre Kojève, Introduction to the Reading of Hegel (Ithaca: Cornell University Press, 2006): 3-5.
 Kojève, Introduction to the Reading of Hegel, 7.
 Georg W.F. Hegel, ‘The Phenomenology of the Mind’, Marxists Internet Archive (http://www.marxists.org/reference/archive/hegel/works/ph/phba.htm, 14 April 2010).
 Kojève, Introduction to the Reading of Hegel, 8.
 Kojève, Introduction to the Reading of Hegel, 21.
 Simra Singh, ‘The Purpose of Robots’ Scienceray (http://scienceray.com/technology/the-purpose-of-robots/, 1 May 2010).
 Kojève, Introduction to the Reading of Hegel, 3.
 Hegel, ‘The Phenomenology of the Mind’.
 William Connolly, Identity/Difference: Democratic Negotiations of Political Paradox (Ithaca: Cornell University Press, 1991): 64.
 Connolly, Identity/Difference: Democratic Negotiations of Political Paradox, 17.
 Connolly, Identity/Difference: Democratic Negotiations of Political Paradox, 43.
 Connolly, Identity/Difference: Democratic Negotiations of Political Paradox, 66.
 20/20, ‘The Last Days on Earth’, Youtube (http://www.youtube.com/watch?v=fSYgxgucxRc, 1 May 2010)
 Barry Buzan, et al, Security: A New Framework for Analysis (Boulder: Lynne Rienner, 1998): 21.
 20/20, ‘Last Days on Earth’.
 William F. Clocksin, ‘Artificial Intelligence and the Future’ Philosophical Transactions of the Royal Society 361 (2003): 1723.
 Clocksin, ‘Artificial Intelligence and the Future’, 1726.
 Clocksin, ‘Artificial Intelligence and the Future’, 1730.
 Buzan et al, Security: A New Framework for Analysis, 24.
 Buzan et al, Security: A New Framework for Analysis, 25.
 Buzan et al, Security: A New Framework for Analysis, 36.
 Stefan Lovgren, ‘Robot Code of Ethics to Prevent Android Abuse, Protect Humans’, National Geographic (http://news.nationalgeographic.com/news/2007/03/070316-robot-ethics.html, 28 April 2010).
 The Screensavers, ‘Michio Kaku on Artificial Intelligence’, Youtube (http://www.youtube.com/watch?v=PW8rgKLPHMg, 9 May 2010).
 Noah Shachtman, ‘Israel Eyes Thinking Machines to Fight “Doomsday” Missile Strikes (updated)’ Wired (http://www.wired.com/dangerroom/2008/01/israel-thinking/, 10 May 2010).
 Gavin Knight, ‘March of the Terminators: Robot Warriors Are No Longer Sci-fi but Reality. So What Happens When They Turn Their Guns on Us?’ Daily Mail (http://www.dailymail.co.uk/sciencetech/article-1182910/March-terminators-Robot-warriors-longer-sci-fi-reality-So-happens-turn-guns-us.html, 9 May 2010).
 Asimov, I, Robot, 185.
 Asimov, I, Robot, 206.
 Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Viking, 1999): 218.
 Singularity Institute for Artificial Intelligence, INC, ‘Creating Friendly AI’, Singularity Institute of Artificial Intelligence (http://singinst.org/upload/CFAI/challenge.html, 27 April, 2010).
 Nick Bostrom, ‘Ethical Issues in Advanced Artificial Intelligence’ NickBostrom.com (http://www.nickbostrom.com/ethics/ai.html, 18 April, 2010).
 Buzan et al, Security: A New Framework for Analysis, 23.