You might recognise his name but struggle to remember who he is. Perhaps you have heard it in lectures, or the only thing you know about him is that he looks a bit like Benedict Cumberbatch. In short, Alan Mathison Turing is the father of computer science. His contributions profoundly shaped theoretical computer science: he gave us the formal concepts of the algorithm and the computer as we know them today, while saving millions of lives by deciphering the supposedly undecipherable Enigma, which is thought to have shortened the Second World War by at least a few years.
Can machines think?
The list of Turing’s achievements goes on and on, and exploring every single one would not fit in the 32 pages of this magazine. However, there is one particular paper that makes him the focus of this article: “Computing Machinery and Intelligence”. In 1950, Turing posed a fundamental question: “Can machines think?” To an untrained or inattentive mind, the question might have seemed trivial at the time, when computers were nothing but loud calculators larger than humans; the answer would likely have been “no”. Ask the same question today and the answer seems equally straightforward, only this time it is an obvious “yes”.
But before we delve into whether computers can imitate us convincingly enough to unsettle our understanding of reality today, let’s have a look at what Turing considered a certificate of thinking for machines.
The Imitation Game
The Imitation Game, also known as the Turing test, is a test designed to answer this question. In his paper, Turing describes it as follows:
“The new form of the problem can be described in terms of a game which we call the “imitation game.” It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either “X is A and Y is B” or “X is B and Y is A.” The interrogator is allowed to put questions to A and B. […] We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?””
Turing also establishes some ground rules for this setting. For example, participants do not respond verbally: their answers must be typewritten and handed to the interrogator, who remains in a separate room.
In light of this description, do you think machines can think? Turing himself made a prediction. Back in 1950, he estimated that by the year 2000 computers would be powerful enough that, after five minutes of conversation, the interrogator would “not have more than 70 per cent chance” of correctly guessing whether the participant was a human or a machine.
Have you met Eugene Goostman?
He is a 13-year-old Ukrainian boy. Or at least that’s what it wants you to believe. In reality, Eugene Goostman is a chatbot. While there have been numerous chatbots throughout history, Goostman stands out because he is said to be the first program to pass the Turing test.
On the one hand, this seems like a remarkable accomplishment. On the other, the alleged success has drawn significant criticism. On 7 June 2014, at a contest where chatbots competed on their performance in the Turing test, Goostman successfully deceived 33 per cent of the judges, which led the media to promote it as the first program ever to pass the test. The main problem with this claim, however, is that convincing 33 per cent of the judges is not equivalent to passing the test. It merely agrees with Turing’s prediction from 1950 that the average interrogator would have at most a 70 per cent chance of identifying the machine correctly: fooling the judges a third of the time means they were still right 67 per cent of the time. Furthermore, it is arguable whether the Turing test was ever intended to be executed in practice, or was merely constructed as a thought experiment. Turing does not specify many details of the Imitation Game in his paper, for example the types of questions an interrogator is allowed to ask, adding to the idea that the test was primarily a thought experiment.
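Turing’s 70 per cent figure and Goostman’s 33 per cent are two views of the same quantity, which is easy to miss on a first read. A quick back-of-the-envelope check (a sketch, using only the percentages quoted above) makes the relationship explicit:

```python
# Turing's 1950 prediction: after five minutes, the interrogator has
# "not more than 70 per cent chance" of identifying the machine correctly.
turing_max_correct = 0.70

# Eugene Goostman (7 June 2014): it deceived 33 per cent of the judges,
# so the judges identified it correctly the remaining 67 per cent of the time.
goostman_deceived = 0.33
judges_correct = 1 - goostman_deceived

# Goostman's result is consistent with Turing's prediction...
assert judges_correct <= turing_max_correct

# ...but "fooled a third of the judges" is a much weaker claim than
# "indistinguishable from a human", which is the critics' point.
print(f"Judges correct: {judges_correct:.0%}")
```

In other words, Goostman cleared the bar Turing happened to predict, not a bar anyone agreed defines “passing”.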
Eugene Goostman may not be the perfect program capable of passing the Turing test. Nevertheless, we can claim that the development of these modern chatbots coincides with the beginning of a new age of computing: the birth of artificial intelligence.
Are machines intelligent?
The transition from asking “Can machines think?” to impertinently claiming that we can build intelligent machines has been remarkably fast. Although the term artificial intelligence was introduced by John McCarthy as early as 1955, its integration into our daily lives is a relatively recent development. The most prominent outcome of this evolution is without doubt ChatGPT, which raises the question: can ChatGPT pass the Turing test?
As before, the answer depends on what counts as passing the test, but here are some statistics to consider. In May 2024, computer scientists from the University of California, San Diego published a paper titled “People cannot distinguish GPT-4 from a human in a Turing test”. Even though the title gives away the answer, it is surprising that GPT-4 achieved a success rate of only 54 per cent, meaning it convinced just over half of the interrogators that it was human. On its own this might not seem like a striking result, but considering that humans themselves only hit a success rate of 67 per cent in the same test, a test created to distinguish humans from computers, ChatGPT’s score is impressive. At this point, is the question really whether a machine can trick us into believing it is human, or whether a human being can convince us that it is indeed human?
What now?
When it comes to artificial intelligence, numerous questions arise: What distinguishes a “real” poem from a poem written by an AI model? Is there even a difference? Will this new form of computational intelligence alter our perception of reality once and for all? Is artificial intelligence as real to us as the Creature is to Frankenstein, or is it as fictional as Frankenstein to Mary Shelley?
I don’t know the answers to these questions, and I don’t know if the Creature would be able to draw Mary Shelley. But I do know this: ChatGPT can draw Alan Turing.