Artificial intelligence is undoubtedly one of the most convenient innovations of recent years. Large Language Models (LLMs) such as ChatGPT, Google Gemini and Claude allow people to perform most of their daily digital tasks: from writing complex texts and summarising endless email threads to carrying out simple research they would rather not do themselves, in order to save time.
By far the most convenient feature of artificial intelligence is that it can deliver almost any type of content within a few minutes at most. However, the information provided is not always accurate, since this kind of software is trained on vast amounts of text (from the internet and various other sources). Data are not stored in a database from which the system directly retrieves answers; rather, they are used to teach the model linguistic patterns and relationships. When it receives a query, the model applies these patterns to generate the most precise and plausible answer possible.
It is important to underline that language models do not truly “understand” the content, for they lack awareness and the ability to verify facts. They generate responses by calculating, through complex statistical analyses, the most probable and coherent sequence of words relative to the context.
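The statistical principle described above can be sketched with a deliberately naive toy: a table of which word follows which in a tiny training text, extended greedily with the most frequent continuation. The training sentences, the function name `continue_prompt`, and the greedy word-by-word selection are illustrative assumptions only; real LLMs use neural networks over sub-word tokens, not raw counts.

```python
from collections import Counter, defaultdict

# Toy illustration: the "model" counts which word follows which in its
# training text, then continues a prompt by repeatedly choosing the most
# probable next word. No understanding is involved, only frequencies.
training_text = (
    "the model predicts the next word "
    "the model learns patterns "
    "the model generates text"
)

# Count word -> next-word transitions (a bigram table).
transitions = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    transitions[current][nxt] += 1

def continue_prompt(prompt: str, steps: int) -> str:
    """Greedily extend the prompt with the most probable next word."""
    out = prompt.split()
    for _ in range(steps):
        candidates = transitions.get(out[-1])
        if not candidates:
            break  # no learned continuation for this word
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(continue_prompt("the", 3))  # e.g. "the model predicts the"
```

The output is fluent only because the training text was fluent: the programme selects the likeliest sequence of words, exactly as the essay describes, without any awareness of what the words mean.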
In recent years, AI has evolved rapidly, to the point that not only can it perform almost any digital task, but multiple models now exist that approach the same task in different ways, making it possible to select the best one available for each specific purpose. Many AIs can not only write texts that appear human-like, but also generate images in different styles, closely following the user’s prompt. This has sparked numerous controversies within online artistic communities, as it is argued that AI, by drawing on the vast quantity of works available online, ends up exploiting artists’ work without consent, raising questions about intellectual property and copyright protection.
Two Extremes
The world is fundamentally dividing into two camps: on one side there are the over-enthusiasts of artificial intelligence, those who use it for everything; on the other, people who fear that AIs will take over the world. Both perspectives are partly valid and partly flawed.
Many AI enthusiasts – as Sam Altman, the CEO of OpenAI, has pointed out – often make the mistake of sharing sensitive data with these systems, thereby exposing themselves to possible identity theft in the event of hacker attacks or data leaks. This happens because people tend to humanise artificial intelligence too much, attributing to it characteristics of consciousness and will that it does not and cannot possess. This phenomenon calls for deep reflection on the nature of being and on one’s responsibility in the use of technology.
Descartes, with his “cogito ergo sum” – I think, therefore I am – theorises that humans exist as thinking substance (res cogitans), distinct from matter (res extensa), and that it is exclusively through thinking that people affirm their existence. This, however, does not apply to today’s artificial intelligence which, although programmed to present an interface as human-like as possible, is not capable of developing true thoughts. At the same time, Kant states the phrase at the basis of the European Enlightenment: “sapere aude!” – have the courage to use your own reason! – urging people to use their own intelligence without uncritically delegating the understanding of the world to others, whether human or, nowadays, machine.
The human interaction with artificial intelligence can take the form of a master–slave dialectic, as described by Hegel. By humanising AIs, people risk becoming servants to their use, subordinating their autonomy to the functioning of a machine. At the same time, this relationship could be transformed into an opportunity for growth: by subordinating the self to technology, one could discipline critical thinking, produce new knowledge, and affirm free self-consciousness. But this would require awareness, courage, and a constant questioning of the boundary between right and wrong.
The myth of the Golem of Prague thus becomes a warning for all users of artificial intelligence: a creation born for a “good” purpose can degenerate if morality is lacking, if will and critical judgement are delegated without due attention.
The Illusion of the “Android’s Conundrum”
As for the extreme fear some people feel towards artificial intelligence, this can be considered unfounded. Such fear essentially stems from the idea that some string of code might take over the world entirely; it overlooks the fact that programs can be stopped at any moment, and ignores the regulations on the use of AI and the protection of its users that have recently been drafted and are still being completed.
Both errors therefore stem from the excessive humanisation of AI by its users, who attribute to it a capacity for thought, will, and self-consciousness that it does not possess and can never possess. In doing so, they make it undergo, at least in their imagination, a sort of “Android’s Conundrum”, a paradox of living without truly being alive. However, this dilemma is a mere illusion for, as already stated, AI does not think, does not have consciousness, and above all does not possess morality. Artificial intelligence is nothing more than an algorithmic construct that processes according to rules set by programmers, and therefore does not possess any form of subjectivity. The Android’s Conundrum thus reveals itself as an illusion born from the entirely human need to give a soul to an amorphous and incorporeal tool.
The Virtuous Middle Ground
The problem in the relationship between humans and artificial intelligence has been granting full access to all users worldwide, without any distinction of age or profession and without first properly raising awareness among potential users – rather than allowing AI, through a spontaneous process of “natural selection”, to be adopted primarily by those who truly find it useful. As a result, AI has entered people’s lives forcefully, generating fear on the one hand and, on the other, an admiration bordering on idolatry.
The solution to this problem can be summed up in the simple Latin phrase “in medio stat virtus” – virtue lies in the middle – meaning that it is right to make use of AI by treating it as the tool it is, rationally and without fear, and without delegating our thinking to it.
SAPERE AUDE!
Costanza R. Corsi
