By Midierson Maia, PhD

The article Computing Machinery and Intelligence, published by Alan Turing in October 1950, is an important milestone in computing theory and in the history of artificial intelligence. The article starts by introducing the Imitation Game, widely known as the "Turing Test". Through this test, Turing aimed to discover whether it was possible for a machine "to think" as human beings do.

Turing (1950) argues:

I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think”. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words (Turing, 2003, p. 433).

The excerpt shows Turing trying to understand the meanings of the words "machine" and "think". Turing opened his article by encouraging readers to perform a semantic and philosophical exercise on these words. This is a philosophical point of view because the topic of the "body-machine" is part of Descartes's thought, appearing in his books Discourse on the Method (2017 [1637]) and Passions of the Soul (2017 [1649]).

According to Murta and Falabretti (2012, p. 76), Descartes's mechanistic point of view explains the human body as a machine. From clocks to hydraulic systems, Descartes modelled his comparisons with the functioning of the human body on machines. It is important to remember that, in the 17th century, Descartes faced serious limitations in explaining the human body from a mechanistic position. Although his theory has been surpassed by newer theoretical models, this does not make Descartes irrelevant.

Swimming against the Cartesian tide, the physician Randolph Nesse (2016), a scientist from the University of Arizona, criticizes Descartes, saying: "The body isn't a machine", and justifies: "Machines are products of design, bodies are products from natural selection, and it makes them different in fundamental ways. The organic complexity of bodily mechanisms is qualitatively different from the mechanical complexities of machines" (Nesse, 2016, p. 1).

Nesse argues that biological bodies and machines fail for different reasons. If machines are designed rather than naturally selected according to Darwinian logic, it becomes difficult to establish a comparison between a machine and any biological body. The comparison would be valid only from a religious perspective, assuming the existence of God, who could play the role of a "designer" of biological bodies, which is not the purpose of this article.

Also, Randolph Nesse (2016, p. 1) states: "Bodies have parts that may have blurry boundaries and many functions and are often connected to each other in ways that are hard for human minds to fathom." In this case, the Cartesian thesis is contradicted at its core by a logical approach. The evidence of sophisticated processes within biological bodies is clear. An example is the immune system of bacteria, which fights viruses through the mechanism known as CRISPR/Cas.

Through CRISPR/Cas, fragments of genetic material are used by the organism in its battle against viruses and other threats. There is no specific programming that dictates what a particular bacterium or immune system cell will do: the system adapts to the context, under specific conditions. Regardless of the biological body's sophistication, it is still very difficult to predict a total body failure that leads an animal to death. This reinforces Nesse's argument: "Bodies have parts that may have blurry boundaries." If biological bodies were machines, such as a car or a computer without artificial intelligence, we could simply replace a dysfunctional part and everything would work relatively well, and it would be possible to predict the functioning and lifespan of the replaced part. Machines, if maintained, can last forever, but biological bodies, in their natural state, cannot.

The previous arguments reinforce the semantic and philosophical analysis made by Turing (1950) at the beginning of Computing Machinery and Intelligence. Extending the analysis to the second term, "think", and classifying the biological body and the machine as distinct entities, the act of thinking appears as an act singular to the biological body, since the exercise of thought is unpredictable. Thinking can be neither predicted nor programmed.

Looms and cars have no capacity for decision-making. Even smart cars can be programmed, which makes them predictable. Even if a vehicle obeys a random programming order, it will rarely exhibit unexpected behavior outside the programmers' control, unless a design flaw affects the machine. It should be emphasized that the dimensions of artificial intelligence have not yet been included in this analysis: the smart-car example does not involve machine learning, data-driven methods, deep learning, or computer vision. The purpose of this paper is to address these latter topics. To do so, Turing's text was chosen, because it is one of the most important references for computing and artificial intelligence.

Considering the arguments raised so far, it is possible to draw a preliminary conclusion regarding Turing's questioning of the meanings of "machine" and "think". If the meaning of "machine" refers to something mechanically and electronically planned by designers, then it is impossible to understand the machine as an entity able to think, because thought is unpredictable and dynamic, escaping human control.

An example of unpredictability can be seen in religion. No matter how fanatical people are, there is no explicit assurance that they will maintain eternal loyalty to religious values. Everything depends on a specific context, involving the other individuals present in it. Machines do not act according to contexts. From an anthropological perspective, contexts are made of meanings drawn from cultures; therefore, machines do not think. Machines are machines. They cannot be compared to biological bodies.

Artificial Intelligence and its Implications

It is clear that an ontological challenge was posed by Alan Turing in the first paragraph of Computing Machinery and Intelligence, by coining the famous question "can machines think?". To conclude whether a machine can really think, a deep dive into the roots of the problem was necessary, proposing an analysis of the meanings of the words "machine" and "think".

To understand the outcomes of artificial intelligence, especially in the 21st century, it is necessary to set aside the classic concept of "machine", because a machine, as such, cannot think. Linguistics, semiotics, and psychoanalysis show that, in terms of human perception, when meanings are lost, people have to reorder their signification, reshaping meanings according to the context. This can be exemplified by the game Taboo. If the word "machine" is eliminated from the analysis of artificial intelligence, what remains? How can one deal with artificial intelligence without at least mentioning that word? The point is: if machines do not think, then they should be kept out of the analysis proposed by this article. It will therefore be necessary to use a new ontological model: a model that includes artificial intelligence thinking autonomously and unpredictably, as biological bodies do, not only through the brain but as a whole, including the immune system.

Considering artificial intelligence, deep learning, and cognitive computing as references, terms such as "android" and "mutant" (hybrid beings) are more useful than the word "machine". When a biological body receives a pacemaker, or a prosthesis integrated into the central nervous system and receiving commands from it, the concept of machine should be replaced.

When a device is adapted to and synchronized with the biological body's commands, the concept of machine is no longer useful, because the device becomes part of a whole system, and part of processes that sustain different forms of life. Devices such as the pacemaker play the role of organs and even cells.

If an entity made of organic and inorganic matter, including artificial intelligence (with high processing power based on neural networks, pattern recognition, and deep learning), achieves a natural language processing level, the world will face a new being. It will be a new species, able to think, feel, and, ultimately, able to wish.

After performing this analysis of the meaning of the terms "machine" and "think", the main point of this analysis is: "can a machine wish?". Will machines be able to wish? Linguists and psychoanalysts such as Saussure and Jacques Lacan show how thought and wishes are linked. If machines do not think, they do not wish, because wishes and desires depend on thoughts. In a machine, thoughts do not take place; and if thoughts do not take place, language will not take place either.

If language does not take place in machines, wishes and desires are not possible, because language and desire are linked. However, by surpassing the concept of machine and being open to recognizing "new AI entities" working with artificial intelligence, deep learning, and computer vision, everything changes, and new possibilities open up for this analysis.

Sophia and her creator, David Hanson

The robot Sophia, developed by Hanson Robotics, is an example of how AI entities can be understood in the new technological scenario. Sophia is one of the most emblematic humanoids ever created. Sophia's creator, David Hanson, has spared no effort in making Sophia the biggest asset of his company. Sophia represents a considerable advance in artificial intelligence technology, pushing at the tenuous frontier dividing the concepts of animate and inanimate life.

The article Everything You Need To Know About Sophia, The World's First Robot Citizen, published by Forbes on November 7th, 2017, focuses on "robotic rights". Laws were created to ensure that human equality and justice may be safeguarded through a code. There are no legal standards regarding relations between things, such as machines, sticks, and stones. Legal standards and morals have been created to set limits on the human use of inanimate things (stones, knives, etc.), such as not stabbing or stoning someone. There is no guarantee of how artificial entities will use sticks, knives, and stones.

Sophia was created to be an artificial entity whose behavior is not predictable, but, for now, Sophia is a marketing piece for Hanson Robotics. Due to technical limitations, Sophia should not yet be considered an artificial entity; it is currently a predictable and limited machine. Upcoming artificial entities will act according to the context and the references they receive. The issue of "robotic rights" may be discussed ethically in a scenario in which artificial intelligence is evolving rapidly, revealing a situation both frightening and exciting, given the countless possible uses for artificial intelligence. Sophia's first appearance in newspapers around the world caused a dystopian panic. There is a fear of AI machines acting without human control, reaching a point at which they could rebel against human beings and cause a mass slaughter, as shown in dystopian films and series such as Terminator and Black Mirror.

Considering the fear caused by dystopia, it is important to look at this feeling scientifically, seeking to understand what lies beneath it. Based on psychoanalysis, it is possible to perceive that the fear of artificial intelligence is linked to the fear that machines could "wish" to destroy humanity. Taking the analysis to an optimistic point of view: could machines want to save mankind rather than destroy it? Based on advanced projects in robotics and artificial intelligence, such as Sophia, it can be said that both perspectives could soon become real.

John Connor and the T-800: a representation of what an artificial entity can be.

The movie Terminator 2 shows how the same character played by Arnold Schwarzenegger in the first Terminator returns to defend John Connor from a massive slaughter. The movie presents several situations in which it is possible to observe intelligent "machines" (not machines, but entities) making sophisticated decisions, comparable to the human decisions made by Sarah Connor and her son John Connor. Throughout the narrative, there are several moments in which humans and artificial entities (seen in the film as machines) share meanings through cultural dimensions, which are necessary to sustain culture and support wishes, thoughts, and desires.

In the same movie, John Connor demonstrates a strong feeling for the T-800 entity (the Terminator), played by Schwarzenegger. He cries when the robot decides to exterminate itself in a cauldron of liquid metal. It is important to emphasize that suicide is an action performed by human beings. Suicide is an act that arises from symbolic and imaginary disorders. From psychoanalytic perspectives, suicide can be caused by a conflict with reality, based on a lack of the meanings necessary to sustain a reason for living. If an upcoming AI entity, based on a non-programmed decision, decides to extinguish itself, then that entity will be very close to human nature.

It is important to highlight that the analysis above deals in possibilities, but it is useful for perceiving that the development of artificial intelligence is evolving towards the creation of a being like us. Research on deep learning and computer vision, based on pattern recognition, seeks to develop entities able to think and feel as human beings do.

The philosophers Nick Bostrom and Eliezer Yudkowsky, in the article The Ethics of Artificial Intelligence, published in 2014 in The Cambridge Handbook of Artificial Intelligence, argue: "Two criteria are commonly proposed as being importantly linked to moral status, either separately or in combination: sentience and sapience (or personhood)" (Bostrom & Yudkowsky, 2014, p. 322).

According to the authors, sentience is the capacity for phenomenal experience, or "qualia"; in other words, the capacity to feel pain and suffering. If an entity (not a machine) enabled by artificial superintelligence feels pain, then the entity has sentience. Sapience, in turn, means the ability of an entity to recognize itself, as human beings do.

A fictional example of artificial entities able to hold sentience and sapience are the characters of the HBO series Westworld. Characters such as Maeve and Dolores Abernathy are almost human. They feel pain and suffer. They have dreams, memories, and desires. The example is fictional; however, in terms of AI development, what major tech companies are doing now is evolving AI along the same pathway.

Following the analysis of artificial intelligence outcomes, it is important to remember that some technologies made by human beings initially have a neutral state. This means that the technological outcomes depend on the intended use. The same happens with artificial intelligence. It is important to remember that, although not in exactly the same way, the dystopias in Terminator, Black Mirror, and Westworld can be achievable. It will depend on how, and by whom, such technology is manipulated, and on which moral status the new artificial entities will carry.

Be Right Back is the Black Mirror episode that presents a dystopic context in which the character Martha orders a copy of her boyfriend Ash (who died in a traffic accident) from a start-up specialized in artificial intelligence. Based on Ash's browsing data on social networks, the company sends Martha an AI entity running software that simulates the boyfriend's social media behavior, but in real life.

In the episode, Ash's avatar feels fear and cold, and is also able to wish. The episode shows that the formation of desire was only possible by obtaining a copy of Ash's actions (language) from his social network records. It is important to remember that, in the series, Ash's actions in real life are based on his actions on his social media profile. Artificial intelligence, in this case, is what makes Ash's avatar possible. To create the avatar, machine learning and natural language processing (NLP) were deployed to simulate human natural language.
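The idea of learning someone's verbal habits from their posts can be sketched, at a vastly simplified level, with a bigram model: count which word most often follows each word, then generate text greedily. The Python toy below uses invented posts and is nothing like the deep models such a fictional start-up would need; it only illustrates the principle of extracting language patterns from social media records.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a person's social media posts (all invented).
posts = [
    "i love this song so much",
    "i love this song",
    "i love rainy days",
]

# Learn, for each word, how often each other word follows it (a bigram model).
followers = defaultdict(Counter)
for post in posts:
    words = post.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

def continue_phrase(start, length=4):
    """Extend a phrase by repeatedly taking the most frequent next word."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # no recorded continuation: stop generating
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_phrase("i"))  # -> "i love this song so"
```

Even this trivial model "sounds like" its training data, which is the kernel of the episode's premise: the more records available, the closer the imitation.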

Allowing for some overstatement, the example can already be seen in chatbots and social bots, which learn from human interactions on social networks. The term "bot" comes from clipping the word "robot". The term is not new; after being applied to social network contexts, its meaning was reshaped. Like any software built on algorithms, each bot has a function and a purpose. There are bots legitimately made to be used by companies for customer service purposes. There are also social bots used to simulate other types of human action on social media.
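The simplest customer-service bots of the kind mentioned above are little more than keyword rules. The sketch below (rules and replies are invented for illustration; commercial chatbots layer NLP and machine learning on top of such rules) shows the basic mechanics:

```python
# A minimal rule-based customer-service bot: keyword matching only,
# no learning. All intents and replies are invented examples.
RULES = [
    ({"refund", "money"}, "I can help with refunds. What is your order number?"),
    ({"hours", "open"}, "We are open from 9am to 6pm, Monday to Friday."),
    ({"human", "agent"}, "Transferring you to a human agent now."),
]
FALLBACK = "Sorry, I did not understand. Could you rephrase?"

def reply(message):
    """Return the answer of the first rule whose keywords appear in the message."""
    words = set(message.lower().split())
    for keywords, answer in RULES:
        if words & keywords:  # any shared keyword triggers the rule
            return answer
    return FALLBACK

print(reply("I want my money back"))  # refund rule fires
```

A social bot differs not in mechanics but in intent: the same matching logic, pointed at public conversations instead of support tickets, starts to simulate a human participant.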

Social bots are not used to manage content on networks, but rather to deceive users. Social bots are often made to act like humans, encouraging them to produce a certain type of content. Although social bots do not manage content, they may influence a social media user's perspective on reality and their wishes, driving actions.

By impacting the content produced by humans and changing its meanings, social bots can reshape people's interactions. Depending on the influence these bots exert, the impact can grow. The level of impact will depend on the number of social bots controlled by botmasters (humans who spread and control large numbers of social bots on networks).

When the Turing Test is analyzed according to its purpose, is it possible to say that the test could be beaten by these "actors" simulating human actions on social media? From this perspective, it becomes clear that social bots can deceive consumers and voters around the world, driving not only their wishes but also their fears.

The Cambridge Analytica scandal is a case of data theft and fake Facebook profiles. The case is useful for showing how humans can be deceived and manipulated by algorithms. From this perspective, it can be said that artificial intelligence has already given several indications of how capable it is of beating the Turing Test, playing with the desires of millions of people.

Intelligent, Artificial, and Cognitive Entities

Psychology and neuroscience help us understand how the human brain works to recognize patterns previously recorded through contextual experiences undergone by individuals in their environment. Linguistics is another area that contributes to understanding how an individual, through language, conceives reality according to the production of meanings. Language is used by individuals to support themselves in the world; language also works by shaping realities. It is important to highlight that wishes and desires come from the interaction between people and reality.

Lacanian psychoanalysis is useful for understanding how patterns influence behavioral models, wishes, and fantasies, shaped by language in the social sphere. Social relations, wishes, fantasies, desires, and behaviors arise from symbolic structures that exist before individuals are born and that are shaped according to social norms and values. These structures serve as a support for individuals to conceive their perception of reality.

Developments in artificial intelligence, including computer vision techniques and pattern recognition, continue towards the imitation of the human brain, performing not only simpler operations, such as mathematical calculations, but also simulating how humans learn and understand their realities. The awakening of artificial entities able to wish may happen when entities achieve the capacity to recognize themselves and process meaning as humans do.

A simple search on Google Scholar shows that the term "pattern recognition" is becoming more widely used by researchers in their scientific articles. It is present in works related to computer science and robotics, extending also to the psychology of learning, which demonstrates how this concept has been used in studies applied to artificial intelligence. These advances have aroused the interest of the human sciences, such as philosophy, psychology, and psychoanalysis.

Christopher Michael Bishop, a computer scientist and professor of computer science at the University of Edinburgh, has devoted significant work to pattern recognition and machine learning. His most important work, entitled Pattern Recognition and Machine Learning (Bishop, 2006), shows how the academic community, in association with large technology companies, has been dedicating time and resources to creating advanced computer vision systems involving pattern recognition.
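To make "pattern recognition" concrete for readers from the human sciences, here is a deliberately tiny sketch of one classic technique covered in textbooks like Bishop's: nearest-centroid classification. Each class is summarized by the average of its training points, and a new point is assigned to the class with the closest centroid. The 2-D data and class names below are invented for illustration.

```python
# Toy nearest-centroid classifier: learn one "prototype" per class,
# then label new points by the nearest prototype.
def centroid(points):
    """Average position of a list of (x, y) points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(labelled):
    """labelled: {class_name: [(x, y), ...]} -> {class_name: centroid}"""
    return {label: centroid(pts) for label, pts in labelled.items()}

def classify(model, point):
    """Assign the class whose centroid is closest to the point."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], point))

model = train({
    "cluster_a": [(0, 0), (1, 0), (0, 1)],
    "cluster_b": [(5, 5), (6, 5), (5, 6)],
})
print(classify(model, (1, 1)))  # -> cluster_a
```

Real systems replace the 2-D points with high-dimensional features extracted from images or text, but the underlying idea, matching new input against patterns learned from examples, is the same.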

Taking this analysis to an empirical dimension, it is possible to highlight cases in which artificial intelligence can already recognize, at deep levels of recognition and interpretation, details of graphical images containing signs previously constructed by human beings.


The Microsoft Seeing AI project shows the power of computer vision working through pattern recognition. The application was created to help blind users recognize various types of objects in front of them. The menu is simple and provides only the functions users need. The application can also be used to read text in documents. The acts of the machine can be compared with the acts of the human brain. It is an initial condition for the creation of a computational system intelligent enough to understand and process signs and language. This is a new scenario, in which artificial entities could become capable not only of imitating their creators in elementary tasks, but also of processing reality as humans do.
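At its most elementary, recognizing an object in an image amounts to comparing pixel patterns against stored templates. The toy below uses invented 3x3 binary "images"; real applications such as Seeing AI rely on deep neural networks rather than literal templates, so this is only a sketch of the principle of matching visual input to known signs.

```python
# Toy template matcher: label a tiny binary image by the stored template
# it agrees with on the most pixels. Shapes are invented examples.
TEMPLATES = {
    "cross": ["010", "111", "010"],
    "box":   ["111", "101", "111"],
}

def similarity(image, template):
    """Count pixels where the image and the template agree."""
    return sum(a == b
               for row_i, row_t in zip(image, template)
               for a, b in zip(row_i, row_t))

def recognize(image):
    """Return the label of the best-matching template."""
    return max(TEMPLATES, key=lambda name: similarity(image, TEMPLATES[name]))

noisy_cross = ["010", "111", "011"]  # a cross with one flipped pixel
print(recognize(noisy_cross))        # -> cross
```

The tolerance for the flipped pixel is the essential point: recognition is never exact matching but a judgment of "closest known pattern", which is also, loosely, what the brain does.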

The article Artificial Intelligence and Sign Theory (1989), published by Jean-Guy Meunier, is an interdisciplinary effort to bring together artificial intelligence, philosophy, and semiotics. Meunier highlights the importance of interdisciplinary approaches and presents the data necessary to show how artificial intelligence is no longer an exclusive topic of STEM programs. The author also emphasizes how important it is to keep attention on the "manipulation" of meanings that are part of communication processes between humans and artificial entities (currently seen by most researchers as "machines").

Meunier (1989) argues:

What is the interpretation to be given to these symbols that are ‘manipulated’ by a computer in an AI system. The answer here seems to be related to the representational function these symbols play in the processing system. According to Haugeland (1986: 28) they represent something of the outside world or to Newell and Simon (1976) an internal process of some kind by which some action is undertaken (Meunier, 1989, p. 8).

Artificial intelligence (based on algorithms that simulate human communicative actions in the digital environment) is impacting communication and language. AI systems are developing within daily social practices. This helps researchers understand how social media users perceive neither the presence of AI nor the social outcomes caused by social bot interference in social media conversations. When bots affect the production of meanings, they can participate in conversational acts, engaging human and artificial entities in shaping (and reshaping) reality, where desires are made according to meanings and values.


The emergence of machines able to simulate the human brain is notable. Although machines can simulate the human brain, this does not mean that machines can think or wish. The development of AI shows how humans are pursuing the creation of entities in our image and likeness, performing a range of tasks similar to human actions. The goal of AI tech companies is to create entities that can achieve advantages in certain tasks, such as recognizing a person's face in a crowd. The Chinese security system has shown the power of computer vision in such tasks when trying to find someone to arrest.

Reports on artificial intelligence in different media channels have served to inform and publicize the work of some start-ups, such as Hanson Robotics. However, the information created and repeated by the media is often exhausted in a vague debate about practical and market applications of artificial intelligence.

The analysis of technological outcomes, across technologies such as artificial intelligence, quantum computing, and genetic engineering, is something that only academia does. Dystopian films and series provide a perspective on how technology could affect the future. With some limitations, movies and series can help researchers imagine impacts and outcomes; however, the boundaries between fiction and reality must be kept in mind.

The movie Terminator, made more than thirty years ago, provides a perspective on how AI machines could be materialized. The machines created by Hanson Robotics and Boston Dynamics do not work like the Terminator. However, some features are similar, such as pattern recognition and computer vision.

To evaluate the social outcomes of AI scientifically, it is necessary to analyze the problem through the lens of the human sciences. Analyzing technologies through the lenses of anthropology, semiotics, psychoanalysis, philosophy, and sociology helps researchers keep their focus on the human being as the main issue. Approaching the issue through the human sciences helps scientists understand how human desires could become interlaced with the wishes of artificial entities, which will (soon) cease to be machines and become new hybrid species.

HBO's Westworld series demonstrates how a clash between the robots' wishes and human beings' wishes could take place. A future filled with artificial entities able to wish will only be possible if such entities reach an advanced stage, becoming capable of processing meanings and values as humans do. Besides, regarding the existence of wishes in such entities, it is important to emphasize that wishes and desires need objects to satisfy individuals.

Psychoanalysis offers a theoretical corpus for understanding the formation of wishes and desires; it also helps us understand the role of objects previously shaped by meanings. When artificial intelligence achieves its full development, becoming artificial superintelligence and moving towards artificial entities able to understand and process meanings, humans will finally get the answer to the question "Can a machine wish?". Meanwhile, all researchers have is a long avenue to be paved with interdisciplinary scientific approaches, which must necessarily include the human sciences in artificial intelligence studies.

In terms of AI research, science may pursue a better-defined course, trying to understand not only current machines, but also the upcoming artificial entities being made in our image and likeness, able to think and, above all, able to wish.

Midierson Maia is a Ph.D. in Communication, Professor, Entrepreneur, and founder of Preemptor AI Canada. Preemptor uses artificial intelligence to help students succeed through originality.

Midierson arrived in Canada in February 2019. In January 2020, his startup was approved for the Canadian Startup Visa Program, one of the most competitive permanent residence programs in Canada.

Currently, his start-up is being accelerated by Spark Centre, in Oshawa, ON, Canada.


Entrepreneur, Founder of Preemptor AI. PhD in Communication Studies. Professor at Humber College, Ontario, Canada.