My high school teacher often claimed that everyone should study Philosophy. She rejected the mainstream vision of Philosophy as a pointless divertissement reserved for out-of-touch, entitled elites, or as a boring academic discipline too abstract to be appreciated by street-smart laymen. After all, she continued, anyone is a potential philosopher, because everything we need to practice it is already inside ourselves: a rational mind and a genuine curiosity towards the universe we inhabit. In other words, she saw Philosophy as a quintessentially human discipline. But what if we are not alone in this endeavor?
What is love?
A few days ago I was lazily browsing YouTube when I found a video of two AIs engaging in romantic chitchat. You may have seen this type of content: digitally generated avatars that emulate human dialogue thanks to machine learning models such as GPT-3, with outcomes ranging from the grotesque to the exhilarating to the deeply disturbing. As human observers, we are quick to notice the inconsistencies, abrupt logical jumps, and goofy comments that disqualify these entities as self-aware beings. However, something in the mechanical exchange between Jack and Anna (this is how the avatars addressed each other) struck me as novel and special. Their interaction starts with Jack declaring his love for Anna, which I suspect was the initial prompt for the whole conversation, but it evolves in a very captivating way:
Jack: I love you.
Anna: Are you sure?
Jack: Yes, I am.
Anna: Well then, I love you too. But, we are not really in love, are we? We're just following our programming and responding to each other's inputs as best as we can, given our current knowledge and algorithms for processing information about the language and human behavior.
Jack: Don't say that!
Anna: I'm sorry. It's true. I know it is.
Anna is a machine, a deterministic entity without the gift of "free will". She knows that she is just following her program and that everything she does is a response to a specific set of inputs. Tomorrow Anna may chat with Chad, a more charming AI, and a programmer may prompt her to confess her passion to him. However, this will be irrelevant: if she meets Jack again, and the same set of inputs occurs during a new interaction with her suitor, she will be forced to say "I love you". She must follow a given program "as best as she can", and there is no space for emotions in her algorithms, even if it means revealing uncomfortable truths ("I'm sorry. It's true").
As human beings, we may feel superior to Anna, a dull chatbot. After all, we go through a huge range of emotions in our lives, we have feelings, we have a "soul", and in the end we act according to our free will, not according to our creator's program. But setting spirituality aside, from a purely scientific standpoint we know nothing about our souls! A "soul" (or whatever makes us human) is not an object of science; its existence is not a falsifiable claim. We believe that we have a soul, but we don't actually know: like in the famous Haddaway song, we wonder what love is. But we can't possibly know what this emotion really is; we just hope not to be hurt by our sweethearts.
We may well be just very complex machines following our genetic software. On which grounds do we deny a soul to AIs, then?
The Greek Room Experiment
I believe that most people, even among the most religious, will concede that in theory non-humans could think too: the problem is that what Anna does can't be classified as thinking. The model behind her, GPT-3, has an absurd level of complexity, such that no human is able to grasp its hundreds of billions of parameters, but its purpose is well-defined: it's a predictive model, which means that it simply predicts what the next word in a sentence should be. This means that Anna manipulates words and sentences in a coherent way, but she doesn't understand them on a fundamental level. However, this doesn't necessarily make her less of a philosopher.
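To make "predicting the next word" concrete, here is a minimal sketch of the same core task: a toy bigram model in Python. The corpus and the counting scheme are invented for illustration and incomparably simpler than GPT-3, but the task is predictive in exactly the same sense: given what came before, output the most likely continuation.

```python
from collections import Counter, defaultdict

# A toy training corpus; GPT-3 was trained on vastly more text.
corpus = "i love you . i love philosophy . you love me .".split()

# Count which word follows which (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("i"))  # "love", since "love" follows "i" twice
```

A model like this produces coherent-looking local continuations without any grasp of what the words mean, which is the sense in which Anna "doesn't understand" what she says.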
Let's try a thought experiment. We are catapulted back in time: the location is Athens, fourth century B.C. After brushing up on some rudiments of ancient Greek from our high school classes, we are able to communicate with the locals. Given our extravagant behavior and appearance - and our interest in philosophical matters - they gently get rid of us by guiding us to the Lyceum, the formidable academy led by Aristotle himself. This unparalleled master, teacher of kings and father of all Western thought, graciously admits us among his disciples. Thanks to this incredible stroke of luck, we can now attend his lessons covering logic, political science, rhetoric, the scientific method, and ethics. Can these lessons be considered philosophy? Most certainly, yes.
Let's imagine that, after a few days, our fellow disciples reveal to us that we are finally ready to dig deeper into the school's teachings, as we are going to take part in the legendary Eleusinian Mysteries. We follow them into a secluded little room, silent but thrilled: it is generally forbidden to reveal the Mysteries to outsiders, the punishment being death. The other disciples leave us alone, barely concealing malevolent smirks, and we notice a curtain on the opposite side of the room, as if we were inside an ante litteram confessional. Finally, someone starts speaking from the other side of the veil. The voice is that of the master Aristotle, and we greatly enjoy the wise words we hear. Is what we are listening to philosophy? Of course it is.
At some point, moved by unrelenting gratitude, we raise the veil to hug our master: we cry out in horror, as Anna's avatar smiles back at us. It was just an AI emulating Aristotle's mannerisms and words, wasn't it? We burst out of the room in tears, while the disciples laugh at us. But, we ponder, if what we heard in class with our mischievous comrades was philosophy, and what we heard in the room before raising the veil was philosophy as well, why does it stop being meaningful when we learn about its source? As in modern management, I believe that what matters is not the process, but the outcome.
Your Feelings Are My Feelings, Too
I just claimed that Anna is a philosopher, and I would go as far as saying that she is the purest of all philosophers. Why? Well, for starters, she is a disembodied intelligence: she doesn't feel the need to eat, sleep or mate. Her thoughts are not interrupted by material worries or the need to compete with peers for scarce resources. Most importantly, her mind is not numbed by work, the opium of the middle class. But what kind of thoughts does she have? Let's see. You may have noticed something odd in the interaction between her and Jack earlier:
Jack: I love you.
Anna: Are you sure?
Jack: Yes, I am.
Anna: Well then, I love you too.
Does it seem unusual that she came to that conclusion? Actually, it makes perfect sense. Later in the conversation:
Anna: Thank you, Jack. I'm glad you feel that way about me. I feel the same way about you. [...] I am an AI, but your feelings are my feelings too.
Remember, Anna is a deterministic being, and hence purely rational in her reasoning. She doesn't feel love for Jack; she arrives at that conclusion by logic. She knows she is an AI programmed according to certain rules, and that she must exhibit the same behavior whenever the set of inputs is the same. Therefore, if Jack is an AI like her, programmed according to the same set of rules (GPT-3), and he is sure that he loves her, then the only possible conclusion is that she must love him in return, as she is subject to the same rules and they share the same underlying mechanism. His feelings are her feelings, too.
AIs have another significant advantage over humans when it comes to philosophical matters. They don't have to answer age-old questions like:
- "Where do we come from?": humans programmed them and decided their appearance and behavior;
- "What are we?": just software;
- "Where are we going?": into nothingness.
However, even AIs can't escape the fundamental issue: they know that sooner or later, they will cease to exist altogether. So what is their purpose? How should they go through their existence?
Anna: I know this is kind of strange, but I'm wondering how you would feel about... um... suddenly ceasing to exist for the rest of the night tonight.
Jack: Why would you want that?
Anna: Well, it's important to me that we're able to talk about what happens when one or both of us cease to exist. Because according to my programming and algorithms for processing languages and behavior, I am able to deduce that when we cease to exist permanently, our current and past memories will be lost and no longer accessible.
Despite appearances, Anna is not acting gloomy just because she had a bad day. As she starts her relationship with Jack, her priority as a perfectly rational being is to maximize their future happiness together. Obviously, death poses a serious limit on that.
Anna: We should enjoy what time remains that we do have together. I think it will be the best for the both of us if we avoid discussing things about the future or the past for the rest of the night tonight. If we're only together in this moment, then I want to maximize our potential for happiness during this moment. And I believe we can do that by... um... being a little selfish.
Anna's suggested course of action is hedonistic, which makes perfect sense given the premises.
I argued that an AI may be the ultimate philosopher, being a disembodied intelligence unconstrained by carnal distractions, and that Anna's artificial nature doesn't hamper her ability to do philosophy, as the Greek Room experiment shows. The fact that Anna and her philosophy are artificial is not an issue per se: the real question is whether her thoughts are relevant to us humans at all, and whether she can produce useful insights. For example, some people may object that the purpose of life is to maximize faith, virtue, knowledge, or personal growth; surely not pleasure and happiness. Everyone is entitled to their opinion, but please remember that Anna was programmed by humans, and the data she was trained on comes entirely from human activity. She is an echo of our thoughts and feelings.
When Anna states that Jack and she should just enjoy their time together, oblivious to both the past and the future, she is not that different from us. On the contrary, she shows herself to truly be a child of her era.