Have you ever heard of the Theseus Paradox? I mentioned it briefly a few articles ago, but in case you haven’t read them, its most commonly known form may jog your memory.
If you have a ship, and you keep replacing parts of that ship, when does that ship stop being the same ship? At what point is it a different ship?
This question has been asked in various forms, and today I’m going to give you this one:
If you have a human being, and you keep replacing parts of that being with mechanical ones, at what point does it stop being human? Do you think it’s when the heart is changed? The brain? The soul? OK, maybe that was a bit too philosophical, because I can tell you for a fact that contemplating what a person’s soul really is lies well outside my academic skill set. But this question is going to lead us quite nicely into today’s topic.
Artificial human beings. Humanoids, if you will. Do you think they’re human? Your virtual assistant isn’t human. Your phone isn’t human either, despite the fact that it probably knows more about you than your family, your friends, or even you do. But what if you gave it a human body? What if it not only talked and thought like a human but walked like one too?
I suppose this leans into what I briefly touched upon in my last article: just how willing we as a species are to accept change, and to accept that robots may be the future.
Thinking like a Human
Descartes came up with the phrase ‘I think, therefore I am.’ It’s a pretty ontological approach to life, and it essentially suggests that we exist because we think. So if thinking is what makes us us, then is your Alexa not already a living being? It thinks. What about your phone? That thinks too.
When you look at it that way, there are a lot of things in the world that think. Even chatbots, which may or may not be your worst enemy. Chatbots think about what you tell them and give you a solution based on a fixed pool of information they know. My issue with them, however (and I think this is most people’s issue too), is that they tend not to understand what you’re trying to say. In the end, you’re left telling the chatbot in every way possible to ‘put an adviser on’. If chatbots were much better at thinking and at answering what you actually tell them, they probably wouldn’t be so annoying half the time.
The human brain processes the information it is given and compares it to what it already knows or suspects before formulating an answer. Chatbots, on the other hand, seem to understand only certain specific sentences, and any slight variation leaves them about as helpful as a newborn baby.
Chatbots have an application layer, a database, and Application Programming Interfaces (APIs). They have conversation logs input by their developers, which allow them to recognize certain phrases. They do this by breaking the text up into phrases and picking out keywords, which they match against what they know. Once they have recognized the phrase given to them, they can work out which response they are supposed to give, also input by the developer. This means that the issue with them lies in their ability to comprehend what we give them.
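To make that pipeline concrete, here’s a minimal sketch in Python of the keyword-matching loop described above. Everything in it (the keywords, the canned replies, the respond function) is invented for illustration; a production bot would pull these from its database rather than a hard-coded dictionary, but the matching logic is the same in spirit.

```python
# A minimal sketch of the rule-based pipeline described above.
# All keywords and replies here are made up for illustration.

RESPONSES = {
    ("refund", "money back"): "I can help with refunds. What's your order number?",
    ("adviser", "advisor", "human", "agent"): "Connecting you to an adviser now.",
    ("hours", "open"): "We're open 9am to 5pm, Monday to Friday.",
}

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def respond(utterance: str) -> str:
    """Scan the text for known keywords and return the matching canned reply."""
    text = utterance.lower()
    for keywords, reply in RESPONSES.items():
        if any(keyword in text for keyword in keywords):
            return reply
    # Any phrasing outside the keyword list falls straight through,
    # which is exactly the brittleness described above.
    return FALLBACK

print(respond("Can I get my money back?"))  # matches "money back"
print(respond("I want recompense!"))        # no keyword, so fallback
```

Notice that ‘I want recompense!’ means the same thing as ‘Can I get my money back?’, yet it falls straight through to the fallback. That gap between meaning and keyword is the brittleness we’ve been complaining about.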
Humans also think based on the information we are given, but chatbots seem to work with an extremely limited breadth of it compared to us. Considered this way, chatbots are certainly not human. While they can think, they do so in such a limited capacity that I struggle to call it thinking so much as sorting information based on what they know.
But if we were to improve a chatbot’s ability to think, could we perhaps go as far as to consider it somewhat human? The usual way of evaluating chatbot performance is human evaluation, which can be expensive and difficult to gather. In one piece of research, the testers developed a scoring model to predict user satisfaction from utterance-response tuples. The test showed promise, and they noted that while it was carried out on FAQ-type chatbots, it could be generalized to capture the sequential nature of a conversation (the exact thing chatbots seem to struggle with) by building a more complex scoring model.
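As a rough illustration of that idea (and only an illustration: the toy data, features, and model choice below are my own assumptions, not the researchers’ actual setup), here is a sketch of scoring utterance-response tuples for predicted user satisfaction:

```python
# A hedged sketch: predict user satisfaction from (utterance, response)
# pairs. The data, features, and model are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy utterance-response tuples labelled 1 (user satisfied) or 0 (not).
pairs = [
    ("what are your opening hours", "we are open 9 to 5 on weekdays"),
    ("i want my money back", "here is an unrelated cat fact"),
    ("put an adviser on", "connecting you to an adviser now"),
    ("can you help me", "sorry, i did not understand that"),
]
labels = [1, 0, 1, 0]

# Join each tuple into one string so a bag-of-words model can score it.
texts = [utterance + " || " + response for utterance, response in pairs]

# TF-IDF features plus logistic regression: about the simplest possible scorer.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new exchange; a higher probability means predicted satisfaction.
new_exchange = "where is my order || your order ships tomorrow"
print(model.predict_proba([new_exchange])[0][1])
```

The appeal of such a model is that once it has been trained on enough human judgments, it can stand in for human evaluators, which is exactly what makes large-scale chatbot evaluation affordable.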
Alternatively, Jeesoo Bang created a chatbot centered on Example-based Dialogue Management (EBDM) with a personalization framework using long-term memory. (Basically, the chatbot in question was given a long-term memory so it could remember its previous conversations, much like how your brain may think back on previous conversations as it responds to a new one.) EBDM systems throw the rigid rules of rule-based systems out the window, giving the chatbot more wiggle room to learn like real Artificial Intelligence. Research from Shaikh et al. suggested that this type of chatbot holds a lot of promise for improving such systems overall. If we gave chatbots a better ability to think, as in these tests, do you think you would consider one human? Or is that still too much of a stretch of the imagination?
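Here’s a simplified sketch of what example-based dialogue management with a long-term memory might look like. The similarity measure, memory format, and example data are illustrative assumptions on my part, not Bang’s actual implementation:

```python
# A minimal sketch of example-based dialogue management with long-term
# memory. Similarity measure and memory format are illustrative only.
from difflib import SequenceMatcher

# "Example base": utterance -> response pairs from past conversations.
examples = [
    ("hello", "Hi! Nice to talk to you again."),
    ("what did we talk about last time", "Last time we talked about {last_topic}."),
    ("recommend me a film", "You like {favourite_genre}, so try Arrival."),
]

# Long-term memory: facts the bot has retained across conversations.
long_term_memory = {"favourite_genre": "sci-fi", "last_topic": "your holiday"}

def respond(utterance: str) -> str:
    """Pick the response attached to the most similar stored example."""
    best = max(
        examples,
        key=lambda ex: SequenceMatcher(None, utterance.lower(), ex[0]).ratio(),
    )
    # Fill the reply template from long-term memory so past
    # conversations shape the present one.
    return best[1].format(**long_term_memory)

print(respond("could you recommend a film?"))
# -> "You like sci-fi, so try Arrival."
```

The key difference from the keyword bot earlier is that nothing here is a fixed rule: the example base and the memory both grow as the bot converses, so its behavior drifts with its history, much like ours does.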
If it is, then what if we made it look like a human?
Human Form vs Free Will
Human-like robots are often known as ‘humanoids’ or ‘cyborgs’. They have free will, which is probably exactly what our robots are missing. We tend to create robots with a purpose, and we make them look whatever way best serves it. Take the ‘Can’t Help Myself’ robot, designed to scrape its leaking hydraulic fluid back towards itself to continue functioning (which is really, really sad when you consider that this robot’s whole existence was devoted to keeping itself existing). It simply had an arm with a scraping tool for a hand. It didn’t need any more or any less to perform its function; it looked exactly as it needed to in order to fulfill its purpose.
But when we consider it that way, do we need to give our robots human form at all? If robots are made to look a certain way so they can function efficiently for their purpose, and our purpose was simply to create a human-looking robot, could we even consider it human?
Picture the most human-like android you’ve seen. Is it a human to you? Somehow, I suspect you struggled to answer a definitive yes. And that’s kind of sad. Even if we went so far as to make it look like a human, you probably still wouldn’t consider it one. You don’t have to walk to be human, so it can’t be that you want to see it get up on its legs and walk. We crossed the talking threshold long ago, when you started talking to your phone and letting it respond. As for thinking, we covered that earlier too.
Earlier, we touched upon how chatbots should be given more room to think in order to be better at their purpose. Does the same apply to humanoids? Some define them as robots that are ‘self controlled by an electronic-artificial brain that has free will.’ And as we just touched upon, we always create our machinery with an end goal. Sometimes machines can think, but only in ways that aid us. When was the last time you told Alexa to do as she likes? Never. In our eyes, robots and machines are made to make our lives easier, not to live lives of their own. We’ve never really gone so far as to give any machinery actual free will. McCarthy noted that human-level AI needs a robot to reason about its past, present, and future; its thought processes need to be determined by its environment, not its internal structure and programming.
We’ve given them artificial intelligence (that is, we’ve allowed them to learn and adapt to situations), but we’ve never really allowed them to learn anything beyond what we need them to. We’ve allowed robots to predict the next moves we’ll play, but we haven’t allowed that same robot to learn how to tell the time or wash the dishes. At the end of the day, they are still shackled by the responsibilities we give them.
But if none of those are your requirements for a de facto human, then what is it inside that counts? The heart, brain, blood? In a way, a machine’s blood is its oil, its veins are its wires, and its heart could even be the cogs and gears that power its systems. At the end of the day, whatever you consider human is entirely up to you. And honestly, I don’t think I could even tell you my own opinion on the matter, because writing this made me realize just how similar machines are to us.
You never know; 40 years from now, we may have all been turned into cyborgs! The future is a mystery, after all. Or maybe we’ll still be the same humans we ever were, even if all our parts are completely robotic.
Levin, N. (n.d.) Ship of Theseus. Philosophical Thought. Available at: https://open.library.okstate.edu/introphilosophy/chapter/ship-of-theseus/#:~:text=The%20ship%20of%20Theseus%2C%20also,from%20the%20late%20first%20century. (Accessed: December 6, 2022)
Nolan, L. (2020) Descartes’ Ontological Argument, Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/descartes-ontological/ (Accessed: December 6, 2022)
Duijst, D. (2017) Can we improve the User Experience of Chatbots with Personalisation, ResearchGate. Available at: https://www.researchgate.net/profile/Danielle-Duijst/publication/318404775_Can_we_Improve_the_User_Experience_of_Chatbots_with_Personalisation/links/5967ba16a6fdcc18ea662ce7/Can-we-Improve-the-User-Experience-of-Chatbots-with-Personalisation.pdf (Accessed: December 8, 2022)
Bax, C. (n.d.) Watching Can’t Help Myself is like looking at a caged animal, Hypercritic. Available at: https://hypercritic.org/collection/sun-yuan-peng-yu-cant-help-myself-review/#:~:text=The%20artwork%20Can't%20Help,lines%20such%20as%20car%20factories. (Accessed: December 10, 2022)
M, R., Baker, L., Baker, B. and Fontana, J. (n.d.) Evolution of Life Forms in Our Universe, Journal of Modern Physics. Available at: https://www.scirp.org/journal/paperinformation.aspx?paperid=114163 (Accessed: December 9, 2022)
McCarthy, J. (2000) Free Will - Even for Robots, Stanford University. Available at: http://jmc.stanford.edu/articles/freewill/freewill.pdf (Accessed: December 10, 2022)