In short, both of these missing ingredients for AGI are more a waiting game than a question of whether they are possible. As these technology areas improve, AI and subsequent robotic versions of AI will slowly become available and more prevalent. In many ways, humanity will accept these introductions as the normal progression of things. And even when these AGIs do arrive in our mainstream daily lives, they may be the grunts of humanity — acting but seeming lifeless, without a personality.
However, one aspect of AI getting less attention, perhaps the next big thing, is artificial sentience. Sentience can be divided into two categories. The first category is awareness. This one would keep the high-minded AI geniuses busy trying to pass the Turing test. If the machine is self-aware and aware of others in a person-like way, we will have new ethics rules to consider.
The second category is emotions or, perhaps, feelings. Remember, we are talking about artificial sentience. So, in this case, we are referring to artificial emotions. I’m not arguing for an actual emotional experience, but the capability to show the proper emotions to communicate with humans. So, the machine may emote sadness when conversing with someone who just lost their job. It may portray outrage in the conversation when it hears the person was dismissed without cause. And it will correctly display joy upon learning this same person has just landed her dream job. “Sounds like your layoff was a blessing in disguise,” it might say.
As far as I can tell, this second category of sentience — believable emotion — is the future of AI, not general intelligence.
For the rest of this article, when I use the term sentience, I’ll be referring to this second category — artificial emotions or feelings. I believe this kind of sentience is the future of AI for two main reasons. First, people will trust, relate to, interact more comfortably with, and even endear themselves to an AI that engages them emotionally instead of just intellectually. Second, an AI portraying realistic emotions, verbally or in writing, can pass as human in short, transient conversations.
Have you ever read a book and become emotionally connected to the protagonist? You know the person is fake, right? He or she is a work of fiction. But once you’ve connected to them emotionally, you care about them in the same way you care about your real-life friends. The fictional story has stirred your emotions. The story may have been made up, but the emotions you felt from it are real. If the author has done a superb job of storytelling, your real emotions make you care about the characters in the book. Yep, emotions are like that.
Now transition to a scenario where you are interacting with an artificially sentient AI. The emotional connection between you and it will grow because its emotions seem genuine. One day, future you will go to the store and interview a new robotic assistant. Perhaps the fine motor control isn’t even available yet. You’re just talking to software in a black box. The first black box you interview is more than capable of handling all the tasks you need it to do, but it doesn’t use any form of emotion when speaking with you. You interview another black box; this one has a bright red pinstripe down the side to show its style. It does everything the first box can do — plus, when it talks to you, it genuinely seems to care.