A Word On Emotional AI
The study of AI has come a long way in the past 20 or so years, from machine learning to neural networks. We’ve been able to roughly parse language into an almost mathematical format, allowing programs to understand human input. This is seen in products like Amazon Alexa, Apple’s Siri, and Google’s Assistant. But one thing remains poorly understood: how do we get AI to show emotion?
While we’re able to get a good estimate of what kind of emotion an input carries, we haven’t really built an AI around the emotion calculated from that input. We’ve simply plugged in the variables and taught the program what an appropriate response to those emotions should be. Those responses tend to be taught through things like repeated machine learning, from examples the program has already seen. But what does it take for the AI to come up with its own emotional response to the input? Where do we even begin with that?
First we have to look at where human emotional responses come from and what nurtures them. Elizabeth Phelps’s article on emotion and memory notes early on that emotional stimuli can draw attention to memories carrying the same emotional value. We’ve seen cases of this with PTSD patients: once exposed to the stimulus, their behavior is altered, with the memories tied to that emotion taking strong hold over the person. Looking at it from a straightforward stance, how would we replicate this within an AI? One commonality across all plausible approaches is some form of long short-term memory (LSTM). One option is to persist the median emotional value of the input along with the entities of the input. This means that throughout the data sets, we’d need an emotional value tied to common emotional stimuli (e.g. death, violence, or someone’s name). By doing this, we can raise the emotional levels of the AI and, depending on the response, keep those levels raised, lower the value tied to the entity, or have it acknowledge in some way that the fear may be irrational. The last of these would be the least trivial, as giving the AI rationality is beyond the scope of this post. This mechanic is barely the tip of the iceberg of implementing emotions, but it gives us a base for the rest of the plan.
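As a rough sketch of the persistence idea above, we could keep a running emotional value per entity and nudge it up or down as new stimuli arrive, with a decay step so emotions cool off over time. Everything here (the class name, the learning rate, the decay factor, the sample entities) is a hypothetical illustration, not a fixed design:

```python
class EmotionalMemory:
    """Toy store mapping entities to a persisted emotional value in [-1, 1]."""

    def __init__(self, learning_rate=0.3, decay=0.95):
        self.values = {}                    # entity -> current emotional value
        self.learning_rate = learning_rate  # how strongly new stimuli move the value
        self.decay = decay                  # values drift back toward neutral over time

    def observe(self, entity, stimulus_value):
        """Blend a new stimulus (e.g. -1.0 for 'death') into the stored value."""
        old = self.values.get(entity, 0.0)
        self.values[entity] = old + self.learning_rate * (stimulus_value - old)

    def tick(self):
        """Let emotions cool off between interactions."""
        for entity in self.values:
            self.values[entity] *= self.decay

    def arousal(self, entities):
        """Median emotional value across the entities mentioned in an input."""
        vals = sorted(self.values.get(e, 0.0) for e in entities)
        mid = len(vals) // 2
        return vals[mid] if len(vals) % 2 else (vals[mid - 1] + vals[mid]) / 2


memory = EmotionalMemory()
memory.observe("spiders", -0.8)   # strongly negative stimulus
memory.observe("spiders", -0.6)   # reinforced: the stored value stays negative
memory.observe("picnic", 0.5)
overall = memory.arousal(["spiders", "picnic"])
```

The `observe` update deliberately moves the stored value only part of the way toward each new stimulus, so repeated exposure is what entrenches an emotional association, echoing the memory-reinforcement idea above.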
If we use the aforementioned plan for persisting the emotional values of entities, then we can use those values to determine and classify what type of emotion is warranted in a response. The difficulty is that each emotion is different, not only in character but from person to person. One person may react to anger with sadness, while another may react with anger themselves. How do we go about creating that map of emotional reactions? We could use simple if/else statements, but those quickly become predictable and can’t be trained. If we’re going to allow this AI to learn from emotional responses, how do we allow those reactions to be adjustable? How would we not only teach it emotions, but teach it to blend those emotions together and use several at once to convey a message or action? A neural network could be useful for learning the emotions, with multiple passes to determine the emotions an action should return. We can follow that up by balancing polar emotions against each other and then blending the result.
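One hypothetical alternative to hard-coded if/else branches is a small trainable mapping from the emotion detected in the input to the emotion of the response. The sketch below uses a single linear layer updated by gradient descent, plus a naive averaging function for blending emotions; the emotion labels, the training pair, and every name here are invented for illustration:

```python
import random

EMOTIONS = ["joy", "sadness", "anger", "fear"]

def blend(*vectors):
    """Average several emotion vectors, e.g. to mix polar emotions."""
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(len(EMOTIONS))]

class ReactionMap:
    """Linear map: input emotion vector -> response emotion vector."""

    def __init__(self):
        random.seed(0)  # deterministic toy weights
        self.w = [[random.uniform(-0.1, 0.1) for _ in EMOTIONS] for _ in EMOTIONS]

    def react(self, x):
        return [sum(self.w[i][j] * x[j] for j in range(len(x)))
                for i in range(len(EMOTIONS))]

    def train(self, x, target, lr=0.1, epochs=200):
        """Nudge the weights so react(x) moves toward the desired reaction."""
        for _ in range(epochs):
            y = self.react(x)
            for i in range(len(EMOTIONS)):
                err = y[i] - target[i]
                for j in range(len(x)):
                    self.w[i][j] -= lr * err * x[j]


# Made-up example: this 'personality' learns to react to pure anger with sadness.
rm = ReactionMap()
anger = [0.0, 0.0, 1.0, 0.0]
sad_reaction = [0.0, 1.0, 0.0, 0.0]
rm.train(anger, sad_reaction)

# Blending: react to an input that is half anger, half fear.
mixed = blend(anger, [0.0, 0.0, 0.0, 1.0])
reaction = rm.react(mixed)
dominant = EMOTIONS[reaction.index(max(reaction))]
```

Because the map is learned rather than branched, retraining on different input/target pairs gives a different "personality" without touching the code, which is the adjustability the paragraph above asks for.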
So how do we get the AI to display those emotions in a response? In my opinion, this may be one of the easier problems, assuming the system already has an NLP engine that can form its own basic sentences. While one sentence might not be enough to convey the emotions the AI determined, it’s a good start. By referring back to the emotional values of the tokens it has stored, it can create something like its own MadLib and then fill in the blanks. The AI then only has to plug in the referenced entity, modifying the output sentence around it to display the emotion it holds toward that entity. We’re now turning numbers into words.
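The MadLib step above could be sketched as a template lookup keyed by the emotion the AI holds toward an entity. The templates, thresholds, and mapping below are all invented for illustration, and a real NLP engine would generate the surrounding sentence rather than pull it from a fixed table:

```python
# Hypothetical templates keyed by a coarse emotion label.
TEMPLATES = {
    "fear":    "I'd rather not talk about {entity}; it makes me uneasy.",
    "joy":     "Oh, {entity}! I always enjoy hearing about that.",
    "neutral": "Tell me more about {entity}.",
}

def emotion_label(value):
    """Collapse a stored emotional value in [-1, 1] into a template key."""
    if value <= -0.3:
        return "fear"
    if value >= 0.3:
        return "joy"
    return "neutral"

def respond(entity, stored_value):
    """Pick a template by emotion and fill in the entity: numbers into words."""
    return TEMPLATES[emotion_label(stored_value)].format(entity=entity)


reply_negative = respond("spiders", -0.7)
reply_positive = respond("picnics", 0.5)
```

The interesting part is that the sentence shape is chosen by the stored emotional value, not by the input text, so the same question about "spiders" would be answered differently as the AI's feelings about that entity change.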
At the end of it all, these may not be true emotions, but the system can definitely replicate them. Perhaps, by following this path, such systems may eventually be able to define emotions of their own.