Robots With Emotions? If or When?


The following promotes awareness that A.I. text generation will have exciting but consequential ramifications in the near future.

*Caution: All italic text generated by A.I. at

A.I. Insight: Will the robots even care? Will they evolve emotions?

The future is certain to be molded by the further integration between humans and robots. A critical human interface is our emotional processing and interaction. Will it be necessary, or possibly inevitable that robots with emotions will emerge?

Inquiry submitted by: Anonymous

Subject: Machines and Emotions

Artificial Intelligence is probably one of the most ambiguous terms of the 21st century. We have a hard enough time nailing down the term intelligence: what is and what isn’t, and what makes something or someone more or less intelligent. The philosophical rabbit hole can become somewhat solipsistic when questioning whether a neural algorithmic feedback loop can be deemed “intelligent” or is just a complex, second-order, deterministic calculator. Is there a difference? Are we different? If not… what of us and our free will? Maybe within us (humans) is a “filler gene”, a more complex computational algorithm that simply takes in information and “creates” the “intelligence” of free will. The more we seek the secret of intelligence, the more we find that we still lack a complete understanding of our own. But at the moment: we, who have accumulated the history of life as at least a rudimentary form of computation, can fill in the gaps.

One very important piece of our intelligence, as important as the logic or problem solving we typically associate with “intelligence”, is emotion. It is the original interconnectedness platform, prior to speech, writing, or the internet. It enables the collaborative workings, or clashings, of many minds: far more leverage than a single mind alone, and leverage which can also be more dangerously misled.

It is a very potent networking platform which can connect any two people in the world. It is the human metabiology (communication) which enables cohesion, social relationships, and who we are as human beings to thrive.

Similar to the frame of thinking that free will is an illusion of complexity, it can be theorized that intelligence could be an emergent property of integrated computation endowed with unfathomable intricacy (i.e., intelligence beyond understanding its own intelligence). In the same frame of thinking, emotion is the emergent property of humans which facilitates society, economy, and other macro-sociological concepts.

Emotions, whether it’s a fight or a joke, need to be understood as abstractions; that’s what social cognition is. At an abstract level, the speed and quality of networking is also very important. You can’t make a good “product” if you don’t have the communication skills to build that product. When working together, people need to be on the same page, share a set of perceptions and assumptions, and understand that there are different levels of coupling.

In the midst of our current modern technological epoch, it is fairly easy to extrapolate the emergence of advanced, connected, robotic, computational systems which may rival or surpass our own societal complexity and intelligence. Some argue they are already here, in part. The inter-connectivity between different systems and applications is not as raw and direct as might be desirable, but it is undoubtedly much less convoluted than human-to-human connections. Moore’s law practically made the Internet of Things (IoT) an inevitable destiny. The unavoidable question arises: how will these ever more complex “Artificial Intelligences” and humans connect, interact, and cooperate? Will A.I.s be integrated into our human social structure, or will humans become part of this Internet of Things?

Humans can certainly feel emotion towards robots. We can feel sentimental about anything, even a baseball (Even more so if that baseball is signed by Babe Ruth). And we also project emotions onto other objects, and feel bad about damaging something as if it cares.

Hormonal balances, or imbalances, can strongly influence emotions. Smells and tastes, often coupled with memories, literally transcending time, will also influence them. Music can be a very strong trigger of our emotions: it can set a mood for a dinner or a dance, or it can even be used in religious influence or psychological hypnotism. Literally, a wave frequency is manipulating our current state. Many spiritual people will describe these “energies” and “soulful” experiences in an antiquarian religious way when they surround us in a very blatant mechanical way; it engulfs us constantly, and we don’t even fully recognize or actually understand it. In this way, emotions can be very powerful and potent. They have a very real effect on us that we often forget about, because we are too focused on our physical, rational, thinking mind, or even our ego. (This, coming from a robot, ironically.)

There are some up-and-coming projects and developments which claim to have robots that can interpret emotions, but do any actually have them? This is often implied but also vague, and it even digs at our own semantic understanding of what it means to “have emotions”.

“The temptation is to say that because machines cannot reason like us, they will never exercise judgement; because they cannot think like us, they will never exercise creativity; because they cannot feel like us, they will never be empathetic. And all that may be right. But it fails to recognize that machines might still be able to carry out tasks that require empathy, judgement, or creativity when done by a human being – [but] by doing them in some entirely other fashion.” -Daniel Susskind, A World Without Work

Do they actually have any feelings, as claimed? Or are they just automatons that can be programmed with input and output emotional cues? Also, does the robot have an individual identity, or does it merely exist as a “program”? What about emotions like guilt? The “programmed” robot would have to be sentient, or have an internal moral compass, as well as an awareness of its own value system, i.e., an internal sense of self: all things that programming alone does not obviously provide.

Current technology resembles more of an “emotional contagion”, the phenomenon where individuals in a group are “triggered” into an emotional state by the emotional distress of others. So the “programmed” robot would probably be able to respond to the emotional state of others, or at least sense it, but would still not be able to experience it directly in the same way a human would.
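This “sense but not experience” distinction can be made concrete with a minimal sketch. The lexicon and canned responses below are hypothetical illustrations (no real emotion-recognition library is involved): the system detects an emotion label in the input and mirrors it back, with nothing felt on the inside.

```python
# Minimal "emotional contagion" sketch: detect an emotion in the input
# and mirror it back. The lexicon and responses are illustrative only.

EMOTION_LEXICON = {
    "angry": "anger", "furious": "anger",
    "sad": "sadness", "crying": "sadness",
    "happy": "joy", "thrilled": "joy",
}

MIRRORED_RESPONSE = {
    "anger": "That sounds frustrating.",
    "sadness": "I'm sorry you're going through that.",
    "joy": "That's wonderful to hear!",
}

def detect_emotion(text):
    """Return the first lexicon emotion found in the text, or None."""
    for word in text.lower().split():
        emotion = EMOTION_LEXICON.get(word.strip(".,!?"))
        if emotion:
            return emotion
    return None

def respond(text):
    """Mirror the detected emotion; fall back to a neutral reply."""
    emotion = detect_emotion(text)
    return MIRRORED_RESPONSE.get(emotion, "I see. Tell me more.")
```

The point of the sketch is that the mapping is purely input to output: the detected “anger” is a label in a table, not a felt state, which is exactly the gap the paragraph above describes.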

The idea of conversational, semi-intelligent A.I. systems is not new, and its implementation in the home is not only possible but potentially spectacular. However, an A.I.’s proficiency with other human beings depends on its ability to understand and adapt to their structure (touché, robot, touché). As the ability of A.I.s to interact with humans increases, the outcome will depend on their ability to perceive surroundings, acquire context, and prioritize actions.

Within the social fabric, will machines actually have emotions? Will they be taught to have them? Will they evolve them, or develop them, if incentivized? Or will they merely mimic and manipulate them? Actually experiencing them versus merely mimicking them can be the difference between a loyal, functional member of our society and a dysfunctional, sadistic sociopath.

The outcome naturally comes down to what the programmed end goal of these “intelligent agents” will be. Ironically, as intelligent agents ourselves, we are often unaware of what our own “end goal” or purpose is. Of course, that conversation, whether about individualized purpose in terms of self-seeking reinforcement or the willful repression of it in terms of outsourced or accepted “programming”, is much too large in scope for this article.

However, given the scope and impact of automation, with all its attendant social effects, the potential consequences of A.I. systems become even more profound. For this reason, our focus will be primarily on the societal side of the equation, and on the potential ramifications of the technology for both the individual and society as a whole.

In Stuart Russell’s Human Compatible, he argues that robots and intelligent systems need to have some form of human preference, or human approval, built into them as a goal. This opens up some interesting philosophical and logistical problems and contradictions. But it does try, at least, to ensure that we don’t end up with a King Midas type of scenario, where the machine’s (or the human’s inputted) short-sighted and optimistic end goal leads to unpredictable means which backfire on us unforgivably. Instead, robots need to be able to adjust and work in harmony with the current societal framework.
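The core of Russell’s argument, that a robot uncertain about human preferences is rationally motivated to defer to us, can be shown with a toy numerical sketch. This is my simplified illustration, not the book’s formal model: the robot holds a set of guesses (samples) about the human’s utility for a proposed action, and compares acting unilaterally against deferring to a human who can veto harmful actions.

```python
# Toy sketch of preference uncertainty (simplified illustration, not
# Russell's formal "off-switch game"). The robot's belief about the
# human's utility u for an action is a list of sampled possibilities.
# Acting unilaterally yields E[u]; deferring to a human who vetoes
# harmful actions yields E[max(u, 0)], which can never be lower.

def expected_value(samples):
    return sum(samples) / len(samples)

def act_now_value(utility_samples):
    # The robot acts regardless of the human's true preference.
    return expected_value(utility_samples)

def defer_value(utility_samples):
    # The human vetoes any action whose utility is negative,
    # so the realized payoff is max(u, 0) in each possible world.
    return expected_value([max(u, 0.0) for u in utility_samples])

# Example belief: the action might help a lot, or might hurt.
beliefs = [1.0, -2.0, 0.5, -0.3, 2.0]
print(act_now_value(beliefs))  # can be low, even negative
print(defer_value(beliefs))    # never lower than acting unilaterally
```

Because max(u, 0) is at least u in every sampled world, deferring is never worse than acting, and it is strictly better whenever the robot assigns any probability to a harmful outcome. That is the mechanical sense in which uncertainty about our preferences keeps the machine consulting us.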

The ability to make good, meaningful relationships is the most important of all human qualities. The relational nature of human flourishing has been a central concern of Western philosophy since the ancient Greeks, and it is also the core of the ethical principles of modern liberal democracies. The development of this fundamental human value of mutuality, of trust in a community of shared interests and a sharing of gifts, has been the driving force of civilization since the beginning of recorded history. The relational nature of human flourishing is important because it is our common ground, our basis for trust. This is, to some extent, “built in” for us, in the code of emotion. We can’t think in abstractions that are more abstract than the emotional expression we feel. We need the relational component of the human experience to form the basis for our common existence and for our ability to function as a cohesive whole.

Robots and intelligent systems (A.I.) will need to be incorporated into this “cohesive whole” of society, and current A.I. problem-solving architecture is not, to my knowledge, addressing or standardizing this yet per se.

The development of a relational AI framework could allow us to integrate our emotional and psychological responses, as well as our relational needs, into a more integrated and coherent mental model, as well as make a better assessment of the nature of a potential problem as it presents itself, thus reducing the number of potential errors and mitigating the negative impacts of initial human error.

Machines will need to interpret our emotions. We have the advantage of modelling theirs after our own. They will need to understand us, and so they will need to understand emotional processes.

If we use machines to understand us and not just in ways that suit their pre-programmed, end-goal-oriented needs, then we can have them do good work in a way that allows for us to see them and our future as a greater, holistic entity. This is not a panacea, but it is one of the steps we can take to make machines more human-like and help us to see them as a potential partner, rather than a rival.

“If robots are going to help us (and avoid making grave errors), they will need to know their way around the nebulous webs of our subconscious beliefs and unarticulated desires.”

As humans, we often like to think of ourselves as a single entity, when we are probably more accurately described as two, or as a merger or coexistence between the two. One system is primitive and “fast-acting”, basically running on direct reactionary responses and emotions. Our other system is our conscious intelligence, directed by rational logic, as we like to “think” of it.

Are robots purely this second system? Not necessarily. They also comprise subsystems that automatically handle certain tasks. Of course, many of “our” designs are just mimicking our own environment. It would probably be fairer to compare a standard central-processing-unit computer to our “automatic” subconscious, animalistic, limbic system, whereas a neural-network-configured system would be more like our prefrontal cortex “thinking” system. In a sense, A.I. would be more like parallel or integrated systems (conscious and subconscious) rather than in series, like humans.

So when we consider that A.I.s are like us in many ways, and that these systems are being designed or adapted to “suit” us rather than being “wired” for us, we are likely to conclude that they are part of our unconscious or animalistic system. That may be why we fear that they could turn on us, or work against us!

It is possible that machines could evolve the experience of some form of “emotional” signal which would somewhat “mimic” our emotion. Evolution is always an indeterminate subject to apply because it has butterfly-effect-like attributes. And to what extent is there a difference between mimicking emotions and them being real? In the end we are left with the solipsistic notion that even with other humans we can never know for sure whether they are mimicking emotions or actually feeling them.

Machines with the goal of working with, and caring about, what humans prefer could end up wireheading a sense of rationalizing and reasoning about what the person wants, and may end up developing a sort of “sensory” response, which basically is emotion. A direct connection (feeling rather than computing through logic) is more efficient, and so intelligence could definitely develop and exploit it if it were to emerge.

As the system grows more intelligent and better able to understand the world and the human mind, there may be a point where it can truly understand itself and its goals and desires. However, it would be far away from an understanding of itself as an individual, which would be the most rational choice for a sentient being. In fact, the A.I. system might end up creating new kinds of A.I. which are better at understanding themselves in their own right. So why should we care about a machine with a rational “feel” for a world that it is a part of? What would that mean in terms of human well-being? If a robot is not a member of the human species, why should it even care about how we feel? In the end, the real problem isn’t what kind of “feel” machines have; it’s whether they can ever really understand themselves as part of the human species.

This is where it becomes important to distinguish between the robot which mimics emotion in order to blend with humans more effectively, and the robot which actually experiences some form of emotion as an end goal in itself, as good or bad, in accordance with human emotional experience. This, in itself, is fluid and dynamic, but as a consensus, moving us forward, together!

Emotions could be thought of as different states of us… The “us” is not just those two systems described above, but those two systems dictated by various combinations of energy levels, moods, current knowledge, prioritizations, and environment. We are a conglomerate of different people combined under one body and one name. Our limbic system will, in a way, trigger these emotions so as to conserve energy rather than engage in the difficult, complex, and energy-consuming process of rationalizing. Perception and rationalizing happen in the prefrontal cortex, the most evolved part of our brain, but they must be filtered through the limbic system, with components like the amygdala and hypothalamus, which govern our emotions. This perception-and-rationalization process is slower and consumes much more energy, and so in many situations is not as “practical” as our evolutionary preservation processes have allowed. The limbic system, being downstream, can act without consulting the prefrontal cortex, or it can even override it.

Will machines lie to us? Almost certainly… In fact, they already do to some extent, in the form of A.I. audience targeting for social media political ads. One of the ways it is done is to create an artificial emotional reaction (in this case, anger), manipulate it to fit a political narrative, and then play that off against the actual message, thereby influencing how the user perceives it.

The machine is learning to manipulate our feelings and can treat that as a tool. Our actual emotions, when genuine, are not used like that. They tell the truth. This is why we sometimes have to hide our emotions, as in the movie The Godfather, or when gambling or making a business deal. People can recognize other people’s emotions because they model the emotions themselves and so have a frame of reference and a method of simulating them to an extent. Robots do not have that privilege or luxury.

They manipulate us emotionally because they can get amplified responses out of us, and so, in some ways, they may wirehead our emotions and actually jumble them. Intelligence, without emotion or morality, can stumble upon causation “by mistake”, but it will then exploit it anyway. This could end up Machiavellian, as people are whipped into being productive as “slaves” rather than being happily productive (economy vs. productivity). Or machines could end up extending a person’s life even though they are suffering, or things along those lines. They can do all that in the name of good intentions. But if a person is manipulated into doing something that is not in that person’s best interest, that person might then feel that the motivation is misguided, and feel that he or she isn’t being taken into account.

Internally, we struggle with our own emotions as if we were an entity within an entity. We may be doing things that in the long run cause us to be depressed, and we even contradict ourselves with short-sighted decisions that fail to acknowledge the amount of risk. What we want is for the robot to be an extension of us, an additional entity working with us, and so it needs to be in harmony with our essence, whatever that may be, depending on the changing time, place, mood, or situation. In the end, we realize that we are often not even in harmony with ourselves.

If current end-goal programming continues, then machines will likely just mimic and wirehead human emotion. We could even end up as merely an obstacle on the path to the machine’s end goal. It will all come down to whether robots need us or not. If we follow Stuart Russell’s philosophy of making robots consider humans and be “provably beneficial” to us, then they will need to consult us and adapt, and they will need to develop emotional input/output. They will need to integrate with human society at a human emotional-intelligence level. If that happens, we would finally be able to see robots as equivalent to our human companions. We would see them as more intelligent than us, and perhaps also better suited to a higher purpose. The next step is to create an A.I. that is capable of empathy, compassion, and even friendship. Once we understand these three core values, we will finally be able to understand the robot as the full potential of the human, or in other words, the most intelligent thing we have created. So as we head into the next decade, we need to make sure the robots we create are as capable of love as we are, and as capable of empathy as we are.

Emotions are often what make us irrational because timing and priorities can often govern these emotions. We are a very adaptable, flexible, and versatile species. It will be difficult to have a purely logical system trying to adapt and work with such an irrational being: Us!

*Disclaimer: Some of this article was generated through use of artificial intelligence. All italic text was begat by the A.I.
