
Should we be polite to Artificial Intelligence?

Do manners matter?

Siri is polite. She listens. She does what I ask without qualms, and she will stay polite whether I ask her a thousand questions or command a thousand actions. She does not get tired, or frustrated, or angry. Even if I curse at her, she keeps her manners and does not engage.


And I often catch myself mirroring that politeness, saying “Thank you” to Siri even though she has already told me the weather and stopped listening. Unconsciously, I observe normal social graces. The intelligent, humanistic responses of AI trigger the same reflex reaction I would have in any casual conversation. I think my mom would be proud that I am polite even to inanimate objects.

By reflex, we talk to inanimate objects all the time: yelling at the TV when our team scores a winning goal or the referee makes a bad call, or cursing at the car for stalling. But those interactions are externally provoked by an event, and usually to express displeasure. We do not normally initiate conversations with the coffee table unless it got in the way and stubbed our toe.

So do our manners change when we know there will be a response versus when we expect none? In interactions where we do not expect a response, manners are usually not observed. In fact, because we know there will be no response, we are often very impolite. For example:

  • Interacting with inanimate objects – My car has heard every curse word in the book when I can’t figure out what’s wrong.

  • Interacting with humans with no expected response – Screaming (windows up) at the driver who just cut you off in traffic.

  • Interacting with humans without a face-to-face response – Leaving a nasty comment on an online post. Sure, there could be comments back, but do we value those responses when they do not carry the same repercussions as an in-person exchange?

However, in conversations where responses are expected, our social training usually keeps us mannered. The proverb “you catch more flies with honey than with vinegar” reflects our social conditioning: when we are polite, we receive politeness in return.

The Social Reflex

Our manners toward AI are polite because it offers the humanistic qualities of in-person interaction, and we expect intelligent responses in return.

Sympathy for intelligent AI is instinctive. We project a personality onto anything that portrays human characteristics. Something as simple as a human-like voice, body, face, or emotional response creates that human empathy.

Take the video of students abusing a human-shaped robot. The video went viral and left people upset and even concerned for the robot. It turns out the video was fake, but it is a great demonstration of empathy toward inanimate representations of humanity.

“Why do I feel so bad, man. I just wanna hug that robot!!” – YouTube commenter

AI technology will continue to grow more advanced. Programmers, engineers, and data analysts are combining machine learning, natural language understanding (NLU), and cloud-collected data to improve its complexity, logic, and understanding.


The technology will learn to read emotion in the user’s voice and respond with appropriate intonation. Today’s monotonous AI voices will continue to advance in tone, humanization, and personification.


As we become more comfortable with an emotional connection to these personas, should we, and how do we, design the technology to return empathy intelligently? See Angelica Lim’s perspective in her research on programming AI with empathy.

So should we be polite to Artificial Intelligence?

Does it stand to reason that if we personify AI with human qualities, but design it without empathy or humanistic emotional interaction, our socialization would change? Would we become colder and less compassionate even in our human-to-human interactions? If our time is spent issuing commands, “Text Rob”, “Turn on kitchen lights”, “Open iHeartRadio”, are we able to set aside this command language when interacting with each other?


But if we design artificial intelligence that supports courtesy and empathy, maybe even requires it, could we improve socialization? Can we re-code the human social reflex?


Our politeness toward AI is not so that AI will like us. It is coded to like us. It is coded to be subservient and polite. The politeness is for our own benefit: if we personify AI as human but interact with it as masters, we lose the little humanity we have left.


I believe that no matter what technology lies ahead, we have the choice to stay human.