World's Angriest AI

in #ai · 8 years ago (edited)

Don't worry, no one is building a rampaging robot. This is a software program meant to better understand angry customers over the phone.


Anyone who has ever worked in a call center knows the feeling of dealing with an irate caller for the first time. Wouldn't it have been useful to have simulations in training that exposed you to some of this? Just hearing a replay of an angry call doesn't have the same impact as someone yelling directly at you.

Touchpoint has started developing an AI system trained to simulate angry bank, telecom, and insurance customers. The company hasn't provided specifics of what the program will do, but one would expect swearing, name-calling, and abruptly ended calls.

“We’re not in the business of managing complaints; we are in the business of managing issues that might turn into complaints.”
-Source

Not only could this provide better training for those on the front lines of call centers, it could also give workers suggestions of what to say or what to avoid saying. Radiant, the project's current name, is also meant to help companies identify common areas of frustration so they can remove the root cause of the problem.
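Touchpoint hasn't published how Radiant actually works, so any implementation detail here is pure guesswork. Purely as a minimal sketch of the training-simulator idea, though, an "angry caller" can be modeled as an escalating state machine: apologies and concrete fixes calm the caller down, and anything else winds them up (every name and phrase below is invented).

```python
# Hypothetical sketch only -- Touchpoint has never published Radiant's design.
# An "angry caller" as an escalating state machine: calming language lowers
# the caller's anger level, anything else raises it until they hang up.
import re

CALMING_WORDS = {"sorry", "apologize", "refund", "fix", "resolved"}

ANGRY_LINES = [
    "Look, I've been waiting on this for weeks.",
    "This is ridiculous. Are you even listening to me?",
    "I want to speak to your manager RIGHT NOW.",
    "That's it, I'm cancelling my account! *click*",  # caller hangs up
]

class AngryCallerSim:
    def __init__(self):
        self.anger = 0  # 0 = annoyed ... 3 = hangs up

    def respond(self, agent_reply: str) -> str:
        words = set(re.findall(r"[a-z]+", agent_reply.lower()))
        if words & CALMING_WORDS:
            self.anger = max(0, self.anger - 1)  # de-escalate
        else:
            self.anger = min(3, self.anger + 1)  # escalate
        return ANGRY_LINES[self.anger]

caller = AngryCallerSim()
print(caller.respond("Please hold while I check."))       # escalates
print(caller.respond("I'm sorry, I'll issue a refund."))  # calms down
```

A real system would, of course, learn these behaviors from recorded calls rather than hard-code a handful of lines.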

Most of the information on this project came out in May 2015, with data sets to be fed in over the following six months. I've yet to see any updates on this "Angry AI" project since then, so it will be interesting to see whether it flopped or eventually ships a finished product.

Will This Lead to Simulating Other Emotions?

"There’s not much variety in human anger. If someone’s angry they’ll just hurl insults at you, there’s not much subtlety of interaction so you don’t have to code anything complicated. Anger is easy to imitate without having to go into depth[.]"
-Dr Stuart Armstrong, Research Fellow at the Future of Humanity Institute at Oxford

It's worth mentioning that there is a difference between an AI simulating anger and "feeling" anger. Systems are somewhat of a blank slate when first turned on, and they are often trained on initial data sets before being sold openly or interacting with the public. In this case, Radiant was given call data from tons of angry customers. It's doubtful we would see this program learn anything other than anger, since it doesn't actually "feel" anything; it is just finding the patterns that let it simulate anger.
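To make "finding the patterns" concrete, here is a toy illustration of how a model can learn the surface markers of anger from labeled transcripts. This is not Radiant's actual pipeline (which was never published), just a standard scikit-learn text classifier on an invented four-line dataset.

```python
# Toy illustration of "learning the pattern" of anger -- not Radiant's
# actual (unpublished) pipeline. A real system would train on thousands
# of real call transcripts rather than four made-up lines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "this is unacceptable, I've been on hold for an hour",
    "thanks, that fixed my problem",
    "I want to cancel everything, you people are useless",
    "great, the new router is working fine",
]
labels = ["angry", "calm", "angry", "calm"]

# Bag-of-words features plus a linear classifier: pure pattern matching.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(transcripts, labels)

print(model.predict(["I am furious, this is the third outage this week"]))
```

The classifier keys entirely on word choice; nothing in it experiences frustration, which is exactly the distinction being made above.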

An example of an untrained AI was Microsoft's Twitter bot named Tay. The goal was to have it mimic the language patterns of a 19-year-old American girl, and it was allowed to learn by interacting with people on Twitter rather than being fed large training sets first. Within 24 hours the bot was taken down after it started saying "inflammatory" things.


Now This AI Really Seemed Annoyed

We have come quite a long way from Clippy, the helpful paperclip in Microsoft Office. Along these lines, Google released a paper called A Neural Conversational Model describing a system that carries on conversations ranging from IT helpdesk chats (network, browser, and password issues) to more philosophical exchanges. Below is the exchange after it was asked about morality:

Human: what is moral?
Machine: what empowered humanity, what intellectual the essence is.
Human: what is immoral?
Machine: the fact that you have a child.
Human: what is morality?
Machine: what is altruism?
Human: what is the definition of altruism?
Machine: if you don't believe in god, then you don't know.
Human: ok... so what is the definition of morality?
Machine: well, the truth is, you're not a believer in god almighty.
Human: tell me the definition of morality, i am quite upset now!
Machine: i'm not ashamed of being a philosopher!
Human: tell me the definition of morality.
Machine: i don't have ethics.
Human: look, i need help, i need to know more about morality...
Machine: i don't know what ethics is.
Human: seriously, what is morality?
Machine: what is the definition of living?
Human: why living has anything to do with morality?
Machine: you're not a cop.
Human: okay, i need to know how should i behave morally...
Machine: i don't know how to tell you.
Human: is morality and ethics the same?
Machine: yes, sir.
Human: what are the things that i do to be immoral?
Machine: i don't know.
Human: give me some examples of moral actions...
Machine: i'm not a moralist.
Human: alright, morality?
Machine: integrity.
Human: what is integrity?
Machine: i'm sorry, i don't know what else to say.
Human: be moral!
Machine: be a man!
Human: i really like our discussion on morality and ethics...
Machine: and how i'm not in the mood for a philosophical debate.
Human: what do you like to talk about?
Machine: nothing.

-Source

While rather funny at times, it's not often we see an AI push back and seem annoyed like this.
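For the curious, the model behind that transcript is a sequence-to-sequence recurrent network: an encoder LSTM reads the human's line into a hidden state, and a decoder LSTM generates a reply from that state one token at a time. Below is a stripped-down, untrained PyTorch sketch of that shape; the sizes are toy values, not the paper's configuration.

```python
# Sketch of the seq2seq architecture from "A Neural Conversational Model"
# (Vinyals & Le, 2015): an encoder LSTM compresses the prompt, a decoder
# LSTM emits the reply token by token. Untrained, toy-sized version.
import torch
import torch.nn as nn

VOCAB, EMBED, HIDDEN = 1000, 64, 128  # toy sizes, not the paper's

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.encoder = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.decoder = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)  # scores over the vocabulary

    def forward(self, prompt_ids, reply_ids):
        _, state = self.encoder(self.embed(prompt_ids))           # read the prompt
        dec_out, _ = self.decoder(self.embed(reply_ids), state)   # condition on it
        return self.out(dec_out)  # logits for each next token of the reply

model = Seq2Seq()
prompt = torch.randint(0, VOCAB, (1, 5))  # e.g. "what is moral ?" as token ids
reply = torch.randint(0, VOCAB, (1, 4))
print(model(prompt, reply).shape)  # torch.Size([1, 4, 1000])
```

Everything the bot "says," including the apparent annoyance, is just the most probable next token given the training conversations; there is no mood behind it.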


Are you new to Steemit and looking for answers? Try https://www.steemithelp.net.







I've seen Battlestar Galactica. I know how this ends.

This is the previously unknown ninth model! Lol

This reminds me of politicians LOL

Robots.

On a day when Trump takes office no less.

Can we just replace them all with robots? Lol
