The Tao of the Ethical Bot

“Educating the mind without educating the heart is no education at all.”

Aristotle

I am the third born in my family.  My oldest brother, Richard, is nine years older than me, and my brother Bob was born six years and one month ahead of me.  As children, we shared the same bedroom, and by osmosis and admiration, their tastes became my tastes.  The music they listened to, I listened to.  The books they read, I read.

Since they were both lovers of science fiction, I took that on, too.  By the time I was an early teenager, I had read Clifford Simak’s City, Arthur C. Clarke’s Against the Fall of Night, and of course, Isaac Asimov’s I, Robot.  From I, Robot come the Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except when such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Distilling the laws down to their basic cores, they can be rewritten as:

  • Do not harm humans
  • Obey orders
  • Protect yourself

These simple laws afforded a world where robots served humans while allowing the robots protection and a sense of dignity.  A reordering of the laws would create situations that ranged from uncertainty to havoc and even devastation.


While these laws were written to apply to mechanical beings, it isn’t farfetched to see how they also relate to the science and application of artificial intelligence.  This new technology has the potential for great rewards and, if used improperly, tremendous harm.

As I immerse myself in the development of bots and digital assistants, I am forced to ponder the consequences of my work in terms of both the good and the bad.

The good is easy enough to grasp.  Bots provide fast answers to difficult questions.  They communicate over a variety of channels.  They never tire and are available 24 hours a day, seven days a week.  They free humans from mundane and repetitive tasks.  Bots scale more easily than humans.

However, if applied carelessly, improperly, or maliciously, a bot can disregard privacy, abuse its users, and create an environment of suspicion and mistrust.

In addition to blatant harm, bots have the potential of disrupting social norms and bringing out the worst in people.  This was made extremely apparent when Microsoft’s experimental chat bot, Tay, began to spout racist and inflammatory language.   It lasted a mere 16 hours before Microsoft was forced to shut it down.  While the bot did not invent racism, it gave hate speech a digital platform.

The Tao of the Ethical Bot

I recently spoke to a bot-hungry audience at Avaya Engage.  After presenting my thoughts on bot strategies, placement, and expectations, I spent some time discussing the ethics of bots.  Like Isaac Asimov, I feel that bots (and by extension, bot developers) should adhere to a set of moral guidelines.  While I have only begun to scratch the surface, I have uncovered enough nagging ideas worth expressing.

1: A Bot Must Never Impersonate a Human Being

The Turing Test judges a machine’s ability to exhibit behavior equivalent to, or indistinguishable from, that of a human being.  However, “passing” the Turing Test is not a reason to fool people into thinking they are conversing with an actual person.  I love it when my bots act like humans, but I feel it is necessary to identify them up-front as digital approximations.  This disclosure doesn’t diminish the experience the bot provides.  In fact, it may heighten it as people marvel at how lifelike the bot interactions are.

Personally, I struggle with gender identification of bots.  While I have been known to refer to Apple’s Siri as “she,” I am working hard to correct that.  To place gender on a bot is to allow it to rise to a human-like level.  Bots are not male or female.  They aren’t even non-binary.  They are lines of software and must be respectfully treated as such.

Lastly, the default voice for a bot shouldn’t be that of a woman.  I don’t like the implication that women are servants.  It would be best if it were a bit more random.  I would like to experience bots that changed their voice with each new conversation.
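To make the point concrete, here is a minimal sketch of what up-front disclosure and a per-conversation voice could look like, assuming a simple in-memory session.  The function, session fields, and voice names are all hypothetical; a real bot platform would have its own session and voice APIs.

```python
import random

# Hypothetical voice identifiers; a real bot platform would expose its own catalog.
VOICES = ["voice_a", "voice_b", "voice_c", "voice_d"]


def start_conversation(user_name: str) -> dict:
    """Open a session with an up-front disclosure and a randomly chosen voice."""
    session = {
        "voice": random.choice(VOICES),  # a new voice each conversation, no fixed gender
        "transcript": [],
    }
    disclosure = (
        f"Hi {user_name}, I'm a digital assistant, not a person. "
        "I'll do my best to help you."
    )
    session["transcript"].append(("bot", disclosure))
    return session


# Example: every session opens with the disclosure, and the voice varies per conversation.
session = start_conversation("Andrew")
print(session["voice"], session["transcript"][0][1])
```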

2: A Bot Must Never Abuse Nor Tolerate Abuse

The first part goes without saying.  Being digital is never an excuse to be rude, malicious, or angry.  I do not want to see a repeat performance of Tay.

The second part creates an environment of mutual kindness.  Just as we do not tolerate people yelling at contact center agents, we should never allow them to express their less than noble sentiments to a bot.  A bot must be prepared to either end the conversation or escalate it to a human being.  A healthy conversation requires respect from both parties no matter who (or what) they are.
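Below is a minimal sketch of how that "warn, then end or escalate" behavior might be wired up.  The keyword check, strike counter, and function names are illustrative assumptions; a real bot would lean on a proper moderation service and a live-agent hand-off.

```python
# Deliberately naive stand-in for abuse detection; a production bot would call a
# real moderation or toxicity-scoring service.
ABUSIVE_TERMS = {"idiot", "stupid", "hate you"}  # illustrative only


def answer_question(text: str) -> str:
    # Placeholder for the bot's normal answering logic.
    return f"Here is what I can tell you about: {text}"


def handle_message(session: dict, text: str) -> str:
    lowered = text.lower()
    if any(term in lowered for term in ABUSIVE_TERMS):
        session["strikes"] = session.get("strikes", 0) + 1
        if session["strikes"] == 1:
            # First strike: ask for mutual respect and keep the conversation going.
            return "I want to keep this conversation respectful. Could we try that again?"
        # Second strike: stop tolerating the abuse, end the bot conversation,
        # and hand the user off to a human being.
        session["escalated"] = True
        return "I'm connecting you with a human colleague who can help from here."
    return answer_question(text)
```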

3: A Bot Must Be Clear About How User Data Is Shared Outside the Bot

While a bot isn’t necessarily your trusted advisor, a bot must be respectful of the data it gathers from its interactions with humans.  In the same way a contact center tells its callers that “This call may be recorded for quality purposes,” a bot must explicitly declare that every digital conversation is being recorded and most likely archived.  Additionally, it must also be clear about how that data is used and shared outside the bot.  For some bots (medical, financial, etc.), improper or unannounced sharing of private data can have devastating consequences for both the user and the bot provider.
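As a rough illustration, here is one way a bot might surface that disclosure and capture consent before gathering anything.  The notice wording and the open_session_with_notice helper are assumptions made for the sketch, not a legal or compliance template.

```python
# Illustrative notice text; the real wording would come from the bot provider's
# privacy and data-sharing policy.
DATA_NOTICE = (
    "This conversation is recorded and archived. "
    "Summaries may be shared with our support team to improve service. "
    "Your data is not sold or shared outside the company without your consent."
)


def open_session_with_notice(ask) -> bool:
    """Show the data notice and capture explicit consent before collecting anything.

    `ask` is any callable that poses a question to the user and returns their reply,
    e.g. the built-in input() in a console test.
    """
    print(DATA_NOTICE)
    reply = ask("Do you want to continue? (yes/no) ").strip().lower()
    consented = reply in {"yes", "y"}
    if not consented:
        print("Understood. Nothing from this conversation will be kept.")
    return consented
```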

4: A Bot’s First Purpose Is to Serve Its Users

Think of this as an extension of the first law of robotics: a bot may not injure a human being.  This means that a bot should honor the fact that its users are human beings with inherently greater rights than the bot has.  While an enterprise deploys a bot to carry out its business, trust will quickly erode if its users feel that their needs are not first and foremost.

Mischief Managed

As I previously stated, bot ethics are a work in progress for me, and I expect that they will evolve over time.  However, I do not see these core “laws” bending all that much.  I do not envision a day when I feel that bot abuse should be tolerated.

As bots continue to play a larger part in our daily lives, it is essential that morality has a part in their creation, deployment, and usage.  We have been given the power to change the world as we know it.  It’s important that we do so honorably and with great thought to the consequences of our actions.

2 comments

TW Pretorius:

Very good article, and I think that the governing laws still have a long way to go where bots are concerned.  Fake news is one area where bots are creating havoc, and it’s being done intentionally.  Keep up the good work you do; we need more voices like yours out there.

Regards,
TW

Reply:

Thank you for your thoughtful reply!  AI and bots have the power to do good and tremendous harm.  We must not tread this path lightly.
