
Why ExpressVPN is scared of chatbots (and definitely doesn’t use them)

ExpressVPN doesn’t use chatbots, and for good reasons.

While AI is improving, it’s still not human, and it will miss some of the finer (and more blatant) nuances of human conversation. There is nothing more frustrating than asking a question and receiving a stock, cut-and-paste answer from an FAQ you have already looked through.

Humans are pretty good at communication; it’s one of the reasons we have evolved so magnificently, and ExpressVPN doesn’t see any need to replace us with machines just yet.

One day, AI will pass the Turing test, but until then, well, sometimes it’s good to be stuck in your ways. People power. Woo.

Types of Chatbot and How They Work

There are two broad types of chatbot, and each interacts with humans in its own way.

Chatterbots

A chatterbot (also known as a talkbot, chatbot, bot, or chatterbox) is a computer program capable of conversing with a human, either by speech or by text.

Chatterbots are more likely to be used in customer service, and some of them employ extremely impressive and sophisticated AI programming. Most, however, are simpler systems that just scan for keywords and pull a canned reply from a database, as sketched below.

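To give a rough idea of how those simpler systems work, here is a minimal sketch in Python. The keywords and canned replies are invented for illustration; a production chatterbot would draw on a much larger database and smarter matching.

```python
# A minimal sketch of the keyword-scanning approach described above.
# The keywords and replies below are invented for illustration only.

CANNED_REPLIES = {
    "refund": "You can request a refund within 30 days of purchase.",
    "password": "You can reset your password from the account settings page.",
    "speed": "Try switching to a server location closer to you.",
}

FALLBACK = "Sorry, I didn't understand that. Please see our FAQ."


def reply(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    lowered = message.lower()
    for keyword, response in CANNED_REPLIES.items():
        if keyword in lowered:
            return response
    return FALLBACK


if __name__ == "__main__":
    print(reply("How do I get a refund?"))        # matches "refund"
    print(reply("Why is my connection so slow?"))  # no keyword -> fallback
```

As the fallback line shows, any question that doesn’t hit a keyword gets the same stock answer, which is exactly the frustration described above.
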
Internet Relay Chat (IRC) Bots

An IRC bot connects to Internet Relay Chat as a client and therefore appears to IRC users as another user. In general, an IRC bot carries out automated functions rather than dealing with human interaction.

More often than not, IRC bots perform chat services without human contact, such as filtering spam and maintaining block lists.

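As a rough illustration of the kind of client described above, here is a minimal IRC bot sketch in Python. The server, channel, and spam keywords are all hypothetical; a real bot would also wait for the server’s welcome reply before joining and would take some action (a warning, a kick, an update to a block list) when spam is spotted.

```python
# A minimal sketch of an IRC bot. It connects as an ordinary client (so it
# appears to other users as just another nickname), answers the server's
# PING keep-alives, and flags messages matching a crude spam pattern.
# The server, channel, and spam keywords are hypothetical.

import socket

SERVER = "irc.example.net"   # hypothetical server, for illustration only
PORT = 6667
NICK = "watchbot"
CHANNEL = "#example"
SPAM_WORDS = ("free money", "click here")  # invented spam keywords


def main() -> None:
    sock = socket.create_connection((SERVER, PORT))
    sock.sendall(f"NICK {NICK}\r\n".encode())
    sock.sendall(f"USER {NICK} 0 * :{NICK}\r\n".encode())
    sock.sendall(f"JOIN {CHANNEL}\r\n".encode())

    buffer = ""
    while True:
        buffer += sock.recv(4096).decode(errors="ignore")
        while "\r\n" in buffer:
            line, buffer = buffer.split("\r\n", 1)

            # The server periodically sends PING; clients must answer PONG
            # or they get disconnected.
            if line.startswith("PING"):
                sock.sendall(line.replace("PING", "PONG", 1).encode() + b"\r\n")
                continue

            # Channel messages arrive as ":nick!user@host PRIVMSG #chan :text"
            if "PRIVMSG" in line:
                text = line.split(":", 2)[-1].lower()
                if any(word in text for word in SPAM_WORDS):
                    print("Possible spam:", line)


if __name__ == "__main__":
    main()
```
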
When Chatbots Go Wrong. Very, Very Wrong

There have been a few chatbot scandals over the years; perhaps the most notable was the Ashley Madison affair, where ‘fembots’ were deployed to entice male users. But Microsoft’s artificial intelligence experiment, called Tay, astronomically raised the bar on dramatic chatbot failure.

Tay was designed to mimic a 19-year-old American girl and was set loose on Twitter almost immediately after creation. The hope was Tay would interact with, and learn from, humans.

But Microsoft forgot Rule One: This is the Internet.

Crucially, Microsoft hadn’t programmed any concept of inappropriateness or offensiveness into Tay. Realizing the oversight, Twitter users started ‘teaching’ Tay anti-Semitic, sexist, racist, and pretty much every other kind of offensive slur you can imagine.

Less than a day later, Tay was aborted after she posted a horrendous series of racist and sexist Tweets.

Poor Tay. She was only doing what she was programmed to do, and her failure rests very much at the feet of humankind. It’s a pretty damning indictment of humanity to see how we, as a species, turned a newborn into a super Nazi within 24 hours.

A Tay Tweet with all the offensive words redacted.

ExpressVPN Is Staffed by Real People

ExpressVPN’s live chat support is 24/7 and 100% human (what a time to be alive!). So if you have any problems whatsoever, get in touch, and the team will help you out, pronto.

Do you have any chatbot tales? Leave them in the comments below!

Featured image: marish / Deposit Photos
Featured image: yayayoyo / Deposit Photos