Can I trust chatbots?

As chatbots become ubiquitous, the question of whether to trust them grows ever more important, yet many people are unsure how these automated programs work

Monday, 13 December, 2021

You’ll undoubtedly have come across chatbots by now, even if you didn’t recognise them as such.

The use of these automated response systems has become ubiquitous across the internet, opening a new frontline in customer service and support.

But what are these pieces of software? How do they work? And most importantly, can you trust chatbots with confidential or important information?

Bot and sold

Chatbots are standalone pieces of software, often sold as generic packages to be installed on a company’s website, app or any other online service.

When a user types in a natural language query, the bot interrogates a pre-populated database to determine the most appropriate response.

For instance, if you asked a chatbot “how do I close my account”, it should correctly identify the words “close” and “account” as being key.

It would then consult its database of pre-programmed responses, select the most appropriate answer and display it on-screen.
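
To illustrate the idea, here’s a minimal sketch of keyword-based response selection in Python. The keywords, canned answers and scoring logic are purely hypothetical; commercial chatbots rely on far more sophisticated natural language processing.

```python
import re

# Hypothetical database of pre-programmed responses, keyed by the
# keywords a query must contain. Real systems are far larger and use
# proper language processing rather than raw keyword matching.
RESPONSES = {
    frozenset({"close", "account"}): "To close your account, go to Settings, then Account.",
    frozenset({"reset", "password"}): "You can reset your password from the login page.",
}

FALLBACK = "Sorry, I didn't catch that. Let me put you through to a real person."

def reply(user_input: str) -> str:
    """Return the pre-programmed answer whose keywords best match the input."""
    words = set(re.findall(r"[a-z]+", user_input.lower()))
    best_answer, best_score = FALLBACK, 0
    for keywords, answer in RESPONSES.items():
        score = len(keywords & words)  # how many keywords appear in the query
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer

print(reply("How do I close my account?"))
```

Asking “how do I close my account” matches both the “close” and “account” keywords, so the account-closure answer wins; any query the bot can’t score falls through to a human handover.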

These programs use a concept called machine learning to improve both the accuracy of their responses and the way in which answers are delivered.

Feedback is generated constantly; when a bot is closed mid-conversation, it implies an unsatisfactory answer may have been provided.

That signal is fed back to the algorithm that selects future responses, gradually refining the process.
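
A toy version of that feedback loop might look like the sketch below. The weighting scheme here is an assumption for illustration, not how any particular product actually works.

```python
# Hypothetical feedback loop: each answer carries a weight, and closing
# the chat immediately after an answer is treated as implicit negative
# feedback, making that answer less likely to be chosen next time.
answer_weights = {
    "To close your account, go to Settings, then Account.": 1.0,
    "You can reset your password from the login page.": 1.0,
}

def record_feedback(answer: str, chat_abandoned: bool) -> None:
    if chat_abandoned:
        answer_weights[answer] *= 0.9   # penalise a presumed poor answer
    else:
        answer_weights[answer] = min(1.0, answer_weights[answer] * 1.02)

# A user closed the window straight after this reply: down-weight it.
record_feedback("To close your account, go to Settings, then Account.", chat_abandoned=True)
```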

Why has this become so popular?

Beyond the cost of buying a tool or licence, and the time required to personalise certain answers, chatbots are highly cost-effective.

They can field commonly asked questions without requiring human input, operating at any time of day or night across multiple languages (with the right coding).

A bot can tackle issues as simple or as complex as the administrator wishes, in real time and with no discernible delay.

There are no waiting times or hold muzak to further rile an already aggrieved customer, and the bot’s language can be softened to the point of sycophancy if desired.

Eventually, any chatbot will reach the limits of its abilities, at which point it’ll usually redirect the conversation to a real person.

And this brings us to the crux of the issue – can you trust chatbots in the same way you can trust a real person, even if they’re operating across the same infrastructure?

Trust me, I’m a chatbot

Research has consistently demonstrated that under-35s are the most open to discussing matters with a chatbot, while older generations are less willing to engage.

In general, however, technology is racing ahead of consumer willingness to embrace it.

This is particularly true of confidential data, where widespread concerns about information being mishandled or resold may be exacerbated by handing it to a computer.

There’s also a lack of accountability. If Chris in tech support forgets to send a customer their product code, it’s his fault. If Chrisbot the avatar forgets, who does the customer complain to?

From the consumer’s side, there’s no evidence this conversation ever took place, even though a record should have been stored automatically on the company’s servers.

The subtleties of human emotion and expression are lost in text conversations, and nuance is often crucial to interactions with companies.

As such, people tend to use chatbots in low-risk scenarios such as ordering a takeaway, rather than in situations where personal data is involved.

They’d feel deeply resentful discussing medical results or requesting a mortgage holiday with a bot, which lacks traits like empathy.

The bot-tom line

User data ends up in the same databases regardless of whether it’s provided to a customer services rep or an automated bot. The bot may even be more efficient at processing it.

We can trust chatbots – the bigger issue is that we don’t really want to.

Companies often miss this point as they attempt to humanise their bots with names and photos, representing these automated algorithms as something they’re not.

Research consistently demonstrates better perceptions of bots when they’re not pretending to be people, though innate prejudice means we’re still unlikely to trust them with anything more than basic requests or information.

By: Neil Cumins

Neil is our resident tech expert. He's written guides on loads of broadband head-scratchers and is determined to solve all your technology problems!