A little while back, editors at the Financial Times spotted a problem: the sources quoted in journalists’ articles were overwhelmingly men – as many as eight in 10. To help, the newspaper’s developers built a bot to automatically scan stories, hoping to spot unconscious bias and encourage writers to find more female voices.
That’s one way bots, be they chatbots or actual robots, are helping to make us better people: encouraging gender equality, helping us quit smoking or lose weight, supporting mental health – even popping up to tweak our writing. Workplace chat tool Slack has a bot that checks the tone of your messages, so you don’t accidentally send a bit of snark to a colleague.
The trick, says Professor Jon May, psychologist at the University of Plymouth, is for the bot to avoid nagging. “Focus on the positives,” he says. “Rewarding people when they’ve done something successful helps them think about how they might change.”
How such bots help us depends on the task at hand. Some run in the background to track patterns, such as the FT’s source scanner or Slack’s tone tool. Others are more reactive, such as a chatbot for finances or weight loss that you tell, in natural language, what you spent or ate in a given day, so it can track your behaviour and improve it with nudges. Public Health England built a Facebook bot to help people quit smoking by offering support to those suffering cravings. And there are plenty of chatbots, such as Woebot, that offer automated mental health support, letting people ‘talk’ to someone without risking judgement.
May has taken the idea a step further: rather than testing the use of chatbots to keep students from procrastinating, he has actual robots sitting on their desks. The Nao is a small humanoid robot that’s programmable, making it a favourite of researchers. “We have robots who take people through some of the interventions that would normally be delivered by therapists and counsellors to motivate them to change that behaviour,” he says. The Nao bots are “really cute, childlike things”, which help people open up. “They’re not embarrassed talking to it, that’s quite an advantage of using a robot in sensitive personal situations, so people can be honest with themselves.”
Bots have another power: they spot patterns that we wouldn’t naturally see. Like the FT’s source scanning tool, Textio scans job ads to check for unintentionally biased language, in the hopes of encouraging more women to apply for roles. “One role bots can play very well, as in the FT use case around gender bias in storytelling, is making visible that which otherwise might not be apparent and providing human beings with the means to take action based on this visibility,” says Dr Chris Brauer, Director of Innovation at Goldsmiths, University of London.
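The pattern-spotting idea behind tools like the FT’s scanner can be sketched in a few lines. The snippet below is purely illustrative and not based on the FT’s or Textio’s actual code: it simply counts a hypothetical word list of gendered terms in a piece of text and reports the balance, whereas real tools draw on much richer signals such as names, titles and pronoun resolution.

```python
import re
from collections import Counter

# Hypothetical, deliberately naive word lists -- real scanners
# use far more sophisticated signals than bare keyword counts.
MALE = {"he", "him", "his", "mr"}
FEMALE = {"she", "her", "hers", "ms", "mrs"}

def gender_balance(text: str) -> dict:
    """Count gendered terms in text and report the split."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    male = sum(counts[w] for w in MALE)
    female = sum(counts[w] for w in FEMALE)
    total = male + female
    return {
        "male": male,
        "female": female,
        "female_share": female / total if total else None,
    }

print(gender_balance("He said his report was ready. She agreed."))
```

Even a crude counter like this makes an invisible pattern visible, which is the point Brauer makes: once the skew is measurable, a writer can act on it.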
Such bots only work if we build them correctly. Amazon reportedly used artificial intelligence to filter job applications, but biases built into the system meant it dumped women’s CVs into the reject pile: the algorithm was trained on previous hires, who were mostly male, so it learned to favour similar candidates.
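The mechanism behind that failure is easy to reproduce in miniature. The toy example below is an assumption-laden sketch, not Amazon’s system: a “model” that simply scores CVs by how often their keywords appeared among past hires. If past hires skewed male, terms correlated with their CVs (here a made-up “mens_chess_club” keyword) score highly and everything else scores low.

```python
from collections import Counter

# Hypothetical training data: past hires, mostly with one
# male-correlated keyword. The names are invented for illustration.
past_hires = [
    {"python", "mens_chess_club"},
    {"java", "mens_chess_club"},
    {"python", "mens_chess_club"},
    {"python", "womens_coding_society"},
]

# "Training": count how often each keyword appeared in past hires.
keyword_scores = Counter()
for cv in past_hires:
    keyword_scores.update(cv)

def score(cv: set) -> int:
    # Higher score = looks more like previous (mostly male) hires.
    return sum(keyword_scores[k] for k in cv)

# The female-correlated CV scores lower purely because of the
# historical skew in the training data, not because of skill.
print(score({"python", "mens_chess_club"}))
print(score({"python", "womens_coding_society"}))
```

Nothing in the code mentions gender at all; the bias rides in on the training data, which is exactly why such systems can reveal historical skews as well as repeat them.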
While Amazon quickly dropped the AI CV scanner, we can learn from such mistakes. “When bots start to demonstrate bias it is actually often a really good thing, as it is typically building on and revealing historical biases in human beings and decision-making that have been around for a long time but have never been visible or have been intentionally obfuscated,” says Brauer. “We can redesign the system to account for and potentially overcome these traditional biases that have limited diversity or equality in the delivery of services.” By spotting those failures, we can make bots – and us – better.
Nicole Kobie writes about tech, transport and science and is contributing editor at Wired UK @njkobie
Image: iStock/Ben Sullivan