March 28, 2024

Gmail’s Smart Compose is one of Google’s most interesting AI features, predicting what users will write in emails and offering to finish their sentences for them. But like many AI products, it’s only as smart as the data it’s trained on, and it is prone to making mistakes. That’s why Google has blocked Smart Compose from suggesting gender-based pronouns like “him” and “her” in emails: Google is worried it’ll guess the wrong gender.

Reuters reports that this limitation was introduced after a research scientist at the company discovered the problem in January this year. The researcher was typing “I am meeting an investor next week” in a message when Gmail suggested a follow-up question, “Do you want to meet him?”, misgendering the investor.

Gmail product manager Paul Lambert told Reuters that his team tried to fix this problem in a number of ways, but none was reliable enough. In the end, says Lambert, the easiest solution was simply to remove these types of replies altogether, a change that Google says affects fewer than one percent of Smart Compose predictions. Lambert told Reuters that it pays to be cautious in cases like these, as gender is a big, big thing to get wrong.
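To give a rough sense of what “removing these types of replies” could look like, here is a minimal, purely illustrative sketch of a suggestion filter. Google has not described its actual implementation, so the pronoun list and the `filter_suggestions` function below are assumptions made for illustration only.

```python
from typing import List

# Hypothetical blocklist of gendered pronouns (an assumption, not Google's list).
GENDERED_PRONOUNS = {"he", "him", "his", "she", "her", "hers"}

def filter_suggestions(candidates: List[str]) -> List[str]:
    """Drop any candidate completion that contains a gendered pronoun."""
    safe = []
    for text in candidates:
        tokens = {token.strip(".,?!").lower() for token in text.split()}
        if tokens.isdisjoint(GENDERED_PRONOUNS):
            safe.append(text)
    return safe

print(filter_suggestions(["Do you want to meet him?", "Do you want to meet?"]))
# -> ['Do you want to meet?']
```

A blunt filter like this doesn’t make the underlying model any less biased; it just stops the biased prediction from ever reaching the user, which matches Lambert’s description of the fix.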

This little bug is a good example of how software built using machine learning can reflect and reinforce societal biases. Like many AI systems, Smart Compose learns by studying past data, combing through old emails to find what words and phrases it should suggest.

In Lambert’s example, it seems Smart Compose had learned from past data that investors were more likely to be male than female, so it wrongly predicted that this one was too.
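A deliberately simplified sketch can show how that happens. Smart Compose uses a neural language model, not simple word counting, but the effect is the same: if the historical text skews one way, the most likely prediction skews with it. The tiny corpus below is invented for illustration.

```python
from collections import Counter

# Toy "training data": made-up historical sentences about investors,
# skewed toward "him" the way real archives often are.
corpus = [
    "great meeting with the investor, I will email him tomorrow",
    "the investor said he wants another call, so I will phone him",
    "our investor asked us to send him the deck",
    "the investor wants a follow-up, please invite her to the demo",
]

# Count which pronoun co-occurs with "investor" in each sentence.
pronoun_counts = Counter()
for sentence in corpus:
    for pronoun in ("him", "her"):
        if "investor" in sentence and pronoun in sentence.split():
            pronoun_counts[pronoun] += 1

# A naive predictor just picks the pronoun it has seen most often with "investor".
print(pronoun_counts)                    # Counter({'him': 3, 'her': 1})
print(pronoun_counts.most_common(1)[0])  # ('him', 3) -> the model suggests "him"
```

The model isn’t deciding anything about this particular investor; it is simply echoing the statistics of the emails it was trained on.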

It’s a relatively small gaffe, but indicative of a much larger problem. If we trust predictions made by algorithms trained using past data, then we’re likely to repeat the mistakes of the past. Guessing the wrong gender in an email doesn’t have huge consequences, but what about AI systems making decisions in domains like healthcare, employment, and the courts? Only last month it was reported that Amazon had to scrap an internal recruiting tool trained using machine learning because it was biased against female candidates. AI bias could cost you your job, or worse.

For Google this issue is potentially huge. The company is integrating algorithmic judgments into more of its products and sells machine learning tools around the world. If one of its most visible AI features is making such trivial mistakes, why should consumers trust the company’s other services?

The company has obviously seen these issues coming. In a help page for Smart Compose it warns users that the AI models it uses “can also reflect human cognitive biases. Being aware of this is a good start, and the conversation around how to handle it is ongoing.” In this case, though, the company hasn’t fixed much — just removed the opportunity for the system to make a mistake.
