Grammarly’s new ‘tone detector’ feature operates much like a spellchecker, reading through text and then making suggestions. Photograph: Ekaterina Minaeva/Getty Images

‘You sound worried’: would you let an AI change the tone of your emails?

A new ‘tone detector’ from Grammarly promises to save us from causing unintentional offence. How else might it impact our language?

Wed 6 Nov 2019 12.00 EST

On the first episode of the final season of HBO comedy series Silicon Valley, tech startup engineer Bertram Gilfoyle lets an AI version of himself take over his instant messaging duties. “Do you need the real me for this conversation?” he asks his colleague.

It may sound extreme, but the existence of spellcheckers predates the personal computer by a decade. Since 1992, grammar checking has also come as standard in word processors. For the better part of a generation, we’ve been OK with robots watching and correcting our language, occasional run-ins with Clippy aside.

Now predictive text software on our phones monitors how we write every day, and Gmail’s Smart Compose suggests one-click canned responses to every email.

Silicon Valley’s AI mimicked its character’s misanthropy, but grammar-correcting software Grammarly had a more polite vision in mind when it launched a new “tone detector” feature. It asked its users: “Have you ever pressed send on an email and realised it may have sounded a bit too aggressive?”

You install the artificial intelligence-powered plugin on to your web browser, where it scans your emails and grades them as “confident”, “optimistic”, “worried” or “sceptical” – it claims to identify 40 tones. Much like a spellchecker, it will read through your text and then make suggestions at the bottom of your screen, or when you hover your mouse over a grumpy red underline. It feels neat and familiar, but the software raises a broader question than the one it asks its users: to what standard are our interactions being steered?
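Grammarly hasn’t published how its detector works under the hood, but the behaviour it describes – scan the text, match it against learned tone signals, surface a label – can be sketched in miniature. The keyword cues and function below are invented for illustration, a toy stand-in for whatever machine-learned classifier the real product uses:

```python
# Purely hypothetical sketch of tone detection via keyword cues.
# The cue lists and labels are invented for this example.
TONE_CUES = {
    "confident": {"definitely", "certainly", "will", "can"},
    "worried": {"concerned", "afraid", "worry"},
    "sceptical": {"doubt", "unlikely", "supposedly"},
    "optimistic": {"thrilled", "excited", "great"},
}

def detect_tones(text: str) -> list[str]:
    """Return every tone whose cue words appear in the text."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    return [tone for tone, cues in TONE_CUES.items() if words & cues]

print(detect_tones("I doubt we can fix this, and I am concerned about the deadline."))
# ['confident', 'worried', 'sceptical']
```

The real system is presumably statistical rather than a word list, but the interface effect is the same: text goes in, a handful of tone labels come out.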

Grammarly’s tone detector scans your emails and grades them as ‘confident’, ‘optimistic’, ‘worried’ or ‘sceptical’. Photograph: Grammarly

“Grammarly can tell you how your message is likely to sound to someone reading it,” co-founder and product manager Alex Shevchenko tells Guardian Australia. It will also provide suggestions to spice up vocabulary it deems ineffective or bland, suggesting “thrilled” over “very happy”, and flagging “u” instead of “you” as inappropriately casual.

The phrase “I think we should be able to solve this issue for you” will flash red before changing to “we can solve this issue for you” – it sounds more “confident”, according to the handshake emoji at the bottom of the screen.

The tone detector, which works across American, British, Australian and Canadian English, was designed by a team of linguists and software engineers. It learns from its users, too, so if the majority votes a word fits one “tone”, then alternative forms of communication lose value and the most popular tone choice is normalised. But technology is not a magical orb that exists beyond our reality; every dataset, rule and machine-learning algorithm is guided by its human creators and users and the biases they carry.
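That majority-vote dynamic is easy to see in a toy model. Assuming, purely for illustration, that a word’s tone label is settled by whichever label most users endorse, a minority reading is simply outvoted – the vote counts below are invented:

```python
from collections import Counter

# Invented vote data: in Aboriginal Australian slang, "deadly" means
# excellent, but that minority reading would be outvoted in a global corpus.
votes = {"deadly": ["negative"] * 90 + ["positive"] * 10}

def majority_label(word: str) -> str:
    """The tone a word gets 'branded' with is whatever most users voted for."""
    return Counter(votes[word]).most_common(1)[0][0]

print(majority_label("deadly"))  # 'negative' -- the minority reading is lost
```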

“There is a dominant tone,” Deborah Raji, a robotics engineer and tech fellow at the AI Now Institute in New York, says of Grammarly’s detector. “Whatever they’re using to train their models … automatically kind of supersedes any other dialect or smaller group that speaks differently, and that dialect ends up being branded as the wrong thing.” These technologies therefore risk reinforcing the idea that less common methods of communication are less valid.

Grammarly does not collect gender or age information from its users and therefore doesn’t take these into account when providing feedback, which means that everyone’s words are edited equally.

Shevchenko says the software is more about giving agency and empowering effective communication. “We understand that the contexts in which people communicate involve biases and assumptions, and it’s our hope that the tone detector offers support in navigating such complexities.”

Grammarly’s tone detector confirming an email sounds confident and optimistic. Photograph: Grammarly

Other programs do take the identity of the sender into account. Non-AI Google Chrome plugin Just Not Sorry, created by New York-based software consultancy Def Method, was designed to help women erase unnecessary qualifiers from their emails. It highlights hedging words like “I just think” or “we could” to help users sound more assured.
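Just Not Sorry’s actual code isn’t reproduced here, but the idea – pattern-match hedging phrases in a draft and flag them – is simple to sketch. The two patterns come from the article’s examples; the third and everything else are invented for illustration:

```python
import re

# Hedging patterns: "I just think" and "we could" are the article's
# examples; "sorry" is added hypothetically, echoing the plugin's name.
HEDGES = [r"\bI just think\b", r"\bwe could\b", r"\bsorry\b"]

def flag_hedges(draft: str) -> list[str]:
    """Return each hedging phrase found in the draft, in pattern order."""
    found = []
    for pattern in HEDGES:
        found += re.findall(pattern, draft, flags=re.IGNORECASE)
    return found

print(flag_hedges("Sorry to bother you, but I just think we could ship on Friday."))
# ['I just think', 'we could', 'Sorry']
```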

Rachel Adams, an AI and international human rights law research specialist at the Human Sciences Research Council, explains that modifying phatic speech (the kind aimed at establishing a social relationship instead of discussing ideas – like asking, “how are you?”) tells users that the dominant, masculine-corporate speech is preferable. If they want to sound confident, they need to follow suit.

“AI needs to be very narrow in its focus but think very carefully and broadly about what the social impacts of its technologies are, and to try to develop technologies that are much more inclusive in the way that they operate. It should allow for multiple ways of expression,” Adams says. “It’s like saying, is there one single truth that exists that we can all agree to? And there’s not.”

Grammarly and Just Not Sorry also can’t account for how the receiver reads the message. Some people speak cautiously because they have adapted to their immediate social expectations. Software might be able to tell you what’s normative, but it cannot tell that you’ve intentionally phrased an email delicately because it’s 12.30pm on a Thursday and you know your boss will be hangry when they read it. That requires social nous even the cleverest algorithm can’t process.

While a tone detector can be a helpful bulwark against a hastily worded message, or a useful tool for those who speak English as a second language, the benefits of these technologies are not shared equally by every user. As Raji explains, “It doesn’t account for these kinds of specific contexts, because it actually does matter who’s sending it. Not everyone can sound the same.”
