
https://defradigital.blog.gov.uk/2019/12/16/content-design-and-artificial-intelligence-friends-or-enemies/

Content design and artificial intelligence: friends or enemies?

Categories: Defra content design
(Image: robot holding a pencil above its head. Photo by delgrosso on Flickr, used under Creative Commons.)

You’ve probably heard the scare stories about automation cutting swathes through traditional employment. Artificial intelligence (AI) can now even write news reports – so could content designers, who create clear, easy-to-use information on GOV.UK, also find themselves redundant?

Turns out that AI can’t replace the human element of content design just yet. But it could transform the way we create human-centred content at Defra and other government departments in fascinating ways.

Why algorithms don’t get party invitations

Machine learning is a kind of artificial intelligence that makes decisions based on data. It’s proved a great tool to help a computer beat a chess grandmaster.
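To make “decisions based on data” concrete, here’s a minimal sketch in Python. The scenario and numbers are invented for illustration: a classifier learns a rule from labelled examples instead of being programmed with one.

# A minimal, invented example: learn from labelled pages whether a page
# needs rewriting, rather than hand-coding the rule ourselves.
from sklearn.tree import DecisionTreeClassifier

# Features: [page length in words, reading age]. Labels are past decisions.
examples = [[2500, 16], [300, 9], [1800, 15], [450, 10]]
labels = ["rewrite", "ok", "rewrite", "ok"]

model = DecisionTreeClassifier().fit(examples, labels)

# The model now makes a decision about a page it has never seen.
print(model.predict([[2000, 14]]))  # e.g. ['rewrite']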

Yet algorithms still fall short of humans in many interactions. GOV.UK recently introduced a ‘related content’ algorithm to help users navigate the site. It’s early days, but the team behind it say: “We decided that we would not replace the hand-curated links in those 2,000 pages that had them, as despite how good our algorithms are at the moment, they still do not have the same context as a subject matter expert.” The algorithm lacks the understanding of context that allows a human to pick the most useful links.
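For illustration only, here’s how crude a ‘related content’ ranking can be: compare pages by word overlap, using TF-IDF cosine similarity (a standard technique). The page summaries are invented, and this is not the GOV.UK team’s actual algorithm.

# Rank pages by word overlap with a target page.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pages = {
    "fishing-licence": "apply for a rod licence to fish for salmon and trout",
    "flood-risk": "check the long term flood risk for an area in England",
    "boat-registration": "register a boat to fish on inland waterways",
}

vectors = TfidfVectorizer().fit_transform(pages.values())
scores = cosine_similarity(vectors[0], vectors).flatten()

# Most similar pages to 'fishing-licence', excluding itself.
related = sorted(zip(pages, scores), key=lambda p: -p[1])[1:]
print(related)

Word overlap is easy to compute; knowing which link will actually help a user is not.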

Using similar logic, Facebook announced in August that it would no longer rely exclusively on algorithms to surface news for users. Journalists will help curate its News Tab feature, choosing stories relevant to users.

Robotic writing

Such limitations have become apparent outside the world of government, too. You’ve probably read journalism generated by AI, even if you didn’t know it. You might have read an automated report on the weather, finances or election votes.

A third of the content published by Bloomberg News, for example, is produced with the help of an application called Cyborg, which helps journalists publish thousands of reports on company earnings each quarter. It’s proved a good tool for processing lots of data and presenting facts.

But you won’t have read any AI-written analysis of those reports or of the data behind them. Interpreting information remains a human endeavour.

As Lisa Gibbs, an Associated Press director, says, journalism involves “critical thinking [and] judgment – and that is where we want our journalists spending their energy”.

Similarly, good content design takes into account experience and emotions, understanding that people don’t just “process information”.

Dangerous data

Moreover, artificial intelligence can get data dangerously wrong. Microsoft’s chatbot Tay (“thinking about you”) was launched in 2016. It lasted 16 hours and 96,000 tweets before Microsoft withdrew it: Tay had quickly learned to use racist language and praise Hitler.

Tay learned its language from other users, some of them racist and some of them on a mischievous mission to show how data-driven AI can be easily manipulated.

AI is getting good at summarising text

Artificial intelligence is, however, improving at summarising information – a crucial technique of human content designers. A team at the Massachusetts Institute of Technology has developed a form of AI that can summarise complicated scientific papers. It just doesn’t appear to be very good at it yet.

“[A] recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing” is one example of its editorial efforts (quoted by ScienceDaily) – output that still clearly needs a living, breathing editor to make it halfway useful.

As a starting point, this AI summarising may seem promising, but it also exposes AI’s limitations. AI could potentially summarise a piece of scientific legislation, but it couldn’t write a plain-English, step-by-step guide on how to follow the law and what happens if you don’t.
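To see why, here’s a deliberately crude extractive summariser, sketched in Python with invented text: it scores sentences by word frequency and keeps the top one. Research systems like MIT’s are far more sophisticated, but the gap between selecting sentences and explaining the law is the same.

# Score each sentence by how frequent its words are in the whole text,
# then keep the highest-scoring sentence as the 'summary'.
import re
from collections import Counter

text = ("You must apply for a licence before you fish. "
        "A licence costs thirty pounds and lasts a year. "
        "Fishing without a licence can lead to a fine.")

sentences = re.split(r"(?<=\.)\s+", text)
freq = Counter(re.findall(r"[a-z]+", text.lower()))

def score(sentence):
    return sum(freq[w] for w in re.findall(r"[a-z]+", sentence.lower()))

print(max(sentences, key=score))  # picks a sentence; it can't explain the law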

Testing content with emotions and data

Similarly, when it comes to online services – applying for a fishing licence or checking your flood risk, to name two of the services Defra runs – machine learning can tell you where users are dropping out, but not why they drop out or how to move them on.

You need real humans testing content with other real humans to find out where people become frustrated or angry, and to write help text they can understand and act on. This is what content designers do.
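Here’s the ‘where but not why’ distinction as a tiny Python sketch, with invented page-view counts for a four-step service:

# Compute where users abandon a journey. Nothing in this data can say
# why they left or what help text would move them on.
steps = ["start", "personal-details", "payment", "confirmation"]
visits = {"start": 1000, "personal-details": 720,
          "payment": 410, "confirmation": 385}

for before, after in zip(steps, steps[1:]):
    dropped = visits[before] - visits[after]
    print(f"{before} -> {after}: {dropped} users lost "
          f"({dropped / visits[before]:.0%})")

The numbers point at the payment step; only research with real users can say whether the problem is a baffling error message, a missing payment method or something else entirely.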

AI could help voice search

There is one really exciting implication of AI for content design – voice recognition.

Machine learning could find out the most popular search terms that people use to find content. It could then modify content metadata so the summary appearing in a Google search matched those terms.
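As a sketch of that idea – the search log and page description below are invented – you could count popular search terms and flag the ones a page’s metadata description doesn’t contain:

# Count the terms users actually search with, then flag popular terms
# missing from a page's metadata description.
from collections import Counter

search_log = ["fishing licence cost", "rod licence price",
              "fishing licence cost", "how much is a fishing licence"]
description = "Apply for a rod fishing licence for England and Wales."

terms = Counter(" ".join(search_log).split())
missing = [t for t, _ in terms.most_common(5)
           if t not in description.lower()]
print("Popular terms missing from the description:", missing)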

Sounds great, right? Not so fast. What if those mischief-makers who manipulated Tay to be racist bombarded voice search for a political candidate’s manifesto with racist terms?

AI can’t understand emotion in people’s voices. Until computers can compete with people at recognising emotion and feeling, content design will remain a human activity.

Content design needs emotional intelligence

There’s no doubt AI can help content design – for example, by keeping statistics or other huge datasets up to date. But until it gets a grasp of nuance, context and feeling, human content designers won’t be lining up for other jobs just yet.
