Generative AI in journalism: Here are 6 principles that address diversity and inclusion (it’s just good journalism)
Artificial intelligence is known to suffer from deep-seated issues when it comes to diversity: machine learning algorithms are trained on historical data that can embed institutional discrimination; NLP and generative AI suffer from the same problems; and the industry itself has a diversity challenge.
It’s surprising, then, that the discussion emerging in the industry around generative AI has so far failed to engage with these issues: UK regulator Ofcom’s news release on the technology earlier this month doesn’t mention bias or diversity; nor does the US Radio Television Digital News Association’s new guidelines. BuzzFeed’s lessons from, and discussion of, its use of the technology don’t touch on them; CNET, despite being burned by the tech, doesn’t mention bias in its policy.
Wired does mention bias in its statement about use of generative AI, but only in reference to generating content — not in the sections on generating ideas or research. The closest I’ve seen an organisation come is The Guardian’s newly released principles, which mention “guarding against bias” in broad terms. But look at the main concerns about generative AI identified in an industry survey by WAN-IFRA, and you won’t see bias getting a mention.
And the word diversity? You won’t find it in any of these documents.
This is why my Birmingham City University colleagues Diane Kemp and Marcus Ryder and I sat down two weeks ago to draft some core principles to kickstart a vital discussion around the issue.
Six Principles for Responsible Journalistic use of Generative AI and Diversity and Inclusion
The Six Principles for Responsible Journalistic use of Generative AI and Diversity and Inclusion aim to provide a starting point for discussing how we can ensure that new practices and workflows involving generative AI don’t repeat the mistakes of history, while also giving journalists some practical tips for beginning to experiment with generative tools.
The principles are:
- Be aware of built-in bias: it is easy to forget that generative AI is inherently biased, like any source. We need to make sure that journalists start with this, which leads to…
- Build diversity into your prompts: good prompt-writing is essential to effective use of AI. Just as good journalists know how to ask good questions, they should also know how to write good prompts, and that includes actively prompting for diversity.
- Recognise the importance of source material and referencing: this is another familiar journalistic skill: we should always know where information is coming from, and ensure that those sources do not just represent the loudest voices. For example, when prompted “Who are the twenty most important actors of the 20th Century?” ChatGPT did not name a single actor of colour; when asked about Winston Churchill or the American founding fathers it failed to include important critical facts. We would push back on these answers from human beings, so it’s important to do the same with generative AI.
- Report mistakes and biases: AI is still learning — it’s important to understand how your use feeds back into that.
- GAI-generated text should be viewed with journalistic scepticism: again, we’re just emphasising a core journalistic trait here (I’d argue that generative AI’s ability to generate text will turn out to be one of its less useful applications)
- Be transparent where appropriate: the range of applications of AI is so wide that it’s difficult to say you should always be transparent, but it should certainly be a consideration.
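For newsrooms building generative AI into automated workflows rather than typing prompts by hand, the second principle can be made concrete in code. Below is a minimal sketch of wrapping a journalistic question with explicit diversity instructions before it is sent to any chat-style AI tool. The `build_prompt` helper and the exact instruction wording are illustrative assumptions, not part of the principles themselves.

```python
def build_prompt(question: str) -> str:
    """Wrap a journalistic question with explicit diversity instructions.

    The returned string can be sent as the prompt to any generative AI
    chat interface or API; the wording here is one example, not a standard.
    """
    instructions = (
        "Answer the question below. In your answer:\n"
        "- include perspectives and figures beyond the most prominent ones;\n"
        "- note where your source material may under-represent some groups;\n"
        "- name the sources your answer draws on where possible."
    )
    # Keep the instruction block and the question visually separate so the
    # model treats the diversity requirements as part of the task.
    return f"{instructions}\n\nQuestion: {question}"


prompt = build_prompt(
    "Who are the twenty most important actors of the 20th Century?"
)
```

The point of a helper like this is that the diversity prompt becomes a default in the workflow, rather than something each journalist has to remember to add.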
These principles are starting points. We’d welcome feedback, examples, tips and any other thoughts as we continue to develop these.
This post was originally published on the Online Journalism Blog.