AI use in the media: Cosmos project explores opportunities and risks 

The rapid rise of AI over the last couple of years has touched every aspect of our lives.  

There are clear opportunities for its use, including in healthcare, scientific research and education.  

The risks are also all too apparent. AI-assisted deepfakes and disinformation dominate the headlines, but the potential impacts of AI on society, government, employment and the environment are still being understood.   

Cosmos has a long history of communicating science to a wide audience. Our science-trained journalists report on the latest research and interview those behind the stories. Cosmos prides itself on fact-based reporting, including on the research and use of AI.  

We have made the decision to cross the divide and become part of a science experiment to answer important questions about the use of AI in the media. Our aim is to report back on our experiences of using AI in our newsroom and provide evidence-based examples of the risks and opportunities for science communication.  

Our first learning, with the benefit of hindsight, is that we needed to communicate the project more clearly before testing and publishing trial articles. We could have done better in that respect. We have briefly paused the project while we review the feedback and queries received so far, and we remain committed to ensuring responsible and ethical AI practices. We value your feedback, positive and negative, as well as your questions.

What is the Cosmos AI project?    

We have more than 16,000 stories in our archive, written by current and former staff as well as a range of freelancers commissioned to write for Cosmos. Our journalists have long drawn on this content when researching new articles or explainers such as “How could we travel through time?”. The archive provides a solid factual foundation from which to create more explainers.  

Explainers are important because they help our readers understand science, and they have been popular on our website for many years. They are an educational tool for all ages. They are fundamentally different to news stories and features.  

Our journalists have written explainers in the past, but doing so takes them away from reporting on breaking news and interviewing researchers. We decided to test whether an AI system could help us create explainers and so support the work of our journalists. The explainers need to be factually correct to promote fact over misinformation. 

The Cosmos AI system uses a Retrieval-Augmented Generation (RAG) model. First, it identifies articles in the archive relevant to the question or topic. This saves the journalist from having to cross-check the archive manually at the outset (but there is important human editorial involvement later in the process – see below). Grounding the draft in retrieved articles reduces the risk of “hallucinations”, where the AI makes things up.  
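
For technically minded readers, the sketch below illustrates the general idea of a retrieval step. It is a simplified example rather than the Cosmos system itself: the toy article snippets, function names and TF-IDF similarity method are illustrative stand-ins, not our actual index or ranking approach.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# Illustrative only: the toy "archive", function names and TF-IDF similarity
# are stand-ins, not the Cosmos system's actual index or ranking method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A toy archive; in practice this would be thousands of published articles.
archive = [
    "Wormholes are hypothetical tunnels through spacetime...",
    "Special relativity shows time passes more slowly for fast-moving observers...",
    "CRISPR is a gene-editing technique adapted from a bacterial immune system...",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k archive documents most similar to the question."""
    vectoriser = TfidfVectorizer(stop_words="english")
    doc_vectors = vectoriser.fit_transform(documents)
    query_vector = vectoriser.transform([question])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = sorted(zip(scores, documents), key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in ranked[:top_k]]

if __name__ == "__main__":
    for passage in retrieve("Could we travel through time?", archive):
        print(passage[:60], "...")
```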

The second step is the writing phase, which involves a large language model (LLM). OpenAI’s GPT-4 was chosen after research into various platforms. GPT-4 assists in writing the explainers. Our story archive is not used to train GPT-4 or any other LLM. We do not use ChatGPT, OpenAI’s publicly available product that many readers would be familiar with.    
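
Again for illustration only, the sketch below shows how a drafting step can pass retrieved archive passages to GPT-4 through OpenAI’s Python library, so the model writes from our own reporting rather than from memory. The prompt wording and settings shown are simplified stand-ins, not our production configuration.

```python
# Illustrative sketch of the drafting step: retrieved archive passages are
# supplied as context so the model writes from the publisher's own reporting.
# The prompt and settings are assumptions, not the production configuration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def draft_explainer(question: str, retrieved_passages: list[str]) -> str:
    """Ask GPT-4 to draft an explainer grounded in the retrieved passages."""
    context = "\n\n".join(retrieved_passages)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Draft a short science explainer using only the "
                        "source passages provided. If the passages do not "
                        "answer the question, say so."},
            {"role": "user",
             "content": f"Question: {question}\n\nSource passages:\n{context}"},
        ],
    )
    return response.choices[0].message.content

# The resulting draft is then handed to human editors for fact-checking.
```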

The final step in our process is the most important. Trained science communicators fact-check and edit the initial draft explainers created with the assistance of the AI system. Nothing is published without at least two real people, including our journalists, reviewing, editing and finalising each explainer. We will monitor how long that work takes and compare it with a journalist writing and fact-checking an explainer from scratch.   

Cosmos received a grant from the Walkley Foundation’s 2023 Meta News Fund to undertake this project. The project runs from March 2024 until February 2025. The Cosmos AI system was built by an external developer who has worked with Cosmos for many years. The system is not publicly available, but we will continue to share our project findings and experiences publicly.  

If we want to ask the question “what is the point of AI in the media?”, then we need to investigate how it can be used and interrogate the results. As we cross this frontier, our work must be underpinned by a steadfast commitment to ethical and responsible journalistic practices.  