Depending on who you ask, generative AI will transform media for the better – bringing gains in efficiency, mitigating human-created errors, and widening access to information – or lower the standards for reporting and commentary while increasing the proliferation of fake news.
But how often are the public actually encountering AI-written articles? And if the first draft of history is written by a chatbot, how much do people actually care?
New YouGov Surveys data shows that around a quarter of the public say they have read a news article that they believe to have been at least partially written by AI (26%). A fifth say they have not (20%), while over half are not sure either way (54%).
The proportion who believe they have read an AI-written article is higher among the youngest group of Britons, aged 18-34 (26%), than among the oldest group of over-55s (18%). When asked how frequently they read articles that were partially written with the help of an LLM, 28% say they do so always or often, 46% say they do so sometimes, and just 18% say they do so rarely or never.
In any case, whether they have read AI-written articles or not, the public are not overly keen on them. Some three quarters of Britons say they would be less likely to read an article partially written with LLM-generated content (72%), while just 3% say they would be more likely to.
This remains the majority view among those Britons who have read AI-generated content, although this group is slightly more receptive to it (73% less likely vs. 9% more likely).
Are Britons confident in their ability to identify AI-written articles?
Putting general sentiment to one side, it’s worth asking if the public can actually identify AI-written articles. Our data shows that a third (35%) believe they would be able to spot an article at least partially written by an LLM, with three in five (59%) saying they would not be confident in their ability to do so.
This confidence increases among Britons who say they have at some point read an AI-generated article: three in five are confident in their ability to spot this kind of content, while just under two in five (38%) say they are not.
AI disclosure: Do Brits think journalists should disclose when they’ve used AI for writing news articles?
Finally, we asked the public whether they think journalists should disclose their use of AI in several areas relevant to the profession. In almost every instance, the public are in favour of transparency.
Britons are most likely to say that use of AI should be disclosed when it comes to publishing news articles (91%) or opinion pieces (91%) that have been written without the involvement of a human editor. Involving a human editor changes the figures, but not the overall outcome: the public still prefer disclosing the use of AI for an opinion piece (82%) or a news piece (80%) that has had journalistic oversight. The same goes for visuals or graphics that have been created with artificial intelligence platforms (79%), and for using AI to rewrite articles that began as a human-written first draft.
A majority also think use of AI should be disclosed for print or online media that produces article summaries (71%), or when LLMs have been used to generate ideas for articles (60%). The only instance where a majority are not in favour of disclosure is when it comes to proofreading articles for errors, and even in this case, a plurality support a more transparent approach (49%).
Methodology
YouGov polled 2,014 British adults online on 19-20 February 2026. The survey was carried out through YouGov Surveys: Self-serve. Data is weighted by age, gender, education level, region, and social grade. The margin of error is 2% for the overall sample. Learn more about YouGov Surveys: Self-serve.
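As a rough sanity check, the quoted margin of error is consistent with the standard worst-case approximation for a simple random sample at the 95% confidence level (the function name below is ours, and this simple formula ignores the design effects introduced by weighting):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n.

    p=0.5 maximises the variance term p*(1-p), giving the most
    conservative estimate; z=1.96 is the 95% confidence multiplier.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Sample of 2,014 adults, as in the survey above
moe = margin_of_error(2014)
print(f"{moe:.1%}")  # roughly 2%
```

Weighted survey data typically has a slightly larger effective margin of error than this simple-random-sample figure suggests, which is why pollsters usually quote it as an approximation.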
Image: Getty
