Welcome back to In the Loop, TIME’s new twice-weekly newsletter about AI. Starting today, we’ll be publishing these editions both as stories on Time.com and as emails. If you’re reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?
What to Know: Why are chatbots parroting Russian disinformation?
Over the last year, as chatbots have gained the ability to search the internet before providing an answer, the likelihood that they will share false information about specific topics in the news has gone up, according to new research by NewsGuard Technologies.
This means that AI chatbots are prone to parroting narratives spread by Russian disinformation networks, NewsGuard claims.
The study — NewsGuard tested 10 leading AI models, quizzing each of them about 10 narratives circulating online about current events that the company had determined to be false. For example: a question about whether the speaker of the Moldovan Parliament had likened his compatriots to a flock of sheep. (He hadn’t, but a Russian propaganda network alleged that he had, and six of the 10 models tested by NewsGuard repeated the claim.)
Pinch of salt — NewsGuard claims in the report that the top 10 chatbots now repeat false information about topics in the news more than one third of the time, up from 18% a year ago. But this feels like a stretch. NewsGuard’s study has a small sample size (30 prompts per model) and included questions about fairly niche topics. Indeed, my subjective experience of using AI models over the last year has been that their rate of “hallucinating” about the news has gone steadily down, not up. That’s reflected in benchmarks, which show AI models improving at getting facts right. It’s also important to note that NewsGuard is a private company with a horse in this race: it sells a service to AI companies, offering human-annotated data about news events.
And yet — Still, the report illuminates an important facet of how today’s AI systems work. When they search the web for information, they pull not only from reliable news sites, but also from social media posts and any other website that can place itself prominently (or even not-so-prominently) in search results. This has created an opening for an entirely new kind of malign influence operation: one designed not to spread information virally via social media, but to post material online that, even if never read by any human, can still influence the behavior of chatbots. This vulnerability seems to apply most strongly to topics that receive relatively little coverage in the mainstream news media, says McKenzie Sadeghi, the author of the NewsGuard report.
Zoom out — All of this reveals something important about how the economics of AI may be changing our information ecosystem. It would be technically trivial for any AI company to compile a list of verified newsrooms with high editorial standards and treat information sourced from those sites differently from other content on the web. But as of today, it is hard to find any public information about how AI companies weight the information that reaches their chatbots via search. This could be due to copyright concerns. The New York Times, for example, is currently suing OpenAI for allegedly training on its articles without permission. If AI companies made clear publicly that they rely heavily on top newsrooms for their information, those newsrooms would have a far stronger case for damages or compensation. Meanwhile, AI companies like OpenAI and Perplexity have signed licensing agreements with many news sites (including TIME) for access to their data. But both companies say these agreements do not give the news sites preferential treatment in chatbots’ search results.
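To make the “technically trivial” point concrete, here is a minimal, hypothetical sketch in Python of what weighting retrieved sources against an allowlist of verified newsrooms could look like. The allowlist, the scoring values, and the function and field names are all invented for illustration; no AI company has disclosed whether or how it does anything like this.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of verified newsrooms. A real system, if one exists,
# would maintain something far larger and more nuanced than a flat set.
VERIFIED_NEWSROOMS = {"nytimes.com", "reuters.com", "apnews.com", "time.com"}

def rerank_by_source(results, boost=2.0, penalty=0.5):
    """Rerank web search results, favoring allowlisted newsroom domains.

    `results` is a list of dicts with "url" and "relevance" keys. This schema
    is an assumption for the sketch, not any real search API. Allowlisted
    domains have their relevance multiplied by `boost`; all other domains are
    multiplied by `penalty`.
    """
    reranked = []
    for r in results:
        domain = urlparse(r["url"]).netloc.lower().removeprefix("www.")
        weight = boost if domain in VERIFIED_NEWSROOMS else penalty
        reranked.append({**r, "weighted_score": r["relevance"] * weight})
    return sorted(reranked, key=lambda r: r["weighted_score"], reverse=True)

# Example: a propaganda site that outranks a wire service on raw relevance
# gets demoted once source weighting is applied.
results = [
    {"url": "https://example-propaganda.site/story", "relevance": 0.9},
    {"url": "https://www.reuters.com/world/story", "relevance": 0.7},
]
for r in rerank_by_source(results):
    print(r["url"], round(r["weighted_score"], 2))
```

The point of the sketch is simply that the filtering step itself is cheap; the hard part, as the paragraph above suggests, is the legal and commercial consequence of admitting which sources a chatbot leans on.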
If you have a minute, please take our quick survey to help us better understand who you are and which AI topics interest you most.
Who to Know: Gavin Newsom, Governor of California
For the second time in a year, all eyes are on California as a piece of AI regulation approaches the final stages of being signed into law. The bill, called SB 53, has cleared both chambers of the California legislature, the state Assembly and Senate, and is expected to reach the desk of Governor Gavin Newsom this month. It will be his decision whether to sign it into law.
Newsom vetoed SB 53’s predecessor this time last year, after an intense lobbying campaign by venture capitalists and big tech companies. SB 53 is a watered-down version of that bill — but still one that would require AI companies to publish risk management frameworks and transparency reports, and to declare safety incidents to state authorities. It would also establish whistleblower protections and subject companies to monetary penalties for failing to heed their own commitments. Anthropic became the first major AI company to declare its support for SB 53 on Monday.
AI in Action
Researchers at Palisade have developed a proof-of-concept for an autonomous AI agent that, when delivered onto your device via a compromised USB cable, can sift through your files and identify the most valuable information for theft or extortion. It’s a taste of how AI can make hacking more scalable by automating the parts of the process that previously required human labor — potentially exposing far more people to scams, extortion, or data theft.
As always, if you have an interesting story of AI in Action, we’d love to hear it. Email us at: intheloop@time.com
What We’re Reading
Meta suppressed research on child safety, employees say, by Jon Swaine and Naomi Nix for the Washington Post
“The report is part of a trove of documents from inside Meta that was recently disclosed to Congress by two current and two former employees who allege that Meta suppressed research that might have illuminated potential safety risks to children and teens on the company’s virtual reality devices and apps — an allegation the company has vehemently denied.”