Proliferating 'news' sites spew AI-generated fake stories
A sensational story about the Israeli prime minister's "psychiatrist" exploded online, but it was AI-generated, originating on one of hundreds of websites researchers warn are churning out tech-enabled fiction masquerading as news.
Propaganda-spewing websites have typically relied on armies of writers, but generative artificial intelligence tools now offer a significantly cheaper and faster way to fabricate content that is often hard to distinguish from authentic information.
Hundreds of AI-powered sites mimicking news outlets have cropped up in recent months, fueling an explosion of false narratives -- about everything from war to politicians -- that researchers say is stoking alarm in a year of high-stakes elections around the world.
"Israeli Prime Minister's psychiatrist commits suicide," still tops the list of "popular articles" highlighted on Global Village Space, a Pakistani digital outlet, after it made an online splash in November with baseless claims about a suicide note blaming Netanyahu.
A "substantial portion" of the site's content, including this article, appears to be scraped from mainstream sources using AI tools, according to an analysis by NewsGuard, a US-based research organization that tracks misinformation.
After scanning the site for error messages specific to content produced by AI chatbots, NewsGuard said it found significant similarities between the yarn about Netanyahu's "psychiatrist" and a fictitious 2010 article on a satirical website.
NewsGuard analyst McKenzie Sadeghi said when she prompted ChatGPT, from Microsoft-backed OpenAI, to rewrite the original article for a general news audience, the result was "very similar" to the article on Global Village Space.
"The exponential growth in AI-generated news and information sources is alarming because these sites can be perceived by the average user as legitimate, trustworthy sources of information," Sadeghi told AFP.
Pushing propaganda
The fabricated article, which came as Netanyahu pressed the war against Hamas militants in the Gaza Strip, ricocheted across social media platforms in multiple languages, including Arabic, Farsi and French.
A handful of sites published obituaries of the fictional "psychiatrist."
The falsehood also featured on a television show in Iran, Israel's arch-enemy, as its host directed viewers to read the full article on Global Village Space.
The website, which relabeled the Netanyahu article as "satire" after being called out, did not respond to AFP's request for comment.
NewsGuard has identified at least 739 AI-generated "news" sites spanning multiple languages that operate with little to no human oversight and come with generic names such as "Ireland Top News."
But even that list is probably "just the low-hanging fruit," said Darren Linvill, from Clemson University.
Linvill is among the university's disinformation experts who found several Russian-linked websites mimicking news and pushing Kremlin propaganda about the war in Ukraine ahead of the US presidential election in November.
They include DC Weekly, which NewsGuard said uses AI to rewrite articles from other sources without credit.
This site -- which appears to be owned by John Mark Dougan, a former US Marine who fled to Russia -- has published a slew of false claims, including that Ukrainian President Volodymyr Zelensky purchased two luxury yachts worth millions of dollars with American aid money.
Illustrating the power of AI-led misinformation to influence policy decisions, some US lawmakers echoed the false narrative amid a crucial debate about aid to Ukraine.
'Camouflage'
"Auto-generated misinformation is likely to be a major part of the 2024 elections," New York University professor Gary Marcus told AFP.
"Scammers are using (Generative) AI left, right and center."
The AI-generated content populating websites such as DC Weekly helps "to create a sort of camouflage" that lends more credibility to their false stories penned by humans, Linvill told AFP.
These websites underscore the potential of AI tools -- chatbots even more than photo generators and voice cloners -- to turbocharge misinformation while further eroding trust in traditional media, researchers say.
Their polarizing content, which could whip up turmoil and sway political beliefs, is meant to lure eyeballs and capture ad revenue.
Many of these websites rely on programmatic advertising for revenue, meaning top brands may unintentionally end up funding them. At the same time, researchers say, governments may find it difficult to clamp down for fear of breaching free speech protections.
"I am particularly concerned about its use by for-profit companies," Linvill said.
"If we don't stop and pay attention, it's just going to further erode the line between reality and fiction that is already so blurry."
Source: AFP