Eye on AI: AI-Generated Misinformation


Fake news websites have appeared online that use chatbots to post hundreds of articles every day. The aim of many of these sites is to become a "content farm" – a low-quality website that pumps out AI-generated articles to attract clicks and generate ad revenue. The content of the articles, however, is worrisome. For example, how should clearly false stories produced by generative AI platforms (one headline read: ‘Biden dead. Harris acting President’) be addressed? The spread of misinformation is a valid concern. Until the advent of generative AI, the law had struck a balance between the right to free speech and limits on the manner and content of public discussion. However, as the use of generative AI systems accelerates and deepfakes proliferate, will the countries preparing to regulate AI apply new approaches to deal with such fake news?

Perhaps not surprisingly, the best way to combat AI-generated fake news may be to fight fire with fire. AI systems are being developed to detect fake news. Some have been deployed, and more are expected to be released to the public soon. In the absence of regulation, this may be the best defense against false or misleading AI-generated articles. Such systems can analyze each element of an article – the headline, publication name, author name and other details that could indicate false content – to determine whether it was written by an AI program. However, one can readily discern that an "arms race" could take place, with generative AI systems designed to prepare ever more convincing fake articles.
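As a rough illustration of the kind of signals such detection systems weigh, the sketch below scores an article on a few simple red flags (missing byline, missing outlet name, sensational headline, uniformly short sentences). Every signal and threshold here is invented for illustration; real detectors rely on trained statistical models, not hand-written rules.

```python
# Toy heuristic "AI-text detector" sketch. All signals and thresholds are
# hypothetical illustrations, not a description of any deployed system.

def suspicion_score(article: dict) -> int:
    """Count simple red flags in an article's metadata and body."""
    score = 0
    if not article.get("author"):         # missing byline
        score += 1
    if not article.get("publication"):    # no named outlet
        score += 1
    body = article.get("body", "")
    sentences = [s for s in body.split(".") if s.strip()]
    if sentences:
        avg_words = sum(len(s.split()) for s in sentences) / len(sentences)
        if avg_words < 8:                 # unusually short, uniform sentences
            score += 1
    headline = article.get("headline", "")
    if headline.isupper() or headline.endswith("!"):  # sensational headline
        score += 1
    return score

def looks_machine_generated(article: dict, threshold: int = 2) -> bool:
    """Flag an article once it accumulates enough red flags."""
    return suspicion_score(article) >= threshold
```

An article with no author, no publication name and an all-caps headline would be flagged, while a conventionally bylined story would pass; a real system would replace these crude rules with learned features, which is precisely why the "arms race" dynamic described above arises.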

On the other end of the spectrum, China’s recently promulgated AI regulations require that "content generated through the use of generative AI shall be true and accurate" and shall not contain "false information". While at first glance this approach may solve the problem of fake news in China, enforcement may be difficult in other jurisdictions should they choose to take a similar approach.

The EU, US and Japan are also preparing legislation to regulate AI. Their solutions will likely seek a balance between competing objectives, yet it will be a tall task to find the right middle ground between leaving AI detection to private entities and potentially curtailing free speech rights.

White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.

This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.

© 2023 White & Case LLP