Meta to Label AI-Generated Media in Response to Deepfake Concerns

Meta, the parent company of Facebook and Instagram, has announced that it will begin labeling AI-generated media in May 2024. The move is part of an effort to reassure users and governments worried about the risks posed by deepfakes.

Manipulated Media: Freedom of Speech vs Disinformation

Meta previously removed some manipulated images and audio even when they did not break its other rules. The company has now decided to shift toward labeling such content and providing context instead. This move aims to balance freedom of speech against the growing threat of disinformation spread through deepfakes.

This decision comes in response to criticism from Meta’s independent Oversight Board. The board highlighted the urgent need to overhaul Meta’s approach to manipulated media, especially given the rise of convincing deepfakes powered by artificial intelligence.

The Threat of AI in Elections

Deepfake technology has sparked fears of widespread misuse, particularly during election years. AI-powered disinformation campaigns could mislead voters not only in the United States but around the globe.

Transparency Through Labeling

Meta’s new “Made with AI” labels will identify content created or altered with AI, including video, audio, and images. A more prominent label will be applied to content that poses a particularly high risk of misleading the public.

Monika Bickert, Meta’s Vice President of Content Policy, stated that providing transparency and additional context is now a better approach to handling such content. The labels will cover a wider range of content, not just the manipulated content that the Oversight Board recommended for labeling.

Cooperation Among Tech Giants

Meta, Google, and OpenAI had earlier agreed to cooperate on tackling manipulated content meant to deceive voters, settling on a common watermarking standard to invisibly tag images produced by their AI applications.

However, some industry experts have expressed doubts about the effectiveness of this approach. Nicolas Gaudemet, AI Director at Onepoint, pointed out that there are bound to be gaps in this system. For instance, some open-source software does not use the type of watermarking adopted by the major AI players.
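For readers curious how an invisible tag of this kind can work in principle, below is a minimal Python sketch of least-significant-bit watermarking. It is purely illustrative: the article does not name the standard the companies adopted, and the TAG, embed_tag, and read_tag names here are hypothetical. The demo also makes Gaudemet’s point concrete: an image produced by a tool that skips the embedding step carries no tag for the check to find.

```python
import numpy as np

# Hypothetical 16-bit marker; real provenance standards embed far richer,
# cryptographically signed metadata than this toy example.
TAG = np.unpackbits(np.frombuffer(b"AI", dtype=np.uint8))

def embed_tag(pixels: np.ndarray) -> np.ndarray:
    """Hide TAG in the least significant bits of the first TAG.size pixels."""
    out = pixels.copy().ravel()
    out[: TAG.size] = (out[: TAG.size] & 0xFE) | TAG  # overwrite each LSB
    return out.reshape(pixels.shape)

def read_tag(pixels: np.ndarray) -> bool:
    """Report whether the marker is present in the leading LSBs."""
    return bool(np.array_equal(pixels.ravel()[: TAG.size] & 1, TAG))

# Demo: only a generator that runs the embedding step produces output the
# check can identify; anything else slips through, as Gaudemet warns.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(read_tag(embed_tag(image)))  # True: tagged output is detected
print(read_tag(image))             # False: untagged output goes unnoticed
```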

The Impact of Deepfakes

Several recent incidents have underscored the potential harm of deepfakes. One such incident involved a manipulated video of US President Joe Biden, which falsely portrayed him as acting inappropriately towards his granddaughter. Another incident involved a robocall impersonation of Biden urging voters not to cast ballots in the New Hampshire primary.

In Pakistan, the party of former prime minister Imran Khan used AI to generate speeches by its jailed leader. These examples highlight the urgent need for tech companies to address the risks posed by AI-generated media.

Meta’s new policy aims to take a more balanced approach to these risks by allowing AI-manipulated content to remain on the platform unless it violates other rules, such as those prohibiting hate speech or voter interference. Meta will stop removing manipulated media under the old policy in July 2024.