February 13, 2024
1 min read

Big Tech’s watermarking: a splash of welcome good news.




Article Summary

TLDR:

This article discusses how Big Tech companies are implementing measures to detect and label AI-generated content. Companies such as Meta, Google, and OpenAI are using visible markers, invisible watermarks, and metadata to identify the origins of the content. The development of an industry-wide standard called C2PA is also underway to add a “nutrition label” to images, video, and audio. While these measures are not foolproof, they aim to create barriers and slow down the creation and sharing of harmful content. Additionally, there are calls for nontechnical fixes and binding regulations to prevent deepfake-related issues.


Big Tech companies, including Meta, Google, and OpenAI, are taking steps to detect and label AI-generated content. Meta plans to label AI-generated images on its platforms by adding visible markers, invisible watermarks, and metadata to the image files. Google has joined the steering committee of C2PA (the Coalition for Content Provenance and Authenticity), an open-source internet protocol that aims to add a “nutrition label” to content by encoding provenance information in its metadata. OpenAI is adding watermarks to images generated with its AI models and including visible labels to indicate that the content is AI-generated.
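To make the metadata approach concrete, here is a minimal sketch in Python using the Pillow library. It writes a provenance note into PNG text chunks as a simplified stand-in for a real C2PA manifest; the key names and values are illustrative assumptions, not part of any standard.

```python
from PIL import Image, PngImagePlugin

# Create a placeholder image standing in for AI-generated output.
image = Image.new("RGB", (64, 64), color="white")

# Attach provenance fields as PNG text chunks. The keys below are
# illustrative only; a real C2PA manifest is a signed, structured
# payload embedded in the file, not plain key-value text.
info = PngImagePlugin.PngInfo()
info.add_text("ai_generated", "true")
info.add_text("generator", "example-model-v1")  # hypothetical model name
image.save("labeled.png", pnginfo=info)

# Any tool that reads the file can recover the label.
reloaded = Image.open("labeled.png")
print(reloaded.text)  # {'ai_generated': 'true', 'generator': 'example-model-v1'}
```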

Although these measures are not foolproof, they are a promising start in addressing the problem of AI-generated content. Metadata-based watermarks can be stripped simply by taking a screenshot, and visible labels can be cropped or edited out. Invisible watermarks, such as Google’s SynthID, offer more robust protection because they are embedded in the content itself and are harder to tamper with. Labeling and detecting AI-generated video, audio, and text remains more challenging still.
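The fragility of metadata labels is easy to demonstrate. The sketch below, continuing the assumed Pillow example above, simulates a screenshot by copying only the pixel data into a new file; the provenance text chunks do not survive the round trip.

```python
from PIL import Image

# Open the labeled image from the previous sketch.
tagged = Image.open("labeled.png")
print("before:", tagged.text)  # the provenance fields are present

# Simulate a screenshot: only pixel values are carried over,
# so the metadata label is silently lost.
pixels_only = Image.new(tagged.mode, tagged.size)
pixels_only.putdata(list(tagged.getdata()))
pixels_only.save("screenshot.png")

print("after:", Image.open("screenshot.png").text)  # {} -- the label is gone
```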

The article emphasizes the importance of creating barriers and friction to slow the creation and sharing of harmful AI-generated content. While determined individuals can still find ways to bypass these protections, measures like visible labels and invisible watermarks can deter casual misuse.

Furthermore, the author suggests nontechnical fixes and binding regulations as additional safeguards against problems like nonconsensual deepfake porn. Major cloud service providers and app stores could ban services that facilitate nonconsensual deepfake content, and all AI-generated content, including output from smaller startups, could be required to carry watermarks.

The author expresses skepticism about voluntary guidelines and rules because they lack real accountability mechanisms for tech companies. Binding regulations, such as the EU’s AI Act and Digital Services Act, along with renewed interest among US lawmakers, offer more hope for addressing deepfake-related challenges. The US Federal Communications Commission has also moved to ban the use of AI-generated voices in robocalls.

Overall, the article acknowledges that while voluntary measures have their limitations, the steps taken by Big Tech companies are a welcome improvement over the status quo. A combination of technical measures, nontechnical fixes, and binding regulations could help mitigate the harm caused by AI-generated content.


