Technology firms promise to safeguard 2024 elections from AI’s influence
A coalition of 20 prominent technology firms vowed to make it harder for malicious actors to exploit AI to sway election outcomes
A group of leading technology firms pledged Friday to restrict the malicious use of deepfakes and other artificial intelligence tools to manipulate or deceive voters in democratic elections.
The AI elections agreement, unveiled at the Munich Security Conference, sets out a range of commitments intended to make it harder for malicious actors to exploit generative AI, large language models and other AI tools to mislead voters ahead of a busy global election year.
The pact's 20 signatories include major technology corporations such as OpenAI, Microsoft, Amazon, Meta, TikTok and the social media platform X.
It also includes significant but lesser-known players in the AI sector, such as Stability AI and ElevenLabs, whose voice-cloning technology was reportedly used to generate a robocall imitating President Biden that targeted New Hampshire voters.
Additional signatories include Adobe and Truepic, firms that develop detection and watermarking technologies.
The accord commits these companies to support the development of tools to better detect, verify and label synthetic or manipulated media.
They have also committed to rigorously assessing AI models to better understand how they could be exploited to disrupt elections, and to designing improved mechanisms for tracking the spread of viral AI-generated content on social media platforms.
The signatories have pledged to label AI-generated media whenever possible while acknowledging legitimate uses such as satire.
This agreement represents the most extensive initiative to date by global tech companies to tackle the misuse of AI in electoral manipulation, following a series of incidents in which deepfakes were used in influence operations.

Sen. Mark Warner, D-Va., lauded the agreement as a step forward from previous elections, when social media companies were largely oblivious to how foreign entities were abusing their platforms.
As AI tools grow more prevalent, policymakers are urging companies to speed up the rollout of measures to identify AI-generated content. The agreement encourages companies to incorporate “provenance signals” that trace the origin of a piece of content, where possible.
One such standard has been developed by the Coalition for Content Provenance and Authenticity, a consortium working to establish open technical standards for digital media. The C2PA standard attaches cryptographically signed metadata to a file, creating a tamper-evident record of a media item's origin and edit history.
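The actual C2PA Content Credentials format is considerably more elaborate, but the core mechanism can be sketched in a few lines: hash the media, attach a claim about its origin, and sign both so that any later tampering is detectable. The Python sketch below, built on the widely used `cryptography` package, is an illustration of that idea only; the manifest fields and function names are hypothetical, not the real C2PA structure.

```python
# Illustrative sketch of signed provenance metadata (not the real C2PA format).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def create_manifest(media: bytes, claim: str, key: Ed25519PrivateKey) -> dict:
    """Hash the media, record a provenance claim, and sign both."""
    payload = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "claim": claim,  # e.g. "captured on device X" or "generated by model Y"
    }
    signature = key.sign(json.dumps(payload, sort_keys=True).encode())
    return {"payload": payload, "signature": signature.hex()}


def verify_manifest(media: bytes, manifest: dict, public_key) -> bool:
    """Check the signature, then check the media still matches its recorded hash."""
    payload = manifest["payload"]
    try:
        public_key.verify(
            bytes.fromhex(manifest["signature"]),
            json.dumps(payload, sort_keys=True).encode(),
        )
    except InvalidSignature:
        return False  # manifest was forged or altered
    return payload["content_sha256"] == hashlib.sha256(media).hexdigest()


# Any edit to the media bytes invalidates the recorded hash.
key = Ed25519PrivateKey.generate()
media = b"...raw image bytes..."
manifest = create_manifest(media, "generated by an AI image model", key)
assert verify_manifest(media, manifest, key.public_key())
assert not verify_manifest(media + b"edited", manifest, key.public_key())
```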
Despite these efforts, the agreement's content provenance provisions have notable limitations: companies are not required to implement such standards.
Experts caution that malicious actors can turn to open-source models whose outputs carry no provenance signals. Detection technologies, meanwhile, can offer only a probability estimate that a piece of media has been manipulated, and bad actors can exploit the same machine-learning techniques to create more convincing forgeries.
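That probabilistic output matters in practice: any labeling pipeline built on a detector has to choose a decision threshold and accept some rate of false positives and false negatives. A minimal sketch of that trade-off, where `score_media` is a hypothetical stand-in for a real detection model:

```python
import random


def score_media(media: bytes) -> float:
    """Hypothetical stand-in for a detector; returns P(manipulated) in [0, 1)."""
    return random.random()  # placeholder score, for illustration only


def label_decision(media: bytes, threshold: float = 0.9) -> str:
    """Map a probability estimate to an action; thresholds are policy choices."""
    p = score_media(media)
    if p >= threshold:
        return "label as likely AI-generated"  # high bar limits false positives
    if p >= 0.5:
        return "queue for human review"  # ambiguous middle band
    return "no label"  # a low score is not proof of authenticity
```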
While efforts to mitigate election-related AI risks are still in their early stages, the field is advancing rapidly.
OpenAI recently unveiled Sora, a new model that generates high-quality video. Though the model is not publicly available, its debut raises questions about the societal benefits of such tools at a time when it remains unclear how to secure elections against advancing AI capabilities.
Tech companies have also faced criticism over their efforts to combat technology-enabled disinformation, with some critics framing those efforts as attempts to stifle political speech.
Microsoft President Brad Smith emphasized the distinction between free speech rights and the misuse of AI to propagate falsified political content.