California's Urgent Efforts to Combat Deepfakes Ahead of the Election
In the wake of a viral AI-generated video mimicking Vice President Kamala Harris, California lawmakers have accelerated their efforts to regulate deepfakes and protect the integrity of the upcoming election. This move comes as part of a broader initiative to oversee the artificial intelligence sector, safeguard workers, and establish comprehensive safety regulations for AI models.
Addressing Deepfakes in Elections
The recent incident involving a deepfake video of Kamala Harris has highlighted the urgent need for legislative action. Here are some key measures California lawmakers have approved to combat deepfakes:
- Prohibition of Election-Related Deepfakes: Legislation has been passed to prohibit the use of deepfakes in elections, including a mandate for major social media platforms to remove misleading content beginning 120 days before an election and continuing for 60 days afterward.
- Disclosure Requirements: Campaigns will be required to publicly disclose if they are using AI-altered materials in their advertisements.
- Criminalization of Child Exploitation: Two proposals aim to criminalize the use of AI tools to generate images and videos depicting child sexual exploitation, addressing a current legal loophole that prevents prosecution unless the material features a real person.
- AI Detection Tools: Technology companies and social media platforms will be required to offer users tools to detect AI-generated content.
Assembly Bill 2839: A Comprehensive Approach
Assembly Bill 2839 is a pivotal piece of legislation designed to tackle the spread of deepfakes during elections. Here are its key provisions:
- Prohibition of Deceptive Campaign Ads: The bill prohibits the distribution of deceptive campaign ads or "election communication" within 120 days of an election, aiming to protect candidates' reputations and electoral prospects.
- Court Orders and Damages: Candidates, election committees, or elections officials can seek court orders to remove deepfakes and sue for damages against those who distribute or republish deceptive material.
- Post-Election Content: The legislation also applies to deceptive media posted in the 60 days after an election, including content that falsely portrays voting machines, ballots, or other election-related property.
- Exceptions for Satire and Parody: The bill includes exceptions for satire and parody that are clearly labeled as such, as well as for broadcast stations that inform viewers about the accuracy of the content.
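The statutory window described above amounts to a simple date-range rule. As a purely hypothetical illustration (not legal advice, and the function and dates below are assumptions, not part of the bill text), the 120-days-before through 60-days-after window could be checked like this:

```python
from datetime import date, timedelta

def in_restricted_window(published: date, election_day: date) -> bool:
    """Hypothetical sketch: True if `published` falls inside the window
    assumed here to run 120 days before through 60 days after election day."""
    start = election_day - timedelta(days=120)
    end = election_day + timedelta(days=60)
    return start <= published <= end

election = date(2024, 11, 5)
print(in_restricted_window(date(2024, 10, 1), election))  # inside the window
print(in_restricted_window(date(2024, 6, 1), election))   # well before it
```

The actual statute turns on far more than dates (content, intent, labeling), so this sketch captures only the timing element.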
Regulatory Challenges and Industry Opposition
The passage of these bills has not been without opposition. Tech industry groups, such as NetChoice, argue that the laws could chill or block constitutionally protected free speech.
- Industry Policies: Online platforms have varying policies regarding manipulated media and political ads. For instance, TikTok does not allow political ads and may remove labeled AI-generated content if it depicts public figures, while Truth Social does not address manipulated media in its rules.
- Federal and State Actions: Federal regulators have also moved to crack down on AI-generated content. The Federal Communications Commission, for example, proposed a $6-million fine against a Democratic political consultant who used AI to impersonate President Biden's voice in robocalls.
Broader Regulatory Efforts
California's efforts to combat deepfakes are part of a broader regulatory push:
- State and Federal Legislation: More than two dozen states are working on or have enacted legislation to regulate deepfakes, reflecting a national concern over the misuse of AI in elections.
- Collaboration with Advocacy Groups: Lawmakers have worked with advocacy groups like the California Initiative for Technology and Democracy to address political deepfakes, highlighting the collaborative approach needed to tackle this issue.
Establishing Safety Protocols for AI Models
Beyond addressing deepfakes, California is also focusing on establishing comprehensive safety regulations for large AI models:
- Data Disclosure: Developers will be required to disclose the data used to train their AI systems, increasing transparency into how large models are built and helping regulators anticipate potential harms.
- Safety Protocols: State agencies will need to implement safety protocols to mitigate risks and prevent algorithmic bias before entering into contracts involving AI systems for decision-making.
Worker Protections
The legislation also includes measures to protect workers from being replaced by AI:
- Protection for Voice Actors and Narrators: A measure has been approved to shield workers, including voice actors and audiobook narrators, from being supplanted by AI-generated replicas, aligning with the terms of the SAG-AFTRA contract.
- Prohibition on AI in Call Centers: State and local agencies will be prohibited from using AI to replace employees in call centers.
Educational Initiatives
To adapt to the rapid advancements in AI, California is also focusing on enhancing AI literacy:
- AI Competencies in Curricula: A proposal calls for a state working group to explore the inclusion of AI competencies in the curricula for mathematics, science, history, and social studies.
- Guidelines for AI in Education: Another initiative seeks to develop guidelines for the application of AI within educational settings.
These efforts underscore California's commitment to addressing the multifaceted challenges posed by AI, ensuring both the integrity of the electoral process and the protection of workers and citizens in the face of rapidly evolving technology.