Meta’s president of global affairs Nick Clegg: AI-generated election misinformation is not yet happening at a systemic scale


As voters head to the polls this year in more than 50 countries, experts have raised the alarm over AI-generated political disinformation and the prospect that malicious actors will use generative AI and social media to interfere with elections. Meta has previously faced criticism over its content moderation policies around past elections, for example when it failed to prevent the January 6 rioters from organizing on its platforms. 

Clegg defended the company’s efforts to stop violent groups from organizing, but he also stressed the difficulty of keeping up. “It’s a highly adversarial space. You play Whack-a-Mole, candidly. You remove one group, they rename themselves, rebrand themselves, and so on,” he said. 

Clegg argued that compared with 2016, the company is now “completely different” when it comes to moderating election content. Since then, it has removed over 200 “networks of coordinated inauthentic behavior,” he said. The company now relies on fact-checkers and AI technology to identify unwanted groups on its platforms. 

Earlier this year, Meta announced it would label AI-generated images on Facebook, Instagram, and Threads. Meta has started adding visible markers to such images, as well as invisible watermarks and metadata in the image file. The watermarks will be added to images created using Meta’s generative AI systems or to images that carry invisible industry-standard markers. The company says its measures are in line with best practices laid out by the Partnership on AI, an AI research nonprofit.
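
Meta hasn’t published the details of its mechanism, but the industry-standard signals such efforts point to include IPTC metadata fields and C2PA-style provenance manifests. As a rough illustration of the metadata part only, here is a minimal Python sketch that writes and reads a provenance label through a PNG text chunk; the file names and the “GeneratedBy” field are hypothetical, and this is a stand-in for, not a description of, Meta’s actual labeling pipeline.

# Illustrative only: embed and read a provenance label via a PNG text chunk.
# This stands in for the industry-standard mechanisms (IPTC "Digital Source Type",
# C2PA manifests) referenced above; the field names and paths here are made up.
from PIL import Image, PngImagePlugin

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Re-save an image with a text chunk marking it as AI-generated."""
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("DigitalSourceType", "trainedAlgorithmicMedia")  # IPTC-style value
    meta.add_text("GeneratedBy", "example-image-generator")        # hypothetical field
    img.save(dst_path, pnginfo=meta)

def read_label(path: str) -> dict:
    """Return any text-chunk metadata found in the image."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}) or {})

if __name__ == "__main__":
    label_as_ai_generated("generated.png", "generated_labeled.png")
    print(read_label("generated_labeled.png"))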

But at the same time, Clegg admitted that tools to detect AI-generated content are still imperfect and immature. Watermarks in AI systems aren’t adopted industry-wide, and they’re easy to tamper with. They’re also hard to implement robustly in AI-generated text, audio, and video. 
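
To make that fragility concrete with the toy label from the sketch above (again an illustration, not Meta’s system): simply re-encoding the file silently drops metadata of this kind. Pixel-level invisible watermarks are designed to survive that sort of round-trip, but as noted they aren’t adopted industry-wide either.

# Illustrative only: metadata-based labels disappear on a plain re-encode,
# one reason provenance signals are easy to strip (continues the sketch above).
from PIL import Image

img = Image.open("generated_labeled.png")   # image labeled in the previous sketch
img.save("reencoded.png")                   # no pnginfo passed: text chunks are dropped

reopened = Image.open("reencoded.png")
print(getattr(reopened, "text", {}))        # {} - the provenance label is gone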

Ultimately that shouldn’t matter, Clegg said, because Meta’s systems should be able to catch and detect mis- and disinformation regardless of its origins. 

“AI is a sword and a shield in this,” he said.

Clegg also defended the company’s decision to allow ads claiming that the 2020 US election was stolen, noting that these kinds of claims are common throughout the world and saying it’s “not possible” for Meta to relitigate past elections. Just this month, eight state secretaries of state wrote a letter to Meta CEO Mark Zuckerberg arguing that the ads could still be dangerous, and that they have the potential to further threaten public trust in elections and the safety of individual election workers.

You can watch the full interview with Nick Clegg and MIT Technology Review executive editor Amy Nordrum below.
