Expert Trainer: Dr. R. David Edelman

Dr. R. David Edelman is the IPRI Distinguished Fellow at the Massachusetts Institute of Technology (MIT) and a Nonresident Senior Fellow at the Brookings Institution, where the programs he leads focus on cybersecurity, geopolitics, and AI. He teaches in MIT's Department of Electrical Engineering & Computer Science, is based at the Computer Science & AI Lab, and has worked at the intersection of AI and public policy for over a decade, including a career in the federal government, under Presidents of both political parties, at the State Department and the White House.


KEY IDEAS

We’re currently in a perilous moment: the sophistication and accessibility of AI technology outpace the tools we have to respond.

Access to generative artificial intelligence tools such as ChatGPT and DALL-E is widespread, and technologies that were once research tools are now in the hands of individuals, organizations, and foreign actors. At the same time, our ability to detect AI-generated content remains limited.

AI can be used to create convincing text, video, and audio content, including mimicry of politicians and public figures.

Fake audio and video content is relatively easy to make, especially when the clips are short. See the video for examples of AI’s impersonation capabilities. Both pre-recorded and live audio and video can be fabricated, and AI can reproduce details like breathing and ambient sound that make these clips feel authentic.

Effective AI-generated content uses plausible scenarios and creative scene-setting.

When bad actors want to sow distrust in elections, they invent creative but plausible scenarios that align with people’s preconceived notions; this is the human part of AI-generated disinformation. Effective false content draws on the established messages and themes discussed in the previous module.

How should we respond to content we suspect is AI-generated?

As these technologies evolve, it is more important than ever to pause and evaluate the content we see online. If you see something that seems plausible but extreme, give it a second look. Just as we know that text and static images can be doctored, we must now apply the same critical lens to audio and video. Give yourself permission to be fooled: this technology is incredibly sophisticated! As with all disinformation, report misleading content through appropriate channels such as ReportDisinfo.org. Finally, if you need to share suspect content, for example to ask whether it is genuine, do so privately rather than amplifying it publicly.