
U.S. lawmakers call for deepfake countermeasures

Members of Congress today sent a letter to Director of National Intelligence Dan Coats asking intelligence agencies to report on the potential impact of deepfakes on democracy and national security. Members of Congress also raised concerns about manipulative deepfake videos in a Congressional hearing last week with Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey.

The letter was signed by Rep. Stephanie Murphy (D-FL), Rep. Carlos Curbelo (R-FL), and Rep. Adam Schiff (D-CA), the ranking member of the House Permanent Select Committee on Intelligence, which oversees the Office of the Director of National Intelligence, the NSA, and the CIA.

“By blurring the line between fact and fiction, deepfake technology could undermine public trust in recorded images and videos as objective depictions of reality,” the three members of Congress said in the letter. “As deepfake technology becomes more advanced and more accessible, it could pose a threat to United States public discourse and national security, with broad and concerning implications for offensive active measures campaigns targeting the United States.”

Deepfakes are videos made using generative adversarial networks (GANs) to do things like place the face of one person onto another person’s body. Forged videos, images, or audio could be used to target individuals for blackmail or for other nefarious purposes, the letter said.
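For readers unfamiliar with the technique, the sketch below shows the adversarial training loop that underpins GAN-based generators, using PyTorch on toy vector data rather than real face images. The network sizes, data, and hyperparameters here are illustrative assumptions, not details of any actual deepfake tool.

```python
# Minimal sketch of GAN training: a generator learns to produce samples that a
# discriminator cannot distinguish from "real" data. Toy data only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator maps random noise to a fake sample; discriminator scores realness.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, data_dim)  # stand-in for real face crops

for step in range(200):
    # Discriminator step: label real samples 1 and generated samples 0.
    noise = torch.randn(64, latent_dim)
    fake = G(noise).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator score fakes as real.
    noise = torch.randn(64, latent_dim)
    g_loss = loss_fn(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```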

Schiff, Murphy, and Curbelo requested that intelligence agencies report to Congress no later than December 14 to provide recommendations on how to counter the use of malicious tech, like deepfakes, and to delineate which agency should lead content-monitoring for deepfakes. They also asked that the report be unclassified and that it detail any suspected use of deepfakes by foreign governments or individuals that may have already taken place.

While the Trump administration has drawn criticism for its less-than-robust strategy to protect the upcoming November election from foreign adversaries, Coats has said “warning lights are blinking red” in reference to Russian cyberattacks or propaganda designed to interfere with the United States election process.

More than a dozen intelligence agencies have overwhelmingly agreed that an operation backed by the Russian government attempted to meddle in the 2016 election to both elect Donald Trump and undermine trust in the democratic process.

The term deepfake was unfamiliar to most people until late 2017, when Reddit communities sharing deepfake software and videos created with it grew in popularity. Deepfakes were initially used primarily to put the faces of famous actresses like Scarlett Johansson and Jennifer Hudson on the bodies of porn stars, but also to make malicious videos of classmates or, in a more lighthearted use case, to try to put Nicolas Cage in every movie possible.

Following the proliferation of deepfake videos in the past year and fears about their potential impact, a number of public and private efforts are underway to create AI capable of identifying malicious videos.

In July, members of DARPA’s Media Forensics program undertook a number of test exercises to automatically detect deepfakes and other manipulated images or videos, in part by looking for cues like unnatural blinking, a DARPA spokesperson told VentureBeat in an email.
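As a rough illustration of how a blink-based cue can be checked (a generic heuristic, not DARPA’s actual pipeline), the sketch below computes the eye aspect ratio from per-frame eye landmarks and flags clips with an implausibly low blink rate. Landmark extraction upstream (e.g. with a facial landmark detector) and all thresholds are assumptions.

```python
# Heuristic blink-rate check: humans blink regularly, while some generated
# face videos blink rarely. Eye landmarks per frame are assumed as input.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def blink_rate(ear_per_frame, fps: float, closed_thresh: float = 0.2) -> float:
    """Blinks per minute, counting open-to-closed transitions of the EAR signal."""
    closed = np.asarray(ear_per_frame) < closed_thresh
    blinks = np.count_nonzero(closed[1:] & ~closed[:-1])
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

def looks_suspicious(ear_per_frame, fps: float, min_rate: float = 5.0) -> bool:
    """Flag clips whose blink rate falls well below a typical human rate."""
    return blink_rate(ear_per_frame, fps) < min_rate
```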

Startups like Truepic, which raised an $8 million funding round in July, are also attempting to detect deepfakes as a service.

In other news at the intersection of U.S. national defense and AI, the Pentagon pledged $2 billion in AI systems investments last week, and the Joint AI Center, led by the Office of the Secretary of Defense, is scheduled to open later this month.
