You can’t trust everything you see online. That statement is probably more true now than ever: artificial intelligence is increasingly used to create videos that aren’t what they seem. Researchers at Rochester Institute of Technology’s Global Cybersecurity Institute are developing a tool to help journalists spot “deepfakes.”

Professors at RIT point to a video from a few years back, of actor and comedian Bill Hader’s appearance on a late-night talk show, as a perfect example of the power of artificial intelligence, or AI. As Hader impersonates actor Tom Cruise, face-swapping video manipulation seamlessly transforms Hader’s face into Cruise’s.

Such videos can be fun. But RIT researchers are also looking at the dangerous side of artificial intelligence technology: deepfakes. These include one depicting Russian leader Vladimir Putin claiming Russian troops would leave Ukraine.

“Well, we know none of that’s true,” said Dr. Matthew Wright, RIT chair of computing security.

“AI can often be very confidently wrong,” said researcher and Ph.D. candidate John Sohrawardi. “Like 97% fake, but it's actually a real video.”
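
To make that concrete: a detector typically turns a network’s raw outputs (logits) into a probability with a softmax, and that probability can be high even when the model is wrong. The sketch below is purely illustrative; the logits, the two-class real/fake setup, and the temperature value are assumptions, not details of the DeFake system. Temperature scaling is one standard calibration technique for softening overconfident scores.

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a detector's final layer for one video:
# index 0 = "real", index 1 = "fake". These numbers are illustrative only.
logits = [0.5, 4.0]

probs = softmax(logits)
print(f"P(fake) = {probs[1]:.2%}")  # ~97% "fake" -- yet it could be real footage

# A high score is not the same as a high chance of being right.
# Temperature scaling (Guo et al., 2017) divides logits by T > 1
# to soften overconfident outputs before reporting them.
T = 3.0
calibrated = softmax([x / T for x in logits])
print(f"Calibrated P(fake) = {calibrated[1]:.2%}")  # noticeably less certain
```

This is why the researchers stress treating such scores as signals for a human reviewer rather than verdicts.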

Sohrawardi and Wright are working on deepfake detection. They use a manipulated video of former President Barack Obama, made by actor and director Jordan Peele, as one example of how realistic a fake can be.

“If it's just the president saying something, we have a sense that these things can be faked,” said Wright. “But when it's a controversial or confidential source, then maybe it's more believable and it can be potentially more dangerous.”

Researchers say their “DeFake” project, still in the works, is a tool to help journalists spot deepfakes, especially when traditional fact-checking methods fall short.

“This is where we come in, and hopefully we are building a tool that is somewhat reliable,” said Sohrawardi. “But at the same time, we don't want them to rely on it 100%.”

DeFake uses deep learning to analyze video for signs of manipulation. Spotting a fake is just as important as confirming that authentic footage is real.
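
The article does not describe DeFake’s architecture, so the following is a minimal sketch of what frame-level deepfake scoring can look like in general, assuming a generic CNN classifier. The video path, the ResNet-18 backbone, the frame-sampling interval, and the two-way real/fake head are all assumptions for illustration, not the team’s actual design.

```python
import cv2                      # OpenCV, for reading video frames
import torch
import torch.nn as nn
from torchvision import models

# Stand-in detector: a ResNet-18 with a 2-way head (real vs. fake).
# weights=None means untrained; a real system would be trained on
# known manipulated footage before its scores mean anything.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

def score_video(path: str, every_n: int = 10) -> float:
    """Return the mean 'fake' probability across sampled frames."""
    cap = cv2.VideoCapture(path)
    fake_probs, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:            # sample every Nth frame
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            rgb = cv2.resize(rgb, (224, 224))
            x = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
            with torch.no_grad():
                logits = model(x.unsqueeze(0))
                fake_probs.append(torch.softmax(logits, dim=1)[0, 1].item())
        idx += 1
    cap.release()
    return sum(fake_probs) / len(fake_probs) if fake_probs else 0.0

# print(score_video("clip.mp4"))  # e.g. 0.82 -> flag for human review
```

A production detector would typically also crop to detected faces before classifying and aggregate per-frame scores more carefully than a plain average; both are deliberate simplifications here.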

“The other problem that we run into is that as people do become more aware of the technology, it becomes an opportunity for what we call the ‘liar's dividend,’” said Wright. “Someone could really say or do something they weren't supposed to, get caught on tape, and then say, ‘No, that isn't real, that's a fake.’”

Researchers at RIT say deepfakes are scary on a personal level as well; some who create fake videos use them for blackmail. While the DeFake project remains in development, the researchers say they’re also working with the intelligence community to help law enforcement detect media manipulation.

Staying ahead of fast-moving AI technology is the other big challenge. It’s tricky, and the consequences of falling behind are potentially far more harmful.

“It’s certainly a danger that we see,” said Wright.