This past Wednesday, a group of researchers employed by Facebook, “in partnership with Michigan State University (MSU),” unveiled a “research method of detecting and attributing deepfakes that relies on reverse engineering from a single AI-generated image to the generative model used to produce it.” Essentially, Facebook has created a method for determining the origins of so-called “deepfake” videos “in real world settings.” This is a breakthrough because the “deepfake image itself is often the only information detectors have to work with.”
As reported by CNBC, “Deepfakes are videos that have been digitally altered in some way with AI.”
“They’ve become increasingly realistic in recent years, making it harder for humans to determine what’s real on the internet, and indeed Facebook, and what’s not,” stated the news source. “The Facebook researchers claim that their AI software — announced on Wednesday — can be trained to establish if a piece of media is a deepfake or not from a still image or a single video frame. Not only that, they say the software can also identify the AI that was used to create the deepfake in the first place, no matter how novel the technique.”
“Deepfakes have become so believable in recent years that it can be difficult to tell them apart from real images. As they become more convincing, it’s important to expand our understanding of deepfakes and where they come from,” Facebook explained in a blog post. “In collaboration with researchers at Michigan State University (MSU), we’ve developed a method of detecting and attributing deepfakes. It relies on reverse engineering, working back from a single AI-generated image to the generative model used to produce it.”
Facebook went on to explain in the blog post that most current research in the scientific community has focused on detecting deepfakes, rather than tracing where they come from.
“Beyond detecting deepfakes, researchers are also able to perform what’s known as image attribution, that is, determining what particular generative model was used to produce a deepfake,” stated the social media giant. “Image attribution can identify a deepfake’s generative model if it was one of a limited number of generative models seen during training. But the vast majority of deepfakes — an infinite number — will have been created by models not seen during training. During image attribution, those deepfakes are flagged as having been produced by unknown models, and nothing more is known about where they came from, or how they were produced.”
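The closed-set attribution Facebook describes can be illustrated with a toy sketch. This is not Facebook's or MSU's actual system; every name here (`fingerprint_a`, `attribute`, the distance threshold) is hypothetical, and real attribution operates on learned neural features rather than raw pixel vectors. The idea it demonstrates is the one in the quote: an image is matched against the "fingerprints" of known generative models, and anything too far from all of them is flagged as coming from an unknown model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two known generative models, each leaving a
# characteristic residual "fingerprint" in the images it produces.
fingerprint_a = rng.normal(0, 1, 64)
fingerprint_b = rng.normal(0, 1, 64)
images_a = [fingerprint_a + rng.normal(0, 0.1, 64) for _ in range(10)]
images_b = [fingerprint_b + rng.normal(0, 0.1, 64) for _ in range(10)]

# Estimate each known model's fingerprint as the mean of its samples.
known = {
    "model_a": np.mean(images_a, axis=0),
    "model_b": np.mean(images_b, axis=0),
}

def attribute(image, known_fingerprints, threshold=2.0):
    """Return the closest known model, or 'unknown' if nothing is close."""
    best_name, best_dist = None, float("inf")
    for name, fp in known_fingerprints.items():
        dist = np.linalg.norm(image - fp)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else "unknown"

# An image from model A lands near model A's fingerprint and is
# attributed to it; an image from a model never seen during training
# is far from every stored fingerprint and gets flagged as "unknown".
print(attribute(fingerprint_a + rng.normal(0, 0.1, 64), known))
print(attribute(rng.normal(0, 1, 64), known))
```

The threshold is the crux: it encodes the closed-set limitation the blog post points out, since any model outside the training set can only ever be labeled "unknown".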
The research group went on to claim that their “reverse engineering method takes image attribution a step further by helping to deduce information about a particular generative model just based on the deepfakes it produces,” and that this could represent “the first time that researchers have been able to identify properties of a model used to create a deepfake without any prior knowledge of the model.”
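The “model parsing” step can likewise be sketched in miniature. Again, this is an illustrative assumption, not the researchers’ method: here a toy generative model is fully described by a single hyperparameter (a noise scale), and "parsing" reduces to regressing that hyperparameter from a statistic of one generated image. The real technique infers richer properties (network architecture, loss type) from learned fingerprints, but the shape of the problem — image in, model properties out — is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "generative model": one hyperparameter (the noise scale sigma)
# fully determines the statistics of the images it produces.
def generate(sigma, size=256):
    return rng.normal(0.0, sigma, size)

# Training set: images produced by many hypothetical models, each
# labeled with the hyperparameter that produced it.
sigmas = np.linspace(0.5, 3.0, 50)
features = np.array([[generate(s).std()] for s in sigmas])

# "Model parsing" here is a least-squares regression from an
# image-level feature (the sample std) back to the hyperparameter.
A = np.hstack([features, np.ones((len(sigmas), 1))])
coef, *_ = np.linalg.lstsq(A, sigmas, rcond=None)

def parse_model(image):
    """Estimate the generating model's hyperparameter from one image."""
    return coef[0] * image.std() + coef[1]

# Given a single image from an unseen model with sigma = 2.0, the
# estimate recovers the hyperparameter approximately.
estimate = parse_model(generate(2.0))
```

Note what the sketch shares with the claim in the article: the model that produced the test image was never seen during training, yet a property of it is still recovered from its output alone.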
“Through this groundbreaking model parsing technique, researchers will now be able to obtain more information about the model used to produce particular deepfakes. Our method will be especially useful in real-world settings where the only information deepfake detectors have at their disposal is often the deepfake itself,” concluded Facebook. “In some cases, researchers may even be able to use it to tell whether certain deepfakes originate from the same model, regardless of differences in their outward appearance or where they show up online.”