In recent years, deepfake technology has become increasingly sophisticated and accessible, raising concerns about its potential for malicious misuse. In response, Intel has developed a cutting-edge deepfake detector called FakeCatcher, which uses machine learning to identify and flag synthetic media. To gain a deeper understanding of this innovative technology, The Egyptian Gazette recently had the opportunity to speak with Ilke Demir of Intel, who played a key role in the development of FakeCatcher.
In this exclusive interview, Demir provides insights into the capabilities and limitations of the tool, as well as the broader implications of deepfake technology for society and the media industry.
- In the age of Artificial Intelligence, innovative technologies constantly emerge, and some are misused; the latest example is deepfake videos. As a technology expert, can you explain how such content is produced?
- Deepfakes rely on software to create computer-generated videos. This can be put to good use in protecting people's identities in different contexts, such as anonymized healthcare activities, hiding the identifiable biological information of minors, and privacy-enhanced social networks that use deepfakes to block unauthorized access to users' faces. However, deepfakes can also be used to create manipulated content that mimics real people's facial and voice patterns, causing harm through illegal activity, identity theft, forgery and propaganda. In recent years, many videos have gone viral with the aim of spreading false information and deceiving the public. Such videos are considered a growing threat because they are hard to detect, especially in real time, which makes it harder to curb their damaging consequences.
- Does Intel support the good use of deepfakes?
- The worst side effect of deepfakes is impersonation, so we developed a multi-source image synthesis approach to eliminate that risk. When we mix regions from different source images to create one image, the generated deepfake is a completely new and innocent face.
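To make the mixing idea concrete, here is a minimal sketch, not Intel's pipeline: the region boxes, file names, and hard pasting are all illustrative assumptions, and a production system would align faces with landmarks and blend the seams.

```python
# Illustrative sketch only, not Intel's multi-source synthesis pipeline.
# Assumes the input face images are already aligned to the same 256x256 crop.
from PIL import Image

# Hypothetical region boxes (left, upper, right, lower) on an aligned face.
REGIONS = {
    "eyes":  (60, 80, 196, 130),
    "nose":  (100, 130, 156, 180),
    "mouth": (90, 180, 166, 220),
}

def mix_sources(base_path, donors):
    """Paste each named facial region from its donor image onto the base face."""
    out = Image.open(base_path).convert("RGB")
    for region, donor_path in donors.items():
        box = REGIONS[region]
        patch = Image.open(donor_path).convert("RGB").crop(box)
        out.paste(patch, box)  # naive hard paste; real systems blend seams
    return out

# Example: eyes from one synthetic face, mouth from another.
# mixed = mix_sources("face_a.png", {"eyes": "face_b.png", "mouth": "face_c.png"})
```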
- In addition to multi-source synthesis, we have also developed novel techniques for creating 3D digital humans, which are already being used in AR/VR productions.
- How is Intel contributing to safeguarding against the fraudulent use of deepfakes?
- To restore trust in media and empower users with the technology to distinguish between real and fake content, Intel has launched the world's first real-time deepfake detector, which returns results in milliseconds. FakeCatcher, the core of the system, can detect fake videos with a 96% accuracy rate.
- How does FakeCatcher work and what technologies ensure accurate detection?
- FakeCatcher is a deepfake detection algorithm that uses heartbeats as authenticity cues in real videos. Intel's real-time platform uses Intel hardware and software to analyze what the human eye cannot see: subtle "blood flow" in the pixels of a video. These blood-flow signals are collected from all over the individual's face and then translated by algorithms into unique spatiotemporal maps (PPG maps), which are run through a neural network classifier. The result is instant detection of whether the video in question is real or fake.
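As a rough illustration of the PPG-map idea Demir describes, here is a sketch under stated assumptions, not Intel's implementation: it averages the green channel, a common proxy in remote photoplethysmography, over a grid of face cells in each frame, stacks the per-cell traces into a 2D map, and hands that map to a stand-in classifier.

```python
# Minimal sketch of a PPG-map pipeline, not Intel's code.
# Assumes `face_frames` is a (T, H, W, 3) array of aligned RGB face crops.
import numpy as np

def ppg_map(face_frames: np.ndarray, grid: int = 8) -> np.ndarray:
    """Average the green channel (a common rPPG proxy for blood flow) over a
    grid of face cells per frame; stack the traces into a 2D map."""
    T, H, W, _ = face_frames.shape
    ch, cw = H // grid, W // grid
    cells = []
    for r in range(grid):
        for c in range(grid):
            patch = face_frames[:, r*ch:(r+1)*ch, c*cw:(c+1)*cw, 1]
            cells.append(patch.mean(axis=(1, 2)))   # one trace per cell
    m = np.stack(cells)                             # (grid*grid, T)
    # Normalize each cell's trace so the classifier sees signal shape only.
    return (m - m.mean(1, keepdims=True)) / (m.std(1, keepdims=True) + 1e-8)

def is_fake(face_frames: np.ndarray, classifier) -> bool:
    """`classifier` stands in for the trained neural network mentioned above."""
    return bool(classifier.predict(ppg_map(face_frames)[None, ...])[0])
```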
- How is FakeCatcher different from other deepfake detection platforms?
- Intel’s platform is the first real-time deepfake detection platform, delivering results in milliseconds (30 frames per second after an initial segment of 64 frames) as opposed to minutes or hours. When compared with seven of the world’s leading deepfake detectors built on complex neural architectures, FakeCatcher comes out more than 8% ahead of the next-best performing algorithm.
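Those numbers suggest a sliding-window design: fill an initial 64-frame buffer, then emit a verdict for every subsequent frame. The following sketch is our reading of the quoted figures, not Intel's code.

```python
# Assumed streaming design inferred from the quoted numbers, not Intel's code.
from collections import deque

WINDOW = 64  # the "initial segment of 64 frames" quoted above

class StreamingDetector:
    def __init__(self, score_window):
        self.buf = deque(maxlen=WINDOW)   # oldest frames fall off automatically
        self.score_window = score_window  # e.g. is_fake() from the sketch above

    def push(self, frame):
        """Return a verdict once the first segment is full; None while filling."""
        self.buf.append(frame)
        if len(self.buf) < WINDOW:
            return None
        return self.score_window(list(self.buf))
```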
- How will FakeCatcher benefit different users?
- There are several potential use cases for FakeCatcher. Social media platforms could leverage the technology to prevent users from posting harmful deepfake videos; similarly, news organizations could employ the detector to avoid inadvertently publicizing and spreading manipulated videos. And nonprofit organizations could employ the platform to democratize deepfake detection for everyone.
- When can we expect to see enterprises adopting FakeCatcher? How will it integrate into their workflows?
- FakeCatcher is ready for customers now; we have interest in productization from potential users such as social media platforms, global news organizations, and nonprofit organizations, and we will be deploying the real-time deepfake detection platform into our customers’ workflows. Those organizations may choose to open the platform to their consumers or keep it internal.
- The FakeCatcher platform is built on Open Visual Cloud, which is platform agnostic, so customers can run it on their cloud provider’s Intel-powered instances or on their own Intel servers on-site. We are also open to providing support during deployment.
- Are there more features in progress that we can expect to see as an upgrade to the platform?
- FakeCatcher tells us whether a video is fake, but it gives no information about the video’s origin. Our follow-up research (https://ieeexplore.ieee.org/document/9304909) reveals the source of each deepfake.
- Similar to FakeCatcher, we can utilize the projection of photoplethysmography (PPG) signals in the generative space of each GAN to trace which GAN created any given deepfake.
- Deepfake source detection is an important milestone in tracking and tracing deepfakes back to their origin, and in understanding how they are created and spread.
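One simplified way to picture source detection, assuming precomputed PPG maps and an illustrative label set, is a multi-class model over the same features FakeCatcher uses for its binary decision: instead of real versus fake, it predicts which generator produced the clip. The scikit-learn model below is our stand-in, not the paper's architecture.

```python
# Illustrative reading of the source-detection idea: reuse PPG-style maps as
# features, but predict *which generator* made the clip rather than a binary
# real/fake label. Labels and model choice here are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

GENERATORS = ["real", "Face2Face", "FaceSwap", "NeuralTextures"]  # hypothetical

def train_source_model(maps: np.ndarray, labels: np.ndarray) -> LogisticRegression:
    """maps: (N, cells, T) PPG maps; labels: generator index per clip."""
    X = maps.reshape(len(maps), -1)   # flatten each map into one feature row
    return LogisticRegression(max_iter=1000).fit(X, labels)

def attribute(model: LogisticRegression, ppg: np.ndarray) -> str:
    """Name the generator the model believes produced this PPG map."""
    return GENERATORS[int(model.predict(ppg.reshape(1, -1))[0])]
```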
- Our team is also exploring other authenticity cues for deepfake detection. We developed another deepfake detector based on eye and gaze consistency in videos. Real human gazes are almost always convergent or coplanar, whereas deepfake eyes exhibit low or no correlation, like googly eyes.
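A toy version of that convergence cue follows, under assumptions: the per-eye 3D gaze origin and unit direction come from some upstream gaze estimator, and the threshold is an illustrative guess. Convergent real gazes give a small gap between the two rays; independent "googly" eyes do not.

```python
# Toy sketch of the gaze-consistency cue; inputs assumed from a gaze
# estimator: one 3D origin and unit direction per eye, in a shared frame (cm).
import numpy as np

def ray_gap(o1, d1, o2, d2) -> float:
    """Minimum distance between two gaze rays o + t*d (d are unit vectors)."""
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < 1e-8:                    # near-parallel gazes
        return float(np.linalg.norm(np.cross(o2 - o1, d1)))
    return float(abs(np.dot(o2 - o1, n)) / np.linalg.norm(n))

def gaze_consistent(left, right, thresh_cm: float = 2.0) -> bool:
    """left/right are (origin, direction) tuples; threshold is illustrative."""
    return ray_gap(left[0], left[1], right[0], right[1]) < thresh_cm
```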
- How do you ensure that responsible deepfakes are harmless when produced? And are these kinds of media detectable by detection tools?
- We use synthetic images as source images when creating privacy-enhancing deepfakes. We lift all faces into a face embedding space, select one of the furthest (most dissimilar) faces with a similar age and gender in that space as the source image, and create a deepfake using that source. Each time we select a source image, we select within a threshold, so the resulting deepfakes for the same face are usually different. Moreover, we experimented with two spaces of synthetic faces: a popular dataset created by StyleGAN, and a balanced dataset in which faces are equally distributed across genders, ages, and skin tones.
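A sketch of that selection rule, under assumed inputs: precomputed embeddings plus age and gender attributes for a pool of synthetic faces. The age tolerance and the size of the "furthest" band are illustrative, not the actual thresholds used.

```python
# Sketch of the embedding-space source selection described above; the
# tolerance and band size are illustrative assumptions, not Intel's values.
import numpy as np

def pick_source(target_emb, pool_embs, pool_ages, pool_genders,
                target_age, target_gender, age_tol=5, top_k=10, rng=None):
    """Return the pool index of a randomly chosen far-but-matching face."""
    rng = rng or np.random.default_rng()
    # Keep candidates with the same gender and a similar age.
    ok = (pool_genders == target_gender) & (np.abs(pool_ages - target_age) <= age_tol)
    idx = np.flatnonzero(ok)
    # Rank candidates by embedding distance to the target face.
    dists = np.linalg.norm(pool_embs[idx] - target_emb, axis=1)
    farthest = idx[np.argsort(dists)[-top_k:]]  # the most dissimilar candidates
    return int(rng.choice(farthest))            # randomized within that band
```

Sampling within the far band, rather than always taking the single furthest face, is what makes repeated anonymizations of the same person come out different while staying equally dissimilar.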