Fake videos pose a real threat. They circulate widely on social networks such as WhatsApp, Facebook, and Twitter, damaging reputations and even causing financial losses for individuals, businesses, and governments. Detecting deepfake videos in real time is difficult because most detection programmes require footage to be uploaded for analysis, with results arriving hours later. Intel, however, has commercialised "FakeCatcher" as part of its Responsible AI work: a technique that identifies fake videos with a 96% accuracy rate. According to the company, its deepfake detection platform is the first real-time deepfake detector in the world, returning results in milliseconds.

“Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did,” says Ilke Demir, senior staff research scientist at Intel Labs. As the threat posed by deepfake videos grows, Gartner forecasts that companies will spend up to $188 billion on cybersecurity solutions.

Most deep learning-based detectors scan raw data for signs of manipulation and try to pinpoint the flaws in a video. FakeCatcher takes the opposite approach: it looks for authentic clues in real recordings by analysing what makes us human, the subtle "blood flow" visible in a video's pixels. When the heart pumps blood, veins change colour. FakeCatcher gathers these blood-flow signals from across the face and uses algorithms to convert them into spatiotemporal maps. Deep learning then lets Intel instantly determine whether a video is authentic or fake.
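Intel has not published FakeCatcher's implementation, but the general idea described above can be sketched roughly in Python: average colour signals from a few facial regions over time, stack them into a spatiotemporal map, and pass the map to a small classifier. Everything below is an illustrative assumption, including the region boxes, the green-channel proxy for blood flow, the map layout, and the toy network; it is not Intel's code.

```python
# Illustrative sketch only: FakeCatcher's exact pipeline is not public.
# The region choices, colour proxy, and classifier here are assumptions.

import numpy as np
import torch
import torch.nn as nn


def region_signals(frames: np.ndarray, regions: list) -> np.ndarray:
    """Extract a crude per-region blood-flow proxy from video frames.

    frames:  (T, H, W, 3) uint8 array of face-cropped video frames.
    regions: list of (y0, y1, x0, x1) boxes covering parts of the face.
    Returns a (num_regions, T) array of mean green-channel intensities,
    a rough stand-in for the subtle colour changes caused by blood flow.
    """
    signals = []
    for (y0, y1, x0, x1) in regions:
        patch = frames[:, y0:y1, x0:x1, 1].astype(np.float32)  # green channel
        sig = patch.mean(axis=(1, 2))                          # one value per frame
        sig = (sig - sig.mean()) / (sig.std() + 1e-6)          # normalise per region
        signals.append(sig)
    return np.stack(signals)  # shape (num_regions, T): a simple "spatiotemporal map"


class MapClassifier(nn.Module):
    """Tiny CNN that labels a spatiotemporal map as real (0) or fake (1)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


if __name__ == "__main__":
    # Placeholder input: random frames stand in for a real face-cropped clip.
    T, H, W = 64, 96, 96
    frames = np.random.randint(0, 256, size=(T, H, W, 3), dtype=np.uint8)
    regions = [(10, 40, 10, 40), (10, 40, 56, 86), (50, 80, 30, 66)]  # e.g. cheeks, chin

    st_map = region_signals(frames, regions)                # (3, 64)
    x = torch.from_numpy(st_map).unsqueeze(0).unsqueeze(0)  # (1, 1, 3, 64)
    logits = MapClassifier()(x)
    print("real/fake logits:", logits.detach().numpy())
```

In practice a trained model of this kind would be run on short sliding windows of a live stream, which is what makes millisecond-level verdicts plausible; the untrained toy network above only demonstrates the data flow.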

Intel's real-time platform uses the FakeCatcher detector, created by Demir in collaboration with Umur Ciftci of the State University of New York at Binghamton. It runs on a server built with Intel hardware and software and is accessed through a web-based platform.

According to Intel, FakeCatcher has a variety of potential applications. Social media platforms could use it to stop users from uploading harmful deepfake videos. Global news organisations, which routinely fact-check footage, could use the detector to avoid unintentionally amplifying falsified videos. And nonprofit organisations could use the platform to democratise deepfake detection for everyone.