AI algorithm detects deepfake videos with high accuracy
"Deepfakes" are out there, and they can do real damage. But the harm can be mitigated if these videos can be automatically detected.

Artificial intelligence (AI) contributes significantly to good in the world. Unfortunately, just like any other technology, it can also be used by those with less noble intentions. Such is the case with an AI-based technique called "deepfake" (a combination of "deep learning" and "fake"), which uses deep neural networks to easily create fake videos in which the face of one person is superimposed on that of another. The tools are easy to use, even for people with no background in programming or video editing, and they can be used to create compromising videos of virtually anyone, including celebrities, politicians and corporate public figures. So far, people have used deepfake videos in pornography and satire to make it appear that famous people are doing things they wouldn't normally do. In this era of unprecedented connectedness and instant communication, when news can go viral in a matter of hours, such videos can cause great harm to those who appear in them as well as to the social and cultural psyche of the associated communities. Even automatic face recognition systems are vulnerable to deepfake videos, with false acceptance rates of 85.62% and 95.00% reported on high-quality versions, which means methods for detecting deepfake videos are necessary.
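For readers unfamiliar with the metric, the false acceptance rate is the fraction of impostor attempts, here deepfake videos, that a recognition system wrongly accepts as genuine. A minimal illustrative calculation with made-up numbers, not figures from any study:

```python
def false_acceptance_rate(accepted_impostor_attempts: int, total_impostor_attempts: int) -> float:
    """FAR = impostor (e.g. deepfake) attempts accepted / total impostor attempts."""
    return accepted_impostor_attempts / total_impostor_attempts

# Hypothetical example: if 8,562 of 10,000 deepfake probes are accepted,
# the system's false acceptance rate is 0.8562, i.e. 85.62%.
print(f"{false_acceptance_rate(8_562, 10_000):.2%}")
```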
Making a deepfake video is a lot like translating between languages: a deepfake algorithm learns the facial movements of one person and then synthesizes images of another person's face making analogous movements. Before they can work properly, the deep neural networks behind the technique need a lot of source information, such as photos of the person being the source or target of the impersonation. The more images used to train a deepfake algorithm, the more realistic the digital impersonation will be. Because the algorithm is trained on whatever face images of a person are available on the internet, the quality of the fake depends on that data, and even for people who are photographed often, few images are available online showing their eyes closed.

Detecting blinking

There are still flaws in this new type of algorithm, and one of them has to do with how the simulated faces blink, or don't. Healthy adults blink regularly and involuntarily; that is what would be normal to see in a video of a person talking. Without training images of people blinking, however, deepfake algorithms are less likely to create faces that blink normally. This observation offers a way to detect deepfake videos, and researchers have developed automatic systems that examine videos for errors such as irregular blinking or lighting patterns.
To be more specific, one such system scans each frame of a video in question, detects the faces in it and then locates the eyes automatically. To avoid falling prey to a similar flaw, the system was trained on a large library of images of both open and closed eyes. When the overall rate of blinking is calculated and compared with the natural range, characters in deepfake videos turn out to blink far less frequently than real people.
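As a rough sketch of the blink-rate check just described (not the researchers' actual implementation), the snippet below assumes a face and eye detector has already produced a per-frame eye-openness score between 0 and 1; the closed-eye threshold and the "natural" blink range are illustrative assumptions.

```python
from typing import List

# Illustrative assumptions, not values from any published study.
EYE_CLOSED_THRESHOLD = 0.2           # openness score below this counts as "closed"
NATURAL_BLINKS_PER_MINUTE = (8, 30)  # rough range for a person who is talking

def count_blinks(openness_per_frame: List[float]) -> int:
    """Count closed-eye episodes, i.e. transitions from open to closed."""
    blinks = 0
    previously_closed = False
    for score in openness_per_frame:
        closed = score < EYE_CLOSED_THRESHOLD
        if closed and not previously_closed:
            blinks += 1
        previously_closed = closed
    return blinks

def looks_like_deepfake(openness_per_frame: List[float], fps: float) -> bool:
    """Flag a clip whose blink rate falls below the natural range."""
    minutes = len(openness_per_frame) / fps / 60.0
    blinks_per_minute = count_blinks(openness_per_frame) / minutes
    return blinks_per_minute < NATURAL_BLINKS_PER_MINUTE[0]

# Hypothetical 10-second clip at 30 fps in which the eyes never close:
suspicious_clip = [1.0] * 300
print(looks_like_deepfake(suspicious_clip, fps=30))  # True
```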
This isn't the final word on detecting deepfakes, of course. People who want to confuse the public will get better at making false videos, and the technology community will need to continue to find ways to detect them.

Deepfake detection method

Most of the academic research surrounding deepfakes seeks to detect the videos. Detection methods mostly rely on artifacts or inconsistencies in intrinsic features between fake and real images or videos, and standard detection models analyze videos frame by frame to spot any sign of manipulation. The stakes are high: an overly trusted detection algorithm that can be tricked could itself be weaponized by those seeking to spread false information.

While AI-based deepfake video detection methods do exist, researchers from Thapar Institute of Engineering and Technology and Indraprastha Institute of Information Technology in India have developed a new algorithm with increased accuracy and precision. Broadly, their approach consists of a classifier that decides whether an input video is real or (deep) fake. The algorithm itself can be divided into two levels; the second level comprises two main components: a convolutional neural network (CNN) and a long short-term memory (LSTM) stage. What features are extracted and how they are defined is known only to the CNN. Upon comparing the original and deepfake videos, the LSTM network easily detects inconsistencies in the frames of the latter. The algorithm can identify a deepfake using only about two seconds of video material.
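The published architecture is not reproduced here, but a generic version of the CNN-plus-LSTM idea, in which a CNN summarizes each frame and an LSTM scans the resulting sequence for temporal inconsistencies before a final real-or-fake decision, might look like the following PyTorch sketch. All layer sizes and names are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class CnnLstmDeepfakeClassifier(nn.Module):
    """Toy two-stage video classifier: a CNN extracts per-frame features,
    an LSTM looks for temporal inconsistencies across the frame sequence."""

    def __init__(self, feature_dim: int = 128, lstm_hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(                        # per-frame feature extractor
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feature_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feature_dim, lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, 1)            # real-vs-fake logit

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, frames, 3, height, width)
        b, t, c, h, w = clip.shape
        features = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (last_hidden, _) = self.lstm(features)
        return self.head(last_hidden[-1])                # (batch, 1) logit

# Hypothetical usage: two 60-frame clips (~2 seconds at 30 fps), 64x64 pixels each.
model = CnnLstmDeepfakeClassifier()
logits = model(torch.randn(2, 60, 3, 64, 64))
print(torch.sigmoid(logits).shape)  # torch.Size([2, 1]): one fake-probability per clip
```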

To test the approach, the researchers collated a dataset of real and face-swapped videos. A portion of the frames of these videos were labeled and fed to the algorithm as training data; the rest were used as a validation dataset to check whether the program could correctly catch face-swapped videos. For a total of 181,608 real and deepfake frames retrieved from this dataset, the proposed method achieved higher accuracy (98.21%) and precision (99.62%), with a much lower total training time. The study is titled "Deepfakes: temporal sequential analysis to detect face-swapped video clips using convolutional long short-term memory."
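As a reminder of what the two reported figures measure, accuracy is the share of all frames classified correctly, while precision is the share of frames flagged as fake that really are fake. A small, self-contained illustration with made-up labels, not the study's data:

```python
def accuracy_and_precision(y_true, y_pred):
    """y_true / y_pred are sequences of 0 (real) and 1 (deepfake) frame labels."""
    pairs = list(zip(y_true, y_pred))
    correct = sum(t == p for t, p in pairs)
    true_pos = sum(t == 1 and p == 1 for t, p in pairs)
    pred_pos = sum(p == 1 for _, p in pairs)
    accuracy = correct / len(pairs)
    precision = true_pos / pred_pos if pred_pos else 0.0
    return accuracy, precision

# Hypothetical frame-level labels and predictions for eight frames.
truth       = [1, 1, 1, 0, 0, 0, 1, 0]
predictions = [1, 1, 0, 0, 0, 0, 1, 1]
acc, prec = accuracy_and_precision(truth, predictions)
print(f"accuracy={acc:.2%}, precision={prec:.2%}")  # accuracy=75.00%, precision=75.00%
```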

The technology is improving rapidly, and the competition between generating and detecting fake videos is analogous to a chess game. This study highlights the boon and the bane of AI: while it can be used to damage reputations and spread misinformation, it can also be used to prevent infodemics, the harms of which are sharply felt amid crises like the COVID-19 pandemic. The researchers' work could be a milestone in the quest to counter one of the many kinds of infodemics that we face today. It is a start, and it offers hope that computers will be able to help people tell truth from fiction.

Original post: https://techxplore.com/news/2020-07-ai-algorithm-deepfake-videos-high.html