How to Spot a Deepfake
This article covers AI and deepfakes: how they work, and how to identify and avoid misinformation (hoaxes) and propaganda.
What Is a Deepfake?
Deepfake is a technology based on artificial intelligence (AI), especially deep learning, that is used to manipulate images, videos, or audio so they appear to show someone doing or saying something that never actually happened.
How Does Deepfake Work?
Deepfake technology uses deep learning algorithms such as:
Generative Adversarial Networks (GANs): This involves two neural networks competing with each other — one generates fake content, and the other tries to detect whether it's fake. Over time, the generated content becomes increasingly realistic.
Autoencoders: These are used to learn and reconstruct a person’s face, and then overlay it onto someone else’s face in a video.
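The adversarial loop behind GANs can be illustrated with a toy numeric sketch. This is not a real neural network, only an analogy in Python: the "discriminator" here is just a score of how close a number is to the real data, and the "generator" is a single value that climbs that score. In a real GAN both sides are deep networks trained on images.

```python
import random

random.seed(0)
# "Real" data: numbers clustered around some unknown mean.
real_samples = [5.0 + random.uniform(-0.5, 0.5) for _ in range(200)]
real_mean = sum(real_samples) / len(real_samples)

def discriminator_score(x):
    """Higher means 'looks more real' (closer to the real data)."""
    return -(x - real_mean) ** 2

g = 0.0          # the generator's single fake "sample"
lr, eps = 0.1, 1e-4
for _ in range(500):
    # Numerical gradient of the score with respect to the fake sample:
    # the generator nudges its output to better fool the discriminator.
    grad = (discriminator_score(g + eps) - discriminator_score(g - eps)) / (2 * eps)
    g += lr * grad
# After training, g is almost indistinguishable from the real data.
```

This back-and-forth pressure is why generated content keeps getting more realistic over time.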
Examples of Deepfake
Video: A celebrity’s face is placed onto another person’s body, making it look like they’re doing or saying something they never did.
Audio: A person’s voice is mimicked to create fake recordings.
Image: A person’s face is realistically inserted into a photo.
The Impact of Deepfakes
Fraud
Deepfakes can mimic another person's features, such as their facial image, gestures, voice, and skin texture. For example, if your face or photos are indexed by a search engine, an attacker can collect them and use them as a dataset to create a deepfake of you, whether a static image or a dynamic one rendered in real time. Fraudsters can use such deepfakes for identity theft, bypassing facial verification, and other activities such as money laundering, illegal goods transactions, or hacking.
Scamming

This is similar to the fraud point above, but seen from the scammer's perspective. Many scammers use deepfakes for love scams, online-dating fraud, and other schemes. For example, a scammer can use deepfake photos and a deepfake voice of a fake woman to seduce potential victims, offer fake jobs, or blackmail you. They can also create deepfake pornographic content using your face, spread it on social media or adult websites, and then demand a ransom to delete it. This can cause serious harm.
Propaganda and Fake News
Producing propaganda and fake news has become far easier than with traditional methods. In Indonesia, for example (I am not surprised), a significant number of people still believe deepfake or AI-generated content: fake footage from elections and presidential campaigns, of the president meeting a foreign prime minister to take some geopolitical action, or of supposed international agreements. It is highly ironic. Fake videos are also created to manipulate public opinion, to run fraudulent fundraising, and simply to make money, for example by monetizing social media accounts that use such content to generate high traffic.
A Weapon of War
During wars such as Russia–Ukraine and Israel–Palestine, a lot of deepfake content circulates: fake videos of weapons, missile attacks, air-defense systems, and more, often generated with AI or lifted from video games. Many war games are semi-realistic, and it is very difficult for the general public to tell whether a clip comes from a game or from AI. I do not come across this constantly, but there are periods when I frequently find game clips being shared as war documentation; you can search for such examples on social media. Deepfakes are also used as weapons of war in themselves, for opinion manipulation and propaganda, with fake war footage generated by AI or taken from video games and uploaded to social media.
Identity Theft
This overlaps with the fraud point above, but it deserves its own explanation. With deepfakes, an attacker can easily generate another person's face to impersonate them or manipulate the public for personal gain. They collect your photos and videos spread across the internet, generate a model from them, and adopt your persona, and you are the one who suffers the consequences. For example, if entering a room requires a face scan, an attacker can use video injection, generated images of your face, or a 3D mask to pass verification, and then do bad things in your name. This may sound like a rare scenario, but the potential for identity theft is real.
Piracy
I am not an expert on the law around AI, but as explained above, AI learns from existing datasets, taking images and other content as raw material for data analysis and machine learning. If an AI can generate faces from all over the world, where does that data come from? Often it means using works covered by intellectual property rights (HAKI) or imitating other people's images without permission. This is not necessarily piracy in a legal sense, but in my opinion AI can be used for piracy when the purpose is to harm others.
Where Does the Data Come From?
In my own study of AI, ML datasets often come from open sources. In college I studied image comparison and gesture detection using Python and its libraries, collecting data from the internet (for example Kaggle and other public datasets) and from photos I gathered myself. If you use ChatGPT and upload your face, the dataset grows from your own input, and the AI can analyze the data you provide. Some data comes from third parties, perhaps bought or licensed, and some AI models also scrape the web to collect data and store it as their dataset.
How to Spot a Deepfake?
Deepfake generation still leaves gaps in small details: buildings, accessories, colors, textures, and other complex elements. When I tried prompting ChatGPT or Gemini myself, the AI sometimes missed something, for example a missing finger or unsynchronized, stiff mouth movements. I also see posts on Facebook and other platforms that are almost perfect but still have flaws in small details such as accessories, shadows, and the texture of the clothes people wear. Generated voices often sound stiff and have a strange accent. But for lay people who do not know what to look for, these flaws are difficult to spot.
Case Study 1

Look carefully at the picture of the person in a hijab. At first glance it looks like a real photo taken with an expensive camera or smartphone, but if you look in more detail there is a flaw; see the image below.

There is a small flaw in the finger: it was not rendered perfectly during image generation. Look closer at the image below.

There was also a failure in face processing: two faces were rendered incorrectly during the generation process. Look closer at the image below.

There is a man in black wearing a hijab; a hijab is normally worn by women, not men. One last check:

Check the glass: there is spilled food or drink, but the way the glass is held is strange, as if the hand were pasted onto it rather than gripping it naturally.
Case Study 2

Look at the picture above: a couple with an orange McLaren (or Mazda RX-7). At first glance it looks like a genuine, near-perfect photo taken with a DSLR using careful settings such as ISO and shutter speed, but if you look closely there are still flaws; see the analysis below.

The side mirror on the car is placed strangely: it sits too far inward. I compared it with photos of a real McLaren on Google.

Look closer again.

The reflections on the car are wrong: the middle car shows no reflection of the cars to its right and left, where there should be at least a faint one. Also, the woman casts no shadow on the car, yet the man's hand holding the cigarette does cast a directional shadow.
Finally, check with an online tool



The results confirm that the image came from an AI prompt. You can see this from the analysis I performed and from the strange shadows, lighting, and color textures. You can find the resources on the Jieyab89 GitHub.
Test with an original image taken by a human
Image

Results

NOTE: I took all these AI images from a Facebook group (I forget its name) dedicated to sharing prompts and AI-generated image results.
What Does the AI Do?
AI often performs something like face fusion, depending on the user's prompt: it may blend faces, and the image texture can look cartoonish and overly smooth, missing pores, facial wrinkles, and other fine details. By default AI tends to produce such results, but with a clearer and more complex prompt the output improves and becomes harder to analyze. If the analysis is difficult, check the media with online tools and perform manual image forensics. For deepfake videos, watch the movement: is it stiff? Check wrinkles, facial expressions, blur, and the voice when the person speaks; AI-generated voices tend to sound stiff, with strange expressions and accents. Still images, I think, remain a little harder to detect.
What makes AI good?
What makes AI good is the dataset, the system prompt, and the user prompt. The more data the model has (voice samples, faces, buildings, movements or gestures, high-resolution (HD) media, and samples uploaded by users), the better it performs when processing that data. You can use face-swap software or AI agents available on the internet; as far as I know, Gemini and ChatGPT are both good at handling images. The more detailed the user's prompt, the better the AI understands the command, and the harder the result is to analyze. Older deepfakes still had obvious gaps: flawed image processing, stiff voices, and stiff movements; today's results are much better. I worry about what happens as AI keeps improving and is misused by irresponsible people. Try reading about AGI and image processing on the internet.
Preventing Deepfakes
System or Application
Every security system that relies on face verification is vulnerable to bypass techniques such as deepfake faces, synthetic face masks, printed face images, and video injection. How can validation be strengthened?
Use technology that compares images or video during face verification, and choose a technology backed by large, diverse datasets.
Use live gesture challenges, such as looking left, right, up, and down, smiling, or opening the mouth, to make deepfakes harder to pull off.
Limit verification threads, giving attackers little room to perform video injection.
Limit attempts, for example three failures per day, after which the user is blocked by IP and device ID or name.
Add a blocking feature for any user whose face verification shows anomalies.
Use iris or retina scanning, or depth-based verification such as Apple Face ID.
Flash random colors (blue, yellow, green) on the screen during verification; if the light reflected on the face does not match, reject or flag the attempt.
Vary rotation, moving light, and the background during verification.
Use sound detection, for example noise and speech analysis, and reject verification if there are anomalies in the voice.
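Several of these controls (random gesture challenges, random screen colors, and an attempt limiter keyed to IP and device ID) can be sketched together. The class below is a hypothetical design in Python, not any vendor's API; the name FaceVerificationGate and its methods are invented for illustration.

```python
import random

class FaceVerificationGate:
    """Toy sketch of deepfake-resistant verification: random liveness
    challenges plus a failure limit per (IP, device ID) pair."""
    MAX_FAILURES = 3
    GESTURES = ["look_left", "look_right", "look_up", "look_down",
                "smile", "open_mouth"]
    SCREEN_COLORS = ["blue", "yellow", "green"]

    def __init__(self):
        self.failures = {}   # (ip, device_id) -> failed attempts today

    def new_challenge(self):
        # A random gesture plus a random screen color: a pre-recorded or
        # injected deepfake video is unlikely to match both at once.
        return {
            "gesture": random.choice(self.GESTURES),
            "screen_color": random.choice(self.SCREEN_COLORS),
        }

    def record_result(self, ip, device_id, passed):
        key = (ip, device_id)
        if passed:
            self.failures.pop(key, None)
            return "verified"
        self.failures[key] = self.failures.get(key, 0) + 1
        if self.failures[key] >= self.MAX_FAILURES:
            return "blocked"     # block by IP and device ID, as suggested above
        return "retry"
```

A real system would also verify that the camera feed actually shows the requested gesture and color reflection; that part needs computer-vision models and is omitted here.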
Individual
Photos and Videos
As an individual, you can reduce how much you publish your face on open sites, forums, and elsewhere. The more data an attacker can collect from your uploaded videos and photos, and the higher its quality (HD), the easier it is to generate a deepfake of your face. Use masks, filter effects, or reduced resolution on photos to make data collection harder.
Voice
Limit how much of your voice you publish, and consider sound effects such as echo or other processing to protect it. Many deepfake voices sound bad and stiff when the available voice data is very limited, though sometimes AI can clone a voice easily even so.
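As a rough illustration of degrading voice data, the sketch below adds low-level random noise to audio samples in Python. This is a toy example only: serious voice-cloning defenses in the research literature use targeted adversarial perturbations rather than plain noise, and heavy effects such as echo also change how you sound to human listeners.

```python
import math
import random

def perturb_voice(samples, strength=0.02, seed=42):
    """Add low-level random noise to audio samples (floats in -1..1),
    making them slightly less useful as clean voice-cloning training data."""
    rng = random.Random(seed)
    return [max(-1.0, min(1.0, s + rng.uniform(-strength, strength)))
            for s in samples]

# A 440 Hz tone at an 8 kHz sample rate, standing in for recorded speech.
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(100)]
protected = perturb_voice(tone)
```

The perturbation stays small enough that the audio is still intelligible, while no longer being a bit-exact clean sample.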
Gesture
Limit exposure of your body shape and movements, since attackers can use them as a dataset. Many AI models can, for example, "strip" photos or reconstruct body curves from limited images.
What if you are a public figure? Public figures cannot avoid social media activity such as uploading vlogs and modeling photos, so there are many deepfakes of them: the dataset is wide, which makes an attacker's job easier. If you are a public figure and you are blackmailed, my advice is to inform your fans, report it to the authorities, and ask the platform to take down the spreading content. Digital footprints are hard to erase, so the video may reappear later.
Privacy
Adjust the verification and privacy settings on your accounts and smartphone: prevent search engines from indexing your profile, restrict visibility to friends only, set age limits, and hide your activity from trackers. For smartphones the options depend on the brand, but you can add layers of security with antivirus software, a VPN, and reduced app permissions. Enable 2FA for each account and on your phone, and make sure you can lock the phone remotely if it is lost. You can even buy a hardened phone that is difficult to tap or forensically extract.
Information
To protect yourself and verify information against deepfakes, apply the 5W + 1H questions whenever you find content or media. Here are some tips:
Verify the media itself (images, videos) first, for example with reverse image search or similar-image search, and scan the content with a deepfake-detection tool if you cannot find obvious flaws yourself. This is the main way to protect yourself from fake videos and AI deepfakes.
Verify the geolocation: is the place shown the same as the place claimed in the media? Make sure it matches what you found.
Determine the time (chronolocation) by checking the direction of the sun and shadows; this can indicate when the media was taken. Does it match the claimed date?
Who made the post? Check the account that published the media. Is it a trusted source or an ordinary user? Note that even content from a trusted source should still be validated.
What is the basis? Find out why they published the media and why the narrative is framed that way. Once you understand it, look for other clues by searching for the same content or media.
Understand the context and the narrative. Does it make sense? For example, snowfall in Bali is highly unlikely in terms of natural science and geography, so look for such inconsistencies and understand the content of the narrative.
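The reverse-image check in the first tip can be approximated with a perceptual hash. Below is a minimal average-hash sketch in pure Python over an 8x8 grayscale grid; real tools (and libraries such as imagehash) first downscale the full image, but the comparison principle is the same.

```python
def average_hash(pixels):
    """Toy average-hash: pixels is an 8x8 grid of grayscale values (0-255).
    Each bit records whether a pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return tuple(1 if p > avg else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Two nearly identical "images": the second has one pixel changed,
# standing in for recompression noise or a small edit.
img_a = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
img_b = [row[:] for row in img_a]
img_b[0][0] += 10
distance = hamming(average_hash(img_a), average_hash(img_b))
```

Near-duplicate images yield nearly identical hashes, so a small Hamming distance suggests the suspect image is a re-edited copy of a known original, which is exactly what a reverse-image search exploits at scale.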