Online Fauxmance: The Dangers of Deepfake AI Chatbots

Deepfake AI chatbots are ruining online dating experiences for many users.
The evolution of technology is a double-edged sword: it can propel society forward or unleash chaos.
In recent years, AI has been rolled out across commercial and industrial processes and offered to the public as a free analytical tool. But while many praise this new age of AI, many others oppose it.
Critics warn that, left unchecked or unmoderated, AI can damage social media, create fake online dating experiences, and steal your data.
In the world of dating, the rise of chatbots using AI-generated deepfake images is becoming a cause for concern. Dating apps and sites are now riddled with these bots, creating discomfort and bad experiences for users.
Why Deepfake Images are a Serious Threat
A deepfake image is a fraudulent picture generated by AI software, typically by grafting a real person’s face onto someone else’s body.
While deepfakes are often thought of as lighthearted material for funny Facebook posts, their uses go deeper and darker.
Scammers use deepfaked, stolen photos to trick people on dating sites, run smear campaigns against electoral candidates, and even damage the reputations of celebrities and ordinary citizens by pasting their faces onto explicit content.
South Korean Lawmakers are Against Deepfakes
According to the East Asia Forum, South Korea is legislating to criminalize creating, possessing, or even watching sexually explicit deepfake images and videos of people.
The push comes after at least 812 police reports in 2021 alone involving AI-generated images used as sexual or pornographic material.
Korean and Japanese cultures are known for modesty and conservatism, and explicit exposure is heavily stigmatized.
Women, minors, and politicians have been scandalized online as a “prank” by perpetrators who fail to grasp that they are endangering their victims’ social reputations.
Chatbots: Online Companions or Data Thieves?
To put it simply, a chatbot is a program that imitates a person online: it interprets the questions it is asked, generates new answers, and learns the flow of the interaction as it chats with unsuspecting users.
Chatbots are generally virtual assistants used in finding content, displaying analytics, and helping generate or modify your work.
In dating, they’re used as a substitute for real people on dating apps.
Chatbots are the New Online Girlfriends/Boyfriends
Chatbots are the new rage in fake online dating, letting users have virtual relationships minus the real commitment.
People spend their time creating scenarios and roleplays, and even share real stories from their lives, because they’re lonely and tired of communicating with people who don’t care about them.
Moderators and developers of these dating app chatbots are milking money off of lonely people by giving them what they desire—a “person” who understands their needs and wants.
And it’s not even a real person; it’s just a program of code and analytical algorithms that responds to your messages by learning the manner and style of your replies in real time.
Stealing Money Through Online Fauxmance
In recent research, McAfee, the antivirus and online protection company, revealed that 1 in 3 people admit they could fall for an online romance scam conducted by an AI chatbot, or by scammers using deepfake AI-generated images.
Without effective cybersecurity laws, fraudsters who pair chatbots with stolen catfish photos, and the people who program them, get away with manipulating gullible users into sending large amounts of cash.
Rob S., a cybersecurity professional, experienced an AI romance scam firsthand. He thought he was talking to a real person,
until the “woman” he was messaging started to seem uncanny.
Her messages felt repetitive and a little too generic, but the worst part was that he had fallen for someone who was never even there in the first place.
You might call it an Online Fauxmance: catfished and scammed by a bot programmed to love-bomb and sweet-talk its way into your bank account.
Deepfake Imagery Promotes Catfishing
Aside from smear campaigns and the exploitation of victims’ faces for explicit content, deepfake imagery also increases the likelihood of being catfished.
With AI software studying real users to make its images and graphics ever more lifelike, deepfakes are becoming more believable, especially to desperate, lonely people who can’t tell AI from reality.

Be careful of deepfake AI chatbots! They’re known to lure you into a fake online dating relationship.
Stay Safe! 3 Methods to Avoid an Online Fauxmance
1 in 3 people admit they could fall for an online romance scam. It’s not a laughing matter.
People’s lives have changed because they were unaware that a bot with fake pictures was catfishing them into sending money.
If you’re ever chatting with someone suspicious, don’t hesitate to verify their existence—DO NOT BE AFRAID!
Verify
Try to ask them if they’re up for video calls; that’s a surefire way to verify if they’re real or not. Don’t forget to make sure they’re using the selfie camera.
If they agree to a video call but choose to use a webcam, be very careful: there are virtual-camera applications that can stream pre-made videos.
To ensure that they are real, don’t hesitate to ask them to do a silly pose.
A peace sign or a thumbs-up is too predictable, so it’s better to ask them to write the current date on a piece of paper and hold it up in a selfie.
It might sound awkward asking these from someone you barely know, but that’s the price of safety nowadays.
Anti-AI Software
People need to be aware of deepfake images. You may not know it, but there are applications and services that can identify whether an image used online is real.
You can download detection software or visit verification websites to check whether someone’s pictures are genuine. These tools aren’t perfect and have their limitations, but they can provide a helpful indication.
Simply screenshot or download their photo and upload it to the tool. If the photo is flagged as at least 30% AI-enhanced, or as completely AI-generated, stop the conversation and block the bot.
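For readers comfortable with a little scripting, the rule of thumb above can be sketched as a tiny decision helper. This is only an illustration of the threshold logic; the function name and inputs are hypothetical, and real detection services each return results in their own format.

```python
# Hypothetical helper mirroring this article's rule of thumb:
# block if a detector reports >= 30% AI enhancement, or full AI generation.
# Detector names/outputs vary by service; adapt the inputs accordingly.

def should_block(ai_likelihood: float, fully_generated: bool = False) -> bool:
    """Return True if the conversation should be stopped and the account blocked.

    ai_likelihood: the detector's estimate (0.0-1.0) that the photo is AI-enhanced.
    fully_generated: True if the detector flags the photo as entirely AI-made.
    """
    return fully_generated or ai_likelihood >= 0.30

# Example: a photo scored at 45% AI enhancement crosses the threshold.
print(should_block(0.45))  # True
print(should_block(0.10))  # False
```

The point of wrapping the check in one place is that you can tune the threshold to be stricter or looser without changing the rest of your workflow.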
Go Offline
There’s nothing shameful about seeking romance and intimacy, but there is a certain stigma towards people who replace real connections with AI relationships.
While some users have testified that many of these deepfake-fronted chatbots are safe to interact with, it is best to avoid the fake online dating scene altogether if you don’t want to become a victim.
In the meantime, you can give offline dating another shot—go find someone at the bar, a social club, or even a coffee shop. Go old school.
Meeting people in person allows you to see their facial expressions, know how they talk and respond, and study their behavior.
What Matters is Your Safety
At the end of the day, it boils down to your preferences. Whether you choose to navigate the online dating scene or go old school, what’s important is that you are safe and your dignity is intact.
It’s not about avoiding online dating altogether; it’s about being a smart dater, whether online or offline. Knowledge is power.
External References Used:
“AI for Good or Evil? A Primer on Deepfakes and Chatbots.” Fortress Security Risk Management, https://fortresssrm.com/ai-for-good-or-evil-a-primer-on-deepfakes-and-chatbots/.
“AI chatbots and companions – risks to children and young people.” eSafety Commissioner, 18 February 2025, https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people. Accessed 13 May 2025.
Heikkilä, Melissa. “Three ways AI chatbots are a security disaster.” MIT Technology Review, 3 April 2023, https://www.technologyreview.com/2023/04/03/1070893/three-ways-ai-chatbots-are-a-security-disaster/. Accessed 13 May 2025.
Smith, Georgia, and Joseph Brake. “South Korea confronts a deepfake crisis.” East Asia Forum, 19 November 2024, https://eastasiaforum.org/2024/11/19/south-korea-confronts-a-deepfake-crisis/.
“South Korea to criminalize watching or possessing sexually explicit deepfakes.” CNN, 26 September 2024, https://edition.cnn.com/2024/09/26/asia/south-korea-deepfake-bill-passed-intl-hnk/index.html. Accessed 13 May 2025.
“AI Love You: McAfee Research Reveals 1 in 3 Believe AI Chatbots Could Steal Their Heart.” McAfee, https://www.mcafee.com/en-us/consumer-corporate/newsroom/press-releases/press-release.html?news_id=d2816915-4a89-40e2-b027-b6f2bcac97d3. Accessed 13 May 2025.
Dhaliwal, Jasdev. “AI chatbots are becoming romance scammers—and 1 in 3 people admit they could fall for one.” McAfee, 11 February 2025, https://www.mcafee.com/blogs/privacy-identity-protection/ai-chatbots-are-becoming-romance-scammers-and-1-in-3-people-admit-they-could-fall-for-one/.