Woman Returns From Her Honeymoon To Find A Creepy AI Deepfake Video Of Her In An Ad
Artificial intelligence (AI) is slowly gaining traction and becoming a part of our everyday lives. But the world has mixed feelings about it. Some see it as a positive driving force for innovation and creativity, while others worry about it being used to spread fake news or misinformation.

One of the biggest concerns about AI is the creation and use of deepfakes. Although they can be made for entertainment, they’re sometimes used for more sinister purposes, and this woman became the victim of one such deepfake ad.

More info: TikTok

Woman shares worrying account of how her likeness was used without her consent in a deepfake ad to sell pills to cure impotence 

Image credits: michel.c.janse

Michel Janse, a 27-year-old YouTuber and TikToker, shared the harrowing account of finding a deepfake ad of herself online

She told her followers that the incident happened while she was on her honeymoon, when concerned friends and family likely came across the ad and shared it with her. The company promoting the pills had pulled her image from one of the videos on her YouTube channel. She said, “This ad was me in my bedroom, in my old apartment in Austin, wearing my clothes, talking about their pill. The only thing is, it wasn’t my voice.”

The idea behind showing the clip of the AI video was to inform her followers about the dangers of deepfakes and how “we need to question everything we see.” As Michel pointed out, “someone that you know could be in a video saying something to you, looks exactly like them, and it could be completely fabricated.” It goes to show that we need to start questioning the content we see online because it could be completely fake.

Image credits: michel.c.janse

“The internet is changing fast, I guess, trust no one, believe nothing on the internet, it’s just a motto to live by for a while”

It’s difficult to know how to deal with unethical deepfakes like this one, especially if they’ve used your image without your consent. To get an expert opinion, Bored Panda reached out to Professor Siwei Lyu, Ph.D. He is a SUNY Empire Innovation Professor in the Department of Computer Science and Engineering at the University at Buffalo. Dr. Lyu’s areas of interest include digital media forensics, computer vision, and machine learning. 

He explained the steps Michel could take to deal with the ad. Dr. Lyu said that first, one must “ensure that all evidence is thoroughly documented, including time stamps, and securely stored. Notify social media platforms and request the removal of the deepfake content. If [it] is of a criminal nature (e.g., involving explicit content or causing financial harm), consider pursuing legal action and report the incident to federal or local law enforcement agencies.”

Image credits: Matheus Bertelli (not the actual photo)

Hear the full story from Michel’s perspective

@michel.c.janse — “storytime: AI stole my likeness and created a deepfake of me ✌🏼😅 believe nothing 🫡” ♬ original sound – Michel Janse

AI has reached a whole new level of sophistication, and it’s become harder to figure out what’s real

Artificial intelligence and deep learning algorithms are used to create deepfakes, which usually take the form of videos, audio, or images. These algorithms are so advanced that they can alter or replace existing content almost seamlessly. In fact, one study found that only 46% of adults could tell the difference between real and AI-generated content.

Research even suggests that around 8 million digitally manipulated videos like this could be posted online by 2025. If Michel had not shared her story, people would have believed that she had actually starred in that ad and endorsed those pills. But luckily, as soon as she learned about the video, she decided to inform people about it.

Professor Lyu explained how someone could figure out if a video they came across was AI-generated. He said, “the person could know whether it was them who made the specific video recording. If they can find the original video that was used to make the deepfake video, then it is more strong evidence. For someone who is not the subject or knows the subject, it is generally difficult to spot the deepfake.”

“One can notice some artifacts in deepfake videos, especially those made with software and did not undergo manual cleanup operations. This may include blurry mouth regions or lack of synchronization between mouth movement and voices (signs of the video being a lip-syncing one). One can also use available deepfake detection tools (Reality Defender, Deepware scanner, etc),” he mentioned.

Image credits: katemangostar (not the actual photo)

As Michel stated, there isn’t exactly a guidebook to help us separate fact from fiction. In 2018, researchers found that deepfakes typically don’t blink. But soon after that research was published, digitally altered videos began appearing in which the subjects did blink. It shows just how fast the technology learns from its mistakes.

Poor-quality AI videos are easier to spot. Just as Dr. Lyu mentioned, there might be bad lip-syncing, blurriness around the person’s face, or general inconsistencies in their movement. Sometimes, the person’s skin tone may also appear patchy, their teeth or hands might be badly rendered, and even weird lighting effects can be observed.
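Some of these cues can even be checked programmatically. As a purely illustrative sketch (not a production detector), the well-known eye-aspect-ratio heuristic from blink-detection research scores how open an eye is from six landmark points; the coordinates below are hypothetical, and in practice would come from a face-landmark library:

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six eye landmarks, ordered:
    outer corner, two upper-lid points, inner corner, two lower-lid points.
    An open eye scores noticeably higher than a closed one, so a long video
    whose EAR never dips may show a subject who never blinks."""
    def dist(a, b):
        # Euclidean distance between two (x, y) landmark points.
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    # Ratio of the two vertical lid distances to the horizontal eye width.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Hypothetical landmark coordinates for an open and a nearly closed eye.
open_eye   = [(0, 0), (1, 1.0), (3, 1.0), (4, 0), (3, -1.0), (1, -1.0)]
closed_eye = [(0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1)]

print(eye_aspect_ratio(open_eye))    # 0.5
print(eye_aspect_ratio(closed_eye))  # 0.05
```

As the blinking example above shows, any single heuristic like this can be defeated once generators learn it, which is why Dr. Lyu points to dedicated detection tools rather than eyeballing alone.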

The problem is that people consume massive amounts of content every day. Not everyone has the patience to stop and check whether a video has been digitally altered; many will simply believe whatever they’re shown. That’s why companies should be held accountable for protecting people from such media.

Dr. Lyu stated: “First, companies offering generative AI technology should integrate provenance features, such as watermarking, into their tools. This will allow media created using their technology to be reliably traced back to the source.” 

“Second, these companies must ensure that deepfakes are not created using a person’s likeness or voice without their explicit consent. The original subject must agree to both the use of their identity and the specific message the deepfake is intended to convey,” he added.
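To make the provenance idea concrete: real watermarking systems (and provenance standards such as C2PA) are far more sophisticated, but a toy least-significant-bit scheme illustrates the basic principle of hiding a traceable payload inside pixel data. Everything below is an illustrative sketch, not how any actual generator does it:

```python
def embed_watermark(pixels, bits):
    """Hide one payload bit in the least-significant bit of each pixel value.
    Flipping the LSB changes a 0-255 intensity by at most 1, which is
    invisible to the eye, yet the payload can later be read back exactly."""
    marked = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return marked + pixels[len(bits):]  # leave any remaining pixels untouched

def extract_watermark(pixels, n):
    """Read back the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

original = [10, 255, 8, 7, 42]       # hypothetical grayscale pixel values
payload  = [1, 0, 1, 1]              # e.g. bits identifying the generator

marked = embed_watermark(original, payload)
print(marked)                        # [11, 254, 9, 7, 42]
print(extract_watermark(marked, 4))  # [1, 0, 1, 1]
```

A real provenance watermark also has to survive compression, resizing, and cropping, which naive LSB tricks do not; that robustness is exactly what Dr. Lyu’s proposal would demand of vendors.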

Until proper laws are created to protect people from digitally manipulated videos like this, we must take matters into our own hands. Raise the alarm if you come across unethical deepfakes, and always remember… constant vigilance!

Netizens were worried after listening to Michel’s story, and many urged her to sue the company that made the video


Image credits: ThisIsEngineering (not the actual photo)

Beverly Noronha

Writer, BoredPanda staff

You can call me Bev! I'm a world-class reader, a quirky writer, and a gardener who paints. If you’re looking for information about tattoos, Bulbasaur, and books, then I'm the NPC you must approach.

Denis Krotovas

Author, BoredPanda staff

I am a Visual Editor at Bored Panda. While studying at Vilnius Tech University, I learned how to use Photoshop and decided to continue mastering it at Bored Panda. I am interested in learning UI/UX design and creating unique designs for apps, games, and websites. In my spare time, I enjoy playing video and board games, watching TV shows and movies, and reading funny posts on the internet.

What actions do you think she should take regarding the deepfake ad?
Tabitha, Community Member:

We need to legislate AI with very tough and prohibitively expensive penalties including prison time, and we need to do it NOW, before this b******t becomes all too common. AI could be a good thing, but it definitely won't be if it's not tightly controlled, and allowed to fall into the wrong hands. This is just an ad using this woman's appearance. Imagine worse---as well as the worst--things AI could be used for, and by the worst of people. It could ruin people's lives. It could pit countries against each other that actually have no disagreements, except the fictional one created by, and presented with, AI generated images, and start unnecessary wars between them. No, this has to be nipped in the bud before it gets out of hand.

Gabby M, Community Member:

“You are young yet, my friend,” replied my host, “but the time will arrive when you will learn to judge for yourself of what is going on in the world, without trusting to the gossip of others. Believe nothing you hear, and only one half that you see." Poe Time has come where we can't believe anything anymore. Thank you, Tech Gen.
