It’s fascinating to live in a time when deepfakes and AI-generated content are nearly indistinguishable from reality. Recently, Google and Meta have made remarkable advancements in the AI field, and deepfake scams are now fooling a much broader audience than just the elderly. North Carolina reality star Huda Mustafa, who is currently featured on Love Island, has become the latest victim of deepfake virality. The deepfake has spread so widely that it has deceived many people into purchasing a product falsely claimed to be linked to the star.
The video in question features Huda speaking to the camera about a perfume she claims to wear on the show, a moment made popular among viewers after her co-star Jeremiah Brown highlighted it. Huda says the perfume is “Angham” by Lattafa, and the statement is followed by a brief clip of her repeating the same claim during the show. There’s also a link to a shop where viewers can purchase the perfume, which has led some TikTok users to buy it, including the woman featured in the video.
However, the video of Huda speaking to the camera is fake, as evidenced by several details: her voice doesn’t match her lips, her eyes don’t align, and there are other, subtler giveaways. Even the clip of her making the claim on the show is fake, the product of some clever video editing. Considering how easy it is to manipulate a voice, especially that of someone who speaks as much as Huda does on Love Island, it doesn’t surprise me that people fell for the deepfake.
Various commenters were not as forgiving, with one person saying, “You actually fell for the ai 💔 I feel so smart.” Others focused on the consumer irony of the situation, with one writing, “See this is the problem. It’s not even the AI. It’s the people that will buy anything just because somebody famous says to buy it.”
Unfortunately, artificial intelligence is only going to get smarter as more and more data is fed into models. Hopefully, there will be some sort of crackdown on how AI can be used, though I’m not sure what could realistically be done to make that happen.