Trust can evaporate in an instant when technology turns mischievous. That's the latest from the wild world of AI, where scammers are using deepfake videos of the late Dr. Michael Mosley, once a trusted face in health broadcasting, to hawk supplements like ashwagandha and beetroot gummies.
The clips appear on social media and feature Mosley passionately advising viewers with bogus claims about menopause, inflammation, and other health fads, none of which he ever endorsed.
When Familiar Faces Sell Fiction
Scrolling through Instagram or TikTok, you might stumble across a video and think, "Wait, is that Mosley?" And you'd be right… sort of. These AI creations stitch together clips from well-known podcasts and appearances to mimic his tone, expressions, and hesitations.
It's eerily convincing until you pause to think: hold on, he passed away last year.
A researcher from the Turing Institute warned that the technology is advancing so fast it will soon be nearly impossible to tell real from fake content by sight alone.
The Fallout: Health Misinformation in Overdrive
Here's where things get sticky. These deepfake videos aren't harmless illusions. They push unverified claims, like beetroot gummies curing aneurysms or moringa balancing hormones, that stray dangerously from reality.
A dietitian warned that such sensational content seriously undercuts public understanding of nutrition. Supplements are no shortcut, and exaggerations like these breed confusion, not wellness.
The UK's medicines regulator, the MHRA, is looking into the claims, while public health experts keep urging people to rely on credible sources (think the NHS and your GP) rather than slick AI promotions.
Platforms in the Hot Seat
Social media platforms have found themselves in the crosshairs. Despite policies against deceptive content, experts say tech giants like Meta struggle to keep up with the sheer volume and virality of these deepfakes.
Under the UK's Online Safety Act, platforms are now legally required to tackle illegal content, including fraud and impersonation. Ofcom is watching enforcement, but so far the offending content often reappears as fast as it's taken down.
Echoes of Real-Fake: A Worrying Trend
This isn't an isolated hiccup; it's part of a growing pattern. A recent CBS News report revealed dozens of deepfake videos impersonating real doctors giving medical advice worldwide, reaching millions of viewers.
In one case, a physician discovered a deepfake pushing a product he never endorsed, and the resemblance was chilling. Viewers were fooled, and comments rolled in praising the doctor, all based on a fabrication.
My Take: When Technology Misleads
What hits me hardest about this isn't just that tech can imitate reality; it's that people believe it. We've built our trust on experts, on voices that sound calm and knowledgeable. When that trust is weaponized, it chips away at the very foundation of science communication.
The real battle here isn't just detecting AI; it's rebuilding trust. Platforms need more robust checks, clear labels, and maybe, just maybe, a reality check from users before hitting "Share."