
In short
- Viral AI videos replace a creator’s face and body with Stranger Things actors, racking up more than 14 million views.
- Researchers say full-body deepfakes remove the visual cues that detectors relied on to spot earlier face-only manipulation.
- Experts warn that these same tools could fuel scams, disinformation and other abuses as access increases.
A viral post featuring a video reportedly made with Kling AI’s 2.6 Motion Control took social media by storm this week, after a clip from Brazilian content creator Eder Xavier showed him seamlessly swapping his face and body with those of Stranger Things actors Millie Bobby Brown, David Harbour and Finn Wolfhard.
The videos have spread widely across social platforms, amassing more than 14 million views on X, with additional versions posted since. The clips have also caught the attention of technologists, including a16z partner Justine Moore, who shared the video from Xavier’s Instagram account.
“We are not prepared for how quickly production pipelines will change with AI,” Moore wrote. “Some of the latest video models have immediate implications for Hollywood. Endless character changes at negligible cost.”
As image and video generation tools continue to improve, with models such as Kling, Google’s Veo 3.1 and Nano Banana, FaceFusion, and OpenAI’s Sora 2 expanding access to high-quality synthetic media, researchers warn that the techniques seen in the viral clips are likely to spread quickly beyond isolated demos.
A slippery slope
While viewers marveled at the quality of the body-swapping videos, experts warned that the technology was bound to become a tool for impersonation.
“The floodgates are open. It has never been easier to steal an individual’s digital likeness (their voice, their face) and now bring it to life with a single image. No one is safe,” Emmanuelle Saliba, Chief Investigative Officer at cybersecurity firm GetReal Security, told Decrypt.
“We will see systemic abuse on every scale, from one-on-one social engineering to coordinated disinformation campaigns and direct attacks on critical businesses and institutions,” she said.
According to Saliba, the viral videos featuring Stranger Things actors show how thin the guardrails around abuse currently are.
“For a few dollars, anyone can now create full-length videos of a politician, celebrity, CEO or private individual using a single image,” she said. “There is no standard protection of a person’s digital likeness. No guarantee of identity.”
For Yu Chen, professor of electrical and computer engineering at Binghamton University, full-body character swapping goes beyond the face-only manipulation of earlier deepfake tools and introduces new challenges.
“Full-body character swapping represents a significant escalation in the capabilities of synthetic media,” Chen told Decrypt. “These systems must simultaneously handle pose estimation, skeletal tracking, clothing and texture transfer, and natural motion synthesis for the entire human form.”
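To make the moving parts in Chen’s description concrete, here is a minimal, hypothetical Python sketch of how such stages might chain together. Every function is an illustrative stand-in running on toy data, not any real model’s API.

```python
# Hypothetical sketch of the stages Chen lists for a full-body swap
# pipeline. All functions are illustrative stand-ins on toy data.
import numpy as np

def estimate_pose(frame: np.ndarray) -> np.ndarray:
    """Stand-in for pose estimation: 17 random (x, y) joint positions."""
    h, w, _ = frame.shape
    return np.random.rand(17, 2) * [w, h]

def track_skeleton(poses: list) -> np.ndarray:
    """Stand-in for skeletal tracking: a 3-frame moving average that
    smooths each joint coordinate over time."""
    stacked = np.stack(poses)          # shape: (frames, joints, 2)
    kernel = np.ones(3) / 3
    return np.apply_along_axis(
        lambda t: np.convolve(t, kernel, mode="same"), 0, stacked)

def transfer_appearance(frame, pose, texture):
    """Stand-in for clothing/texture transfer. A real system would warp
    the target texture onto the tracked pose; this just blends pixels."""
    return 0.5 * frame + 0.5 * texture

def synthesize_video(frames, texture):
    """Chain the stages: per-frame pose -> temporal tracking -> transfer."""
    poses = [estimate_pose(f) for f in frames]
    tracked = track_skeleton(poses)
    return [transfer_appearance(f, p, texture)
            for f, p in zip(frames, tracked)]

frames = [np.random.rand(256, 256, 3) for _ in range(8)]  # toy "video"
texture = np.random.rand(256, 256, 3)                     # toy target look
out = synthesize_video(frames, texture)
print(f"synthesized {len(out)} frames of shape {out[0].shape}")
```

The point of the sketch is the coupling Chen highlights: errors in any one stage (pose, tracking, or transfer) propagate into the final frames, which is what makes full-body synthesis harder than swapping a face in isolation.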
Along with the Stranger Things clips, the creators also posted a body-swapped video of Leonardo DiCaprio from the film The Wolf of Wall Street.
We’re not ready.
AI just redefined deep fakes and character swaps.
And it’s very easy to do.
Wild examples. Bookmark this.
[🎞️ JulianoMass on IG] pic.twitter.com/fYvrnZTGL3
— Min Choi (@minchoi) January 15, 2026
“Previous deepfake technologies operated primarily within a limited manipulation space, focusing on replacing the facial area while leaving the rest of the frame largely untouched,” Chen said. “Detection methods could take advantage of inconsistencies at the boundaries between the synthetic face and the original body, as well as temporal artifacts when head movements did not naturally align with body movement.”
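As a toy illustration of the boundary cue Chen describes, the sketch below compares noise statistics just inside and just outside a face box, under the simplifying assumption that a composited face carries different noise than its surroundings. Real detectors are learned models; the face box and the score here are hypothetical.

```python
# Toy version of the boundary-inconsistency cue used against face-only
# deepfakes: compare pixel statistics across the seam of a face box.
import numpy as np

def boundary_inconsistency(frame: np.ndarray, box: tuple) -> float:
    """Score statistical mismatch across a (top, bottom, left, right)
    face-box boundary; higher hints the inside came from another source."""
    t, b, l, r = box
    inner = frame[t:b, l:r]
    ring = np.concatenate([
        frame[max(t - 8, 0):t, l:r].ravel(),  # strip just above the box
        frame[b:b + 8, l:r].ravel(),          # strip just below the box
    ])
    return abs(inner.std() - ring.std())      # noise-level mismatch

# Simulate a frame whose "face" region was pasted in with noisier stats.
frame = np.random.normal(0.5, 0.05, (256, 256))
frame[64:192, 64:192] = np.random.normal(0.5, 0.15, (128, 128))
print(f"seam score: {boundary_inconsistency(frame, (64, 192, 64, 192)):.3f}")
```

Full-body swaps remove exactly this seam: when the entire frame is synthesized, there is no composited boundary left to measure.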
Beyond detection, Chen flagged other vectors of abuse. “While financial fraud and impersonation remain concerning, several other vectors of abuse deserve attention,” he said. “Non-consensual intimate imagery represents the most direct vector of harm, as these tools lower the technical barrier to creating synthetic explicit content featuring real individuals.”
Other threats Saliba and Chen point to include political disinformation and corporate espionage: scammers posing as employees or CEOs, fabricated “leaked” clips, and credential-harvesting attacks that bypass controls because “a credible individual on video removes suspicion long enough to gain access to a critical company,” Saliba said.
It’s unclear how studios or the actors portrayed in the videos will respond, but Chen said developers play a crucial role in implementing safety measures because the clips rely on publicly available AI models.
Still, he said, the responsibility must be shared between platforms, policymakers and end users, as placing it solely on developers could prove unworkable and hinder useful applications.
As these tools proliferate, Chen said researchers should prioritize detection models that identify intrinsic statistical signatures of synthetic content, rather than relying on easily removed metadata.
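To give a rough sense of what an “intrinsic statistical signature” might mean in practice, as opposed to removable metadata, the sketch below measures how much of a frame’s energy sits in high spatial frequencies, a cue some detection research has associated with generative artifacts. The feature choice and toy input are assumptions for illustration, not a working detector.

```python
# Toy "intrinsic signature" feature: fraction of spectral energy outside
# the low-frequency band of a frame. A learned detector would compare
# such features across large corpora of real and synthetic video.
import numpy as np

def high_freq_energy_ratio(img: np.ndarray) -> float:
    """Share of 2-D FFT energy outside the central low-frequency band."""
    img = img - img.mean()  # drop the DC term so brightness doesn't dominate
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 2, w // 2
    low = spectrum[ch - h // 8:ch + h // 8, cw - w // 8:cw + w // 8].sum()
    return 1.0 - low / spectrum.sum()

img = np.random.rand(256, 256)  # stand-in for a decoded video frame
print(f"high-frequency energy ratio: {high_freq_energy_ratio(img):.3f}")
```

Unlike a watermark or a metadata tag, a statistical property of the pixels themselves cannot simply be stripped out, which is why Chen argues detection research should focus there.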
“Platforms should invest in both automated detection pipelines and human review capabilities, while developing clear escalation procedures for high-stakes content involving public figures or potential fraud,” Chen said, adding that policymakers should focus on establishing clear accountability frameworks and imposing disclosure requirements.
“The rapid democratization of these capabilities means that the response frameworks developed today will be widely tested within months, not years,” Chen said.