
With Google’s Veo 3, a New Risk Is Emerging: You Can No Longer Tell if a Video Is Real or Not

Veo 3 signifies the end of video as a reliable form of proof. AI allows users to both create and deny realities, undermining society’s fundamental reliance on visual evidence.

Javier Lacort
Senior Writer

Adapted by: Alba Mora


In 2018, when the first deepfakes began to circulate, many concluded that this was the beginning of the end for the credibility of video as evidence. They were mistaken. Those early fakes were only a warning that the end was approaching.

Today, with Google’s Veo 3, that moment has arrived. Clips on platforms such as Reddit and X are virtually indistinguishable from real footage. There are no telltale glitches, such as six-fingered hands or unsettling faces. These videos can pass any visual scrutiny. And the ones that don’t quite meet that standard yet soon will, especially in a world where many people watch video on small, lower-resolution mobile screens.

The End of Video as Irrefutable Evidence

For decades, video has been regarded as the gold standard of evidence. The phrase “I saw it with my own eyes” carried a weight of total certainty. A single recording was often enough to topple governments or prove someone’s innocence. That foundational belief in the reliability of video is fading before our very eyes.

Veo 3 users are sharing fake news clips depicting disasters, deceased politicians, and violence that never actually occurred. These clips feature synchronized dialogue, realistic effects, and believable physics. The model even generates conversations that weren’t part of the original prompt, as if it possesses its own editorial judgment. This is AI with a narrative instinct.

The main issue isn’t just our ability to create convincing fake videos. It’s that we’re running out of ways to distinguish real videos from synthetic ones without advanced technical tools. The world is entering an era of permanent visual uncertainty, where every video raises the question, “Did this really happen?”

The Perfect Alibi for Denying Reality

For example, when a video recently surfaced showing French President Emmanuel Macron being pushed by his wife as he got off a plane, the Elysée Palace initially claimed the video wasn’t real and was produced using AI. Although Macron eventually had to admit the video was authentic, the damage had already been done. French authorities had discovered the ideal excuse.

If any video can potentially be fake, then any uncomfortable video can be dismissed simply by claiming it is artificial. AI now serves as a universal excuse. Whether it’s a politician caught in a scandal, a company violating rights, or a regime documenting repression, anyone can leverage the most effective form of reasonable doubt.

There’s no need to prove that a video is fake. It’s enough to create suspicion. In a world where falsification is technically feasible, the very possibility of fake content becomes a persuasive argument.

Ironically, a technology that allows society to create perfect fiction also enables it to deny perfectly documented reality.

Learning to Live Without Visual Certainties

Google’s safeguards are selective. While you can’t generate a video of former President Joe Biden falling down, you can create footage of natural disasters and urban violence. The company protects users from the obvious, but not from the subtler dangers.

If every video can be manipulated, what happens to a society that relies on audiovisual content to understand the world? How do we assess credibility, culpability, or legitimacy when any evidence can be fabricated in a matter of minutes?

The solution can’t be purely technical. Society needs media literacy that starts with the assumption of falsifiability. More importantly, we must accept that we have permanently lost one of our most basic tools for distinguishing fact from fiction.

Video as evidence is effectively dead, or it soon will be. We must learn to navigate a world where seeing is no longer believing.

Image | Google

Related | ChatGPT Studio Ghibli Photos: Here Are Some Free Alternatives to Create Animated Images From Your Photos
