
S.O.R.A. - Soon OpenAI Replaces All

Updated: May 8, 2024

On February 15th, OpenAI revealed its newest model, Sora. Until then, ChatGPT, DALL-E, Firefly, and every other generative AI platform I had seen gave me an initial sense of excitement; it was only after reflecting, or hearing others’ views on the models, that I settled into a more level-headed attitude toward them. When I saw Sora, I was immediately filled with dread.


Sora

Sora is Japanese for “sky”; OpenAI chose the name because the model “evokes the idea of limitless creative potential,” according to OpenAI Research Leads Tim Brooks and Bill Peebles. The acronym in my title just sums up my initial reaction to it. According to OpenAI’s website, Sora can currently generate videos up to a minute long from a single prompt. It still has trouble with physics, object permanence, and objects morphing in and out of each other. On close inspection, most of the videos show obvious signs of being AI-generated, but some don’t. It’s only a matter of time before most don’t. Then none.

The model is currently being tested by red teamers, people who try to get the model to break its restrictions, to ensure its safety. OpenAI’s usage policies prohibit generating extreme violence, sexual content, hateful imagery, celebrity likenesses, and the IP of others. Watermarks and C2PA metadata are also attached to videos generated by the platform, both of which can easily be removed.


US$25.6 Million Heist

The South China Morning Post reported an HK$200 million (US$25.6 million) scam carried out against the Hong Kong branch of a multinational company. An employee in the finance department received a phishing email inviting them to a group video conference. On the call were the company’s CFO and other staff, who instructed the employee to make 15 transfers via the Faster Payment System. The scam wasn’t discovered until a week later. Every image and voice of official personnel on the call had been generated with deepfake technology.

OpenAI doesn’t have a deepfake model, and this incident predates Sora’s announcement by weeks. I only bring it up to point out how AI’s ability to confuse people’s perception of reality can cause harm. This isn’t a one-off incident, either. Deepfake voice fraud, where people answer the phone to what sounds like a loved one, has been reported since mid-2023.


Replacing All

This goes beyond taking away jobs from copywriters and artists. Artificial intelligence has infiltrated our perception of the world, and it is only going to get better at generating and replacing reality. The world is about to become subjective. Anything seen through a screen potentially didn’t happen, but also could have. History can be remembered any way we want once AI can generate or edit recordings of events. It doesn’t matter whether something happened. What matters is who believes it happened.

Generative AI is accelerating the change we have seen over the years of the information age transforming into the misinformation age. How long until AI-generated videos are used to bring out the worst of human behavior?

A video shows an authoritative group abusing its power over others. Violence ensues as retaliation for the injustice. Maybe people die. It isn’t until after the dust settles, when the chain of events is reviewed, that the public finds out the inciting incident never happened. Some are angry at how easily they were duped. Others are angry that people are covering up what happened by claiming it was AI-generated. Reality becomes an opinion.


The Next Step to _________

OpenAI calls Sora “…an important milestone for achieving AGI.” It’s also a wake-up call to anyone who has been ignoring the rapid development of AI. Wherever AI is headed, it is going there fast. OpenAI admits that even with all the research and testing it does, it cannot predict all the ways people will abuse its models. If there was ever a moment to step back and evaluate the potential impacts of generative AI, this is it. Some regulation, or at the very least transparency, in AI development could help prevent absolute chaos. This pattern of unchecked progress for the sake of progress cannot go on.


HAL: It can only be attributable to human error.

HAL, Skynet, Ultron. Most imagined civilization being threatened by an AGI hell-bent on eradicating the human populace. Before overcoming that threat, we need to face the one right around the corner: we might destroy ourselves if we are not extremely cautious with this technology.

But hey, it’ll probably sort itself out. Can’t wait for Nvidia’s 5090 reveal, where it boasts the ability to incite global conflicts and push upwards of 200 fps at 4K in Skull & Bones thanks to the increased number of Tensor Cores.




I don’t want to become the cliché of a blogger whose posts are just laundry lists of grievances about a world that isn’t exactly the way they want it. I have made a conscious effort to avoid complaining and to keep an overall optimistic view of everything I talk about. I don’t know if it’s because the books I have been reading take a very critical view of technology, because AI advancements have so many people talking about doomsday, or because somebody is just pissing in my cereal, but I find my writing backlog filled with more negative takes than positive.

While I take the time to think of blog post ideas that are more glass-half-full, I am going to get the negative ones out of my backlog: this post, one about the Curio AI toy Grok, one on dark patterns, and one touching on the environmental impact of tech.




Want to chat or challenge me to a duel? 

Email Me:

No AI was used to generate text on this site in order to preserve authenticity and voice.
