LOOKS SO REAL

The exciting possibilities of OpenAI’s text-to-video AI model, Sora.


Just a year ago, an AI-generated video featuring a grotesquely deformed approximation of Will Smith enthusiastically shovelling spaghetti into his mouth made waves across the internet. Viewers had a good laugh at the horrific yet hilarious production quality. At the time, AI-generated video was largely confined to that level of novelty and amusement, serving as an amusing diversion rather than a serious artistic endeavour.

That has changed. Just a year later, OpenAI released Sora, a generative AI model that builds video footage out of text prompts. Much like its predecessor DALL-E, which adeptly crafts images from textual instructions, Sora takes the concept a step further by venturing into the realm of moving pictures. With a brief description or an image as inspiration, Sora can generate cinematic scenes in stunning 1080p resolution, complete with diverse characters and dynamic motion. From a simple phrase like “a cosy cafe on a rainy day” to more elaborate scenarios such as “a futuristic metropolis with flying cars and neon-lit skyscrapers,” Sora can create a myriad of visual narratives with astonishing accuracy, detail and even apparent believability. Well, mostly.
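
For readers curious what prompt-driven generation looks like in practice, below is a minimal Python sketch using OpenAI’s publicly available Images API for DALL-E, the model named above as Sora’s predecessor. Sora itself has no public API at the time of writing, so the video-style call mentioned in the comments is purely hypothetical.

# A minimal sketch of prompt-driven media generation, using the publicly
# available OpenAI Images API (DALL-E) as the closest analogue to Sora.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Generate a still image from the same kind of prompt Sora accepts.
result = client.images.generate(
    model="dall-e-3",
    prompt="a cosy cafe on a rainy day",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image

# A Sora-style service might one day extend this pattern to video
# (e.g. something like client.videos.generate(...)), but no such public
# endpoint exists at the time of writing; that call is hypothetical.

The point of the sketch is simply that the interface is a sentence, not a film set: the same short prompt that drives an image model is what Sora turns into moving footage.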

Upon its initial release, Sora managed to turn the heads of enthusiasts and sceptics alike. Sam Altman, OpenAI’s chief executive officer, took to social media to showcase Sora’s capabilities by fulfilling user requests and tweeting out the astounding results. From soaring landscapes to intricate character interactions, Sora’s creations left viewers amazed, offering a glimpse into the future of video production. If you’re wondering how you can get your hands on the model, you can’t, at least not just yet. Aside from its developers, Sora is currently available only to a select group of testers known as “red teamers,” who are rigorously evaluating its capabilities and limitations.

As with many generative AI models, Sora is not without its flaws. At first glance, the generated videos are nothing short of mind-blowing. Sora understands not only the semantics of the prompts but also the nuances of visual storytelling, incorporating reflections, textures, and even physics simulations to craft immersive visual experiences. However, pixel peeping frequently reveals inconsistencies that betray the footage’s synthetic origins. AI weirdness can creep into many clips: cars driving in one direction, only to suddenly reverse, or a person walking the wrong way on a treadmill. Sora still struggles to accurately simulate the physics of a complex scene, and may fail to understand specific instances of cause and effect in specific contexts.

Sora ushers in a transformative era for industries reliant on visual content, particularly stock footage. Historically, acquiring customised footage for specialised projects required exorbitant costs, extensive planning, and significant resources. Take, for instance, the scenario of capturing footage of a stroll through an ancient city in the 1900s. Such a production would necessitate meticulous set construction, hiring of actors, costume design, elaborate shooting schedules, and painstaking post-production editing. Now? It’s a prompt (or a few) away.

Sora is going to empower users to realise their creative visions with unparalleled ease and affordability. This democratisation of video production not only benefits content creators but also unlocks a wealth of opportunities for virtually every industry. Stock footage users, in particular, stand to gain immensely from Sora’s capabilities. With Sora, users have access to what is functionally a vast repository of high-quality, tailor-made footage to suit their specific needs. This largely eliminates the need for costly productions and streamlines the content creation process.

However, this also raises important questions about the future of traditional roles in videography and content creation and other ethical and societal concerns. While the technological advancements facilitated by Sora offer unparalleled opportunities for creativity and efficiency, they also serve as a double-edged sword by amplifying the potential for misinformation, manipulation, and exploitation.

One of the most pressing concerns surrounding Sora’s capabilities is its potential impact during sensitive periods, such as state elections. The ability to seamlessly fabricate realistic video content from text prompts could be exploited to propagate false narratives, manipulate public perception, and undermine the integrity of democratic processes. Moreover, combining Sora with existing deepfake technology could produce convincing fabricated footage of real people, including non-consensual pornography. Deepfakes already have the potential to inflict irreparable harm on individuals and society at large, and Sora can blur the lines between reality and fiction even further, compounding the potential for technological abuse. All of this undermines trust in the truthfulness of digital media and makes the ongoing war against misinformation immeasurably more difficult.

The unauthorised use of individuals’ likenesses in deepfake videos not only violates personal privacy, consent and intellectual property rights but also has far-reaching implications for reputation management and identity protection. The unauthorised reproduction of copyrighted material also poses significant challenges for content creators and rights-holders, potentially depriving them of rightful compensation and recognition for their work.

Mira Murati, OpenAI’s Chief Technology Officer, has hinted at exciting updates for Sora, suggesting that audio integration is on the horizon, along with public access. This reflects ongoing efforts to fine-tune the model, and the addition of sound promises to take the Sora experience to new heights. However, the acknowledgement that generated content will need editing highlights the challenges of AI-generated media. Despite impressive advancements, these systems aren’t flawless. With the inconsistencies mentioned earlier popping up from time to time, letting users edit the output will be crucial to ensuring quality. If Sora can offer that flexibility in editing, it would empower creators to cater to diverse preferences, ensuring a seamless experience for all involved.

Looking ahead, the future of AI-generated content raises intriguing questions about creativity and innovation in machine-generated media. Can Sora and its counterparts exceed human creativity? Are they capable of capturing the essence of human expression and storytelling? These existential queries prompt us to ponder the evolving relationship between technology and creativity, pushing the boundaries of what’s achievable in artificial intelligence.

This is just the beginning. In this theatre of innovation, where pixels paint narratives and algorithms weave tales, we find ourselves at the crossroads of imagination and computation.
