ChatGPT Sora: from text prompts to high-def video

Micky Weis

15 years of experience in online marketing. Former CMO at Firtal Web A/S, among others. Blogger about marketing and the things I’ve experienced along the way. Follow me on LinkedIn for daily updates.

In today’s post, we’ll take a closer look at ChatGPT Sora – an AI model from OpenAI that can generate videos from text prompts.

Artificial Intelligence (AI) has undeniably taken the world by storm and established itself as a fundamental part of our shared future.

Personally, I find the growing integration of AI into our daily lives fascinating. Among these advancements is ChatGPT, which, with its regular updates and extensions, has become an essential tool in many people’s work lives.

This post will focus on one such advancement: ChatGPT Sora – the AI model that can reportedly generate videos from text.

Take a seat in the director’s chair

In February 2024, AI company OpenAI (the creators of ChatGPT) unveiled a new addition to their lineup of AI models.

This new model can reportedly generate video clips and elements based on text-based prompts.

The concept is similar to what we’ve seen with AI models for image creation, such as Midjourney and Adobe Firefly, which allow users to create or modify images through text descriptions.

From simple prompts to realistic videos

Just as ChatGPT can provide meaningful answers to a wide range of questions, OpenAI claims that Sora will be capable of creating realistic videos from various prompts.

Sora will have knowledge of real-world details such as landscape elements, realistic movement within an environment, different camera settings, video formats, and more.

I must admit, I’m genuinely impressed by OpenAI’s efforts and can’t wait to try it out when Sora is finally launched.

Sora is still in the testing phase

Since announcing ChatGPT Sora in February, OpenAI has been transparent about the process. It wasn’t an official launch, as Sora is still in the development phase.

OpenAI is working closely with safety experts to ensure the model doesn’t produce content that violates their guidelines.

They are also collaborating with designers and video creators both internally and externally to gather valuable feedback for further development.

User involvement – part of OpenAI’s DNA

As Sora is still in the testing phase, OpenAI admits there are still errors in the video results from text-based prompts.

For instance, errors may occur with more complex prompts that require precise camera settings or involve larger scene setups over extended periods.

Once Sora is officially launched, many of these issues will likely be addressed, but as with other OpenAI models, development will continue even after release, with feedback from general users playing a key role.

In my view, this is part of OpenAI’s DNA—allowing space for the public to contribute to the fine-tuning of the models and programs they release.
