AI video is going mainstream
It seems like every week we could write a “next big thing in AI is here” story, but this one really could change everything about content and how we experience the internet.
We all know that ChatGPT can produce text responses, and through its integration with DALL-E it can also create images. Now OpenAI has officially launched a video generator called Sora that also works directly from your ChatGPT interface.
What’s most surprising about Sora is not just that it can produce nearly anything you can imagine as a short video, but how incredibly realistic those videos are.
Super high quality video isn’t limited to OpenAI, so this week we’re bringing you a breakdown of two of the most accessible tools, and how you can start experimenting with them right away:
OpenAI’s Sora
This is the one everyone’s been talking about this week. OpenAI first released a preview of Sora nearly a year ago, but now we can open it up and see what it can actually do.
Sora can generate videos up to 1080p resolution; up to 20 seconds long; and in widescreen, vertical, or square aspect ratios. One of the most interesting features is that you’re not limited to text prompts: you can also bring your own assets to extend, remix, and blend, which means you can bring still images to life or extend and edit existing videos.
Note: Right now it is pretty strict about generating video from files that include people or any copyrighted materials.
Maybe my favourite feature is something they call Storyboard. You can think of it like a planning tool where you lay out multiple short videos or frames next to each other, and not only will Sora stitch them together to create one longer video, it will transition between them, effectively morphing one video scene into the next.
Meta AI
Probably the most underrated AI tool is the one offered by Meta. They introduced their video generator back in October, and while it’s not quite as visually powerful as Sora today, it’s super easy to use and comes with a lot of additional features.
Meta AI operates a lot like ChatGPT, with a simple text prompt window where you can tell it to generate text, images, or video.
The biggest difference with Meta’s tool is how much it’s embedded into all of its social platforms. It appears pretty much anywhere that you interact with Instagram, Facebook, Messenger, or WhatsApp, and you can post or share your creations directly from Meta AI to any of their apps.
So What?
Should we fire our creative teams and go all-in on AI-generated video? Of course not. However, there are a few significant implications we should all be aware of:
- You may remember that a few weeks back we introduced you to the term “AI Slop.” That’s about to get much worse. In a world where anyone can create video content about anything for free, we are bound to have our newsfeeds inundated with a combination of garbage, spam, and misinformation. That’s why I believe it’s so important that we all educate ourselves about AI content, and learn what it’s capable of so we can identify it when we see it.
- Creative planning just got a whole lot easier. Imagine being able to turn concepts and ideas into video formats faster than we could write out a brief. Now any idea we have can be turned into a rough video draft with just a few keystrokes.
- Humanized content is going to be more valuable than ever. When we’re faced with a flood of AI-generated content, it will be the authentic, relatable content that grabs people’s attention. Smart brands will use AI tools to generate ideas, plan content, and even support their ideas with B-Roll and editing tools, but real human connection will become a premium product that AI can’t replicate (yet).