Here’s how AI is transforming the post-production industry


In 2016, we wondered if artificial intelligence was about to take over the media. Our utopian view was that AI offered the chance to take the drudgery off our hands, freeing humans to find more time to be creative. The dystopian? One word: Skynet.

Five years later, and in post-production at least, our utopian theory seems to be winning. From AI-driven captioning, music scoring and video upscaling software to a new deep learning engine that can turn a 2D image into an animatable 3D model, there are many ways machines are improving post-production pipelines.

But what are the best ways AI is helping post-production teams today, and how might AI tools shape the workflows of the future? We explored AI post-production software from Adobe, Avid, Colourlab.ai, NVIDIA, EditShare, CrumplePop, Topaz Labs and more to find out.

From 700 clicks to 70

“All the current trends in AI are exciting. Some of them are exciting in a ‘standing on the edge of a cliff’ kind of way,” begins Andrew Page.

As Director of Advanced Media and Entertainment Technology at NVIDIA, Page is responsible for developing SDKs that other companies and studios can use to add AI functionality to their workflows.

For him, one of the great values of AI – especially in post – lies in its ability to automate traditionally repetitive tasks like metadata tagging, closed captioning or even rotoscoping. “These days, AI makes sure that instead of going through 700 mouse clicks a day, you only have to go through 70,” he continues. “It gives you time to do something more meaningful.”

Today, tools like EditShare’s EFS and Avid Media Composer have integrated cloud-based AI services like AWS and MediaCentral to automatically tag shots with metadata based on the objects and people detected in them. Each clip is automatically organized, so assistants and editors can quickly find the shots they need, giving them more time to focus on telling the story.
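Neither vendor publishes the details of its integration, but a minimal sketch of cloud-based shot tagging might look something like this, using AWS Rekognition’s video label detection (the bucket and file names are placeholders, and a real pipeline would use SNS notifications rather than polling):

```python
# Minimal sketch: auto-tagging a clip with detected objects via AWS Rekognition.
# Bucket, key and thresholds are placeholders; EditShare/Avid integrations will
# differ, but the flow (submit job, poll, collect labels) is typical.
import time
import boto3

rekognition = boto3.client("rekognition")

def tag_clip(bucket: str, key: str, min_confidence: float = 75.0) -> dict:
    """Return {label_name: [timestamps_ms]} for one clip stored in S3."""
    job = rekognition.start_label_detection(
        Video={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    job_id = job["JobId"]

    # Poll until the asynchronous job finishes.
    while True:
        result = rekognition.get_label_detection(JobId=job_id)
        if result["JobStatus"] != "IN_PROGRESS":
            break
        time.sleep(5)

    tags: dict = {}
    for item in result["Labels"]:
        tags.setdefault(item["Label"]["Name"], []).append(item["Timestamp"])
    return tags

# tags = tag_clip("my-footage-bucket", "dailies/scene12_take3.mp4")
```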

And if facial recognition isn’t enough, machine learning also helps organize and search clips through dialogue.

For Media Composer editors, Avid PhraseFind automatically analyzes all the clips in a project, then phonetically indexes the audible dialogue. Meanwhile, Avid ScriptSync not only indexes all of the text and audible dialogue in your project, but also syncs each source clip to its associated row in a film or TV script. Editors can then locate clips by scene number, page number, or a word or phrase search.
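Avid doesn’t publish PhraseFind’s internals, but the core idea of phonetic indexing can be illustrated with a toy example: encode each transcript word with a phonetic algorithm (Metaphone here, via the jellyfish library) so searches match on sound rather than spelling. The clip names and transcripts below are hypothetical:

```python
# Toy illustration of phonetic dialogue search (not Avid's implementation).
# Words are indexed by their Metaphone code, so "Smyth" matches "Smith".
from collections import defaultdict
import jellyfish  # pip install jellyfish

def build_index(clips: dict) -> dict:
    """Map phonetic codes to the clip names whose dialogue contains them."""
    index = defaultdict(set)
    for clip_name, transcript in clips.items():
        for word in transcript.split():
            index[jellyfish.metaphone(word)].add(clip_name)
    return index

def search(index: dict, query: str) -> set:
    """Return clips containing every query word, matched phonetically."""
    codes = [jellyfish.metaphone(w) for w in query.split()]
    hits = [index.get(code, set()) for code in codes]
    return set.intersection(*hits) if hits else set()

clips = {
    "scene01_take2": "ask mister smith about the ledger",
    "scene04_take1": "the weather in town was fine",
}
index = build_index(clips)
print(search(index, "Smyth"))  # {'scene01_take2'} despite the spelling
```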

For captioning, Premiere Pro’s recently announced Speech to Text and Auto Captions features, powered by Adobe’s Sensei machine learning technology, automatically create a video transcript, then generate captions on the timeline that reflect the rhythm of the spoken dialogue and match it to the relevant video timecode.
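Sensei’s pipeline is proprietary, but the final step – turning timed transcript segments into captions – is straightforward to show. Here is a sketch that writes SubRip (.srt) captions from (start, end, text) segments; the segment timings are hypothetical stand-ins for a speech-to-text engine’s output:

```python
# Sketch: converting timed speech-to-text segments into a SubRip (.srt) file.
def to_srt_timestamp(seconds: float) -> str:
    """Format seconds as the HH:MM:SS,mmm timestamp SubRip expects."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def write_srt(segments: list, path: str) -> None:
    """Write numbered caption blocks, one per transcript segment."""
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(segments, start=1):
            f.write(f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text}\n\n")

# Hypothetical transcript segments: (start_sec, end_sec, caption_text)
segments = [
    (0.0, 2.4, "Morning, everyone."),
    (2.6, 5.1, "Let's pick up where we left off yesterday."),
]
write_srt(segments, "captions.srt")
```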

Sensei also powers After Effects’ Content-Aware Fill, which removes boom mics or unwanted objects with just a few clicks, and the new Roto Brush 2 masking tool, which streamlines the manual rotoscoping process. For anyone who’s spent hours rotoscoping hair or blades of grass, this, like the other time-saving AI solutions we’ve described, can be life-changing.

“With the Roto Brush 2 tool, creators can select actors in a scene and place them in an entirely different environment, essentially unleashing the benefits of a green screen without actually using one,” says Byron Wijayawardena, director of strategic development, Digital Video & Audio at Adobe.

“Today’s AI no longer needs a green screen because it hasn’t learned the color green,” NVIDIA’s Page adds. “What it has learned is how to follow the outline of your sweater, or how your hair feathers against the background. It’s actually a person, character or object extractor. The more you train it, the more it learns, and that’s the power of these tools – their ability to learn. Imagine what we’ve just done for the VFX artists and roto artists of the world.”
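As an illustration of the “person extractor” idea – and emphatically not Adobe’s or NVIDIA’s actual models – a pretrained semantic-segmentation network can pull a person matte from a single frame in a few lines. The file paths here are placeholders:

```python
# Illustration of AI matting without a green screen: a pretrained DeepLabV3
# network segments the "person" class and composites it over a new background.
import numpy as np
import torch
import torchvision
from torchvision import transforms
from PIL import Image

weights = torchvision.models.segmentation.DeepLabV3_ResNet50_Weights.DEFAULT
model = torchvision.models.segmentation.deeplabv3_resnet50(weights=weights).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame.png").convert("RGB")  # placeholder path
batch = preprocess(frame).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"][0]      # [num_classes, H, W] per-pixel scores

PERSON = 15                              # "person" in the Pascal VOC label set
mask = (logits.argmax(0) == PERSON)      # boolean matte, True on the actor

# Composite the extracted actor over black (a stand-in for a new background).
rgb = np.array(frame)
rgb[~mask.numpy()] = 0
Image.fromarray(rgb).save("actor_matte.png")
```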

Pulling a rabbit out of the hat

Along with automating repetitive tasks, we’ve also seen AI post-production tools solve problems that once seemed impossible to fix without additional hardware or reshoots.

Problems such as removing background wind noise – something that’s now possible thanks to solutions like CrumplePop’s WindRemover AI.

“AI is being used to create a lot of weird stuff right now, which is fun because it’s new,” says Gabe Cheifetz, founder of CrumplePop. “But it’s not very useful, and the novelty will quickly wear off. It’s a bit like when people first got their hands on Photoshop: ‘The photo is black and white, but the rose is in colour! Unbelievable!’

“For us, it’s much more interesting to use AI to remove real obstacles. Wind noise has been a problem since the invention of the microphone. The plugin we developed, WindRemover AI, is a genuinely useful application of AI that solves this problem, and it’s available to everyone from large production companies to individual YouTubers.”
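CrumplePop hasn’t published how WindRemover AI works, but the traditional DSP baseline it improves on is easy to show: wind energy concentrates at low frequencies, so a high-pass filter removes the worst rumble – at the cost of also thinning out low voices, which is exactly the trade-off learned approaches avoid. A minimal sketch, with stand-in audio:

```python
# Traditional baseline for wind noise (not CrumplePop's AI): a high-pass
# filter that cuts the low-frequency rumble where wind energy concentrates.
import numpy as np
from scipy.signal import butter, sosfilt

def highpass_wind_cut(audio: np.ndarray, sample_rate: int,
                      cutoff_hz: float = 150.0) -> np.ndarray:
    """Apply a 4th-order Butterworth high-pass filter to mono audio."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, audio)

# Hypothetical usage with a 48 kHz mono clip loaded elsewhere:
sr = 48_000
noisy = np.random.randn(sr * 2).astype(np.float32)  # stand-in for real audio
clean = highpass_wind_cut(noisy, sr)
```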

Another perfect example is upscaling footage. Whether you’re restoring older video or just cutting lower-resolution footage together with more modern shots, upscaling can make a big difference. But traditionally it has also required extremely expensive hardware.

AI solutions like Topaz Labs’ Video Enhance AI change that. Video Enhance AI upscales footage while also using AI to rid images of artifacts like moiré, macroblocking, aliasing and other issues that can affect various lower-quality cameras and footage.
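Topaz’s models are proprietary, but the same class of technique is available in OpenCV’s contrib dnn_superres module. Here is a sketch using the pretrained ESPCN model as a stand-in (the model file must be downloaded separately from OpenCV’s model zoo, and the frame path is a placeholder):

```python
# Sketch of AI upscaling using OpenCV's dnn_superres module as a stand-in for
# Topaz Video Enhance AI. Requires opencv-contrib-python and the ESPCN_x2.pb
# model file from OpenCV's model zoo.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x2.pb")      # pretrained 2x super-resolution network
sr.setModel("espcn", 2)          # algorithm name and scale factor

frame = cv2.imread("lowres_frame.png")   # placeholder path
upscaled = sr.upsample(frame)            # learned 2x upscale
baseline = cv2.resize(frame, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

cv2.imwrite("upscaled_ai.png", upscaled)
cv2.imwrite("upscaled_bicubic.png", baseline)  # compare against naive scaling
```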

“As our processes mature, our AI will soon have the ability to hallucinate details between frames as well, creating frames where there were none before,” says Taylor Bishop, product developer at Topaz Labs. “This will effectively allow you to double or triple the frame rate of your source footage while maintaining the level of quality you expect.”
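Topaz hasn’t detailed its interpolation model, but the underlying idea can be sketched with classical optical flow: estimate per-pixel motion between two frames, then warp halfway to synthesize an in-between frame. Real tools use learned models that also hallucinate occluded detail; this crude approximation (with placeholder frame paths) just shows the principle:

```python
# Toy frame interpolation with classical optical flow (a crude stand-in for
# learned interpolation): warp frame1 halfway toward frame2.
import cv2
import numpy as np

frame1 = cv2.imread("frame_0001.png")   # placeholder paths
frame2 = cv2.imread("frame_0002.png")

gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

# Dense per-pixel motion from frame1 to frame2.
flow = cv2.calcOpticalFlowFarneback(gray1, gray2, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

h, w = gray1.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

# Approximate backward warp: sample frame1 half a motion step "upstream".
map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
midframe = cv2.remap(frame1, map_x, map_y, cv2.INTER_LINEAR)

cv2.imwrite("frame_0001_5.png", midframe)  # doubles the effective frame rate
```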

And AI shows no signs of slowing down. We’ve already seen brand-new AI research from NVIDIA called GANverse3D, which can turn a 2D image into an animatable 3D model – including mesh and textures – in seconds.

And we recently looked at a new system, Dynascore, which uses artificial intelligence to create musical scores that match your edits by breaking down a piece of music and reassembling it around small chunks of sound.

For grading, we were also impressed by Colourlab.ai – a color grading tool that uses AI to take looks from a shot or reference image and apply them consistently across an entire edit, even matching them across different cameras.
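Colourlab.ai’s matching is far more sophisticated, but the classic statistical version of the idea – Reinhard colour transfer, which matches the mean and standard deviation of each Lab channel to a reference frame – fits in a few lines. The image paths in the usage line are placeholders:

```python
# Classic Reinhard colour transfer (a simple statistical cousin of AI shot
# matching, not Colourlab.ai's method): match each Lab channel's mean and
# standard deviation to a reference frame.
import cv2
import numpy as np

def match_look(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    src = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(reference, cv2.COLOR_BGR2LAB).astype(np.float32)

    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    ref_mean, ref_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))

    # Shift and scale each channel's distribution toward the reference.
    matched = (src - src_mean) / np.maximum(src_std, 1e-6) * ref_std + ref_mean
    matched = np.clip(matched, 0, 255).astype(np.uint8)
    return cv2.cvtColor(matched, cv2.COLOR_LAB2BGR)

# graded = match_look(cv2.imread("shot_b.png"), cv2.imread("hero_shot.png"))
```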

“Ultimately, this system will work on your mobile phone,” adds Dado Valentic, founder of Colourlab.ai. “You’re going to point your camera, it’s going to recognize what’s in the shot, and you won’t have to be an exceptional cinematographer or colourist to get a professional-looking result in real time.”

What happens next?

If there’s one thing all the experts agree on, it’s that we’ve probably only scratched the surface when it comes to how AI could change these jobs in the future.

“There are so many exciting things,” adds NVIDIA’s Page. “Things like voice understanding, for example. Why couldn’t an artist say, ‘Take these imported clips and treat them like this’? That’s the long-term vision.”

The way we develop AI for editing is likely to change, too. “When I first started getting excited about AI in this field, I was frustrated that it was mostly being developed by data scientists rather than creatives,” says Valentic.

“In post-production, you can’t just say, ‘If the results aren’t good enough, we’ll just improve the data that feeds the AI algorithm,’ because everyone has a different idea of what good enough means. That’s why, in the future, we’ll see more creatives defining how AI works and what problems it’s supposed to solve.”

For EditShare CTO Stephen Tallamy, it is also crucial to examine how AI systems can introduce biases that can negatively impact diversity in the future.

“If your production process depends on an AI to index video but that AI is unable to understand an individual’s speech – due to a dialect, a disability or a lack of language support – those people may become invisible to the production,” he concludes.

“We are excited about the potential of remote production to increase diversity within the industry. We need to make sure we don’t undermine this opportunity by selecting technologies that have been trained on a narrow dataset, codifying biases that have been present for thousands of years.”
