- cross-posted to:
- technology@lemmy.world
Tyler Perry Puts $800M Studio Expansion On Hold After Seeing OpenAI’s Sora: “Jobs Are Going to Be Lost”::Tyler Perry is raising the alarm about the impact of OpenAI’s Sora on Hollywood.
Jobs are going to be lost
And it is now a self-fulfilling prophecy because a lot of people just lost out on working on his studio expansion project.
This announcement is performative. Probably just a “good” reason to back out of something they didn’t want to do anymore, anyway. Otherwise there’d already be paper on the deal and they wouldn’t be backing out.
That’s true. If he really wanted to expand, he would just expand into AI filmmaking too. This sounds like a money problem, not a technology issue.
Yep - those jobs are going to be lost because of Tyler Perry seeing an opportunity to reduce production costs, not because of AI. Nobody is forcing him to use AI. It’s just classic corporate greed where he is happy to trade [other people’s] jobs for more profit. Sure, AI is enabling him to do this, but it’s greed that’s guiding his decision making here.
To be fair, most Tyler Perry movies could be replaced by minute long clips of stock footage with some filters stuck on them.
Sora can sometimes do 1 minute clips that mostly look ok as long as you don’t pay too close attention. We are incredibly far away from coherent, feature-length narratives and even those aren’t likely to be thematically interesting or engaging.
Yep. I watched their demo clips, and the “good” ones are full of errors, have lots of thematically incoherent content, and - this is the biggie - can’t be fixed.
Say you’re a 3D animator and build an animation with thousands of different assets and individual, alterable elements. Your editor comes to you and says, “This furry guy over here is looking in the wrong direction, he should be looking at the kangaroo king over there, but it looks like he’s just glaring at his own hand.”
So you just fix it. You go in, tweak the furry guy’s animation, and now he’s looking in the right direction.
Now say you made that animation with Sora. You have no manipulatable assets, just a set of generated frames that made the furry guy look in the wrong direction.
So you fire up Sora and try to fine-tune its instructions, and it generates a completely new animation that shares none of the elements of the previous one, and has all sorts of new, similarly unfixable errors.
If I use an AI assistant while coding, I can correct its coding errors. But you can’t just “correct” frames of video it has created. If you try, you’re looking at painstakingly hand-painting every frame where there’s an error. You’ll spend more time trying to fix an AI-generated animation that’s 90% good and 10% wrong than you will just doing the animation with 3D assets from scratch.
Now say you made that animation with Sora. You have no manipulatable assets, just a set of generated frames that made the furry guy look in the wrong direction.
“Sora, regenerate $Scene153 with $Character looking at $OtherCharacter. Same Style.”
Or “Sora, regenerate $Scene153 from time mark X to time mark Y with $Character looking at $OtherCharacter. Same Style”.
It’s a new model; you won’t work with frames anymore, you’ll work with scenes, and when the tools get a bit smarter you’ll be working with scene layers.
“Sora, regenerate $Scene153 with $Character in Layer1 looking at $OtherCharacter in Layer2. Same Style, both layers.”
I give it 36 months or less before that’s the norm.
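Something like this, maybe. A purely hypothetical sketch of what that scenes-and-layers workflow could look like as an API; every name here is made up just to illustrate the idea, nothing like this exists today:

```python
# Hypothetical sketch only: an imaginary client and endpoint, invented for illustration.
def regenerate_scene(client, scene_id: str, instruction: str,
                     layer: str | None = None,
                     time_range: tuple[float, float] | None = None):
    """Ask an imaginary video model to redo one scene (or one layer of it),
    keeping the style and everything outside the requested change untouched."""
    return client.generate(
        scene=scene_id,              # e.g. "Scene153"
        layer=layer,                 # e.g. "Layer1", or None for the whole scene
        time_range=time_range,       # e.g. (12.0, 15.5) seconds, or None for all of it
        prompt=instruction,          # e.g. "Character looks at OtherCharacter"
        style_reference="previous",  # reuse the previous generation's style
    )

# The prompts above, expressed as calls:
# regenerate_scene(client, "Scene153", "Character looks at OtherCharacter")
# regenerate_scene(client, "Scene153",
#                  "Character in Layer1 looks at OtherCharacter in Layer2",
#                  layer="Layer1", time_range=(12.0, 15.5))
```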
I agree, I don’t think people realise how early into this tech we are at the moment. There are going to be huge leaps over the next few years.
Or just “take the frame and replace the head with the same face pointed a different way”.
This seems like a fundamental misunderstanding of how generative AI works. To accomplish what you’re describing you’d need:
- An instance of generative AI running for each asset.
- An enclosing instance of generative AI running for each scene.
- A means for each AI instance to discard its own model and recreate exactly the same asset, tweaked in precisely the manner requested, then immediately reincorporate that model for subsequent generation.
- A coordinating AI instance to keep it all working together, performing actions such as mediating asset collisions.
The whole system would need to be able to rewind to specific trouble spots, correct them, and still generate everything that comes after unchanged. We’re talking orders of magnitude more complexity and difficulty.
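Roughly, that description amounts to something like this. An entirely hypothetical structure, every class and method name invented for illustration; no real system works this way yet:

```python
# Entirely hypothetical sketch of the architecture described above.

class AssetGenerator:
    """One generative instance per asset, able to regenerate *only* its own asset."""
    def __init__(self, asset_id: str):
        self.asset_id = asset_id
        self.state = None  # whatever is needed to reproduce the asset exactly

    def regenerate(self, tweak: str):
        # Must reproduce the same asset, changed only as requested,
        # and keep that change for everything generated afterwards.
        ...

class SceneGenerator:
    """An enclosing instance per scene, composing the assets it contains."""
    def __init__(self, assets: list[AssetGenerator]):
        self.assets = assets

    def render(self, start_frame: int, end_frame: int):
        ...

class Coordinator:
    """Keeps it all working together: mediates asset collisions, rewinds to a
    trouble spot, fixes it, and re-renders everything after it unchanged."""
    def __init__(self, scenes: list[SceneGenerator]):
        self.scenes = scenes

    def fix(self, scene_idx: int, asset_id: str, tweak: str):
        scene = self.scenes[scene_idx]
        for asset in scene.assets:
            if asset.asset_id == asset_id:
                asset.regenerate(tweak)
        # ...then re-render this scene and everything downstream without new changes.
```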
And in the meantime, artists creating 3D assets the regular way would suddenly look a lot less expensive and a lot less difficult.
If all you have is a hammer, everything looks like a nail. Right now, generative AI is everyone’s really attractive hammer. But I don’t see it working here in 36 months. Or 48. Or even 60.
The first 90% is easy. The last 10% is really fucking hard.
I’d imagine eventually we’re gonna get something like inpainting.
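For anyone unfamiliar, inpainting means keeping the pixels you like and regenerating only a masked region. A minimal conceptual sketch, where the `model.inpaint` call is imaginary and stands in for whatever video model you’d use:

```python
# Conceptual sketch of inpainting-style correction: regenerate only the masked
# region of a frame, leave everything else untouched. `model.inpaint` is imaginary.
import numpy as np

def fix_region(frame: np.ndarray, mask: np.ndarray, prompt: str, model) -> np.ndarray:
    """frame: H x W x 3 image; mask: H x W boolean array marking the broken region."""
    regenerated = model.inpaint(frame, mask, prompt)  # hypothetical model call
    # Composite: pixels outside the mask stay exactly as they were.
    return np.where(mask[..., None], regenerated, frame)
```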
And ironically when we do get to the point where an AI can string together a semi-coherent narrative, the first things it’ll start to produce will probably be exactly the sort of mid-level dross that Tyler Perry likes to make.
This won’t get used for key narrative content. This will be used for a lot of b-roll and the quick cuts that audiences don’t examine closely. A lot of a movie is content like that, and since the dawn of the effects industry, editors and effects artists have known that they can get away with janky stuff in certain places. The audience won’t know it’s there because they’re not watching the film frame by frame.
It seems pretty good with backgrounds though, and it’s only going to get better. I think the threats of job losses are a lot more imminent than people are ready to admit.
Great use of passive voice sir
People will be laid off
Spoils will be enjoyed
This great economy… it will endure
The working class will survive!
It will be really interesting to see how long it actually takes before this can be done accurately enough to execute a director’s vision, and at high enough quality to actually make a film from. It could be anything from a few months to decades; it’s so hard to know how much we are actually able to control these models to get them to do what we really want.
It will be really interesting to see how long it actually takes …
We went from Will Smith eating spaghetti to Sora in just 12 months. Whatever the time period turns out to be, I think it’s safe to say it will be far shorter than most people would like.
You’re probably right, but so far no one has demonstrated an LLM that can be controlled to make these kinds of tight adjustments, and it feels like it might be something the technology is just never able to do. We might have to wait for a whole new generation of technology for this.
We might have to wait for a whole new generation of technology for this.
We may, but this tech is advancing at a pace we haven’t experienced since the ’90s. If you weren’t around back then, I can tell you that “next generation” literally happened every 12 months or less.
I was working tech in the Bay Area in the '90s, I remember it well.
I mean, most new technologies have a period of explosive growth followed by a slowdown to only gradual gains. So it really just depends on whether we’re near the end of that development or if there are still more explosive changes to come.
This is the best summary I could come up with:
Over the past four years, Tyler Perry had been planning an $800 million expansion of his studio in Atlanta, which would have added 12 soundstages to the 330-acre property.
Now, however, those ambitions are on hold — thanks to the rapid developments he’s seeing in the realm of artificial intelligence, including OpenAI’s text-to-video model Sora, which debuted Feb. 15 and stunned observers with its cinematic video outputs.
As a business owner, Perry sees the opportunity in these developments, but as an employer, fellow actor and filmmaker, he also wants to raise the alarm.
In an interview between shoots on Thursday, Perry explained his concerns about the technology’s impact on labor and why he wants the industry to come together to tackle AI: “There’s got to be some sort of regulations in order to protect us.
After seeing Sora, what are your current feelings about how fast AI technology is moving and how it might affect entertainment in the near term?
I was in the middle of, and have been planning for the last four years, about an $800 million expansion at the studio, which would’ve increased the backlot a tremendous size, we were adding 12 more soundstages.
The original article contains 1,226 words, the summary contains 198 words. Saved 84%. I’m a bot and I’m open source!
He’s right. This is going to be a hugely disruptive technology.
Does this hopefully mean he’s throwing in the towel? Orrr, is he just going to save money by being an early adopter? (screaming noises)
This is a Hollywood killer and I’m all for it. I’m tired of them dumping millions into the same drab script while hogging all the profits.
Every job lost is a potential new indie company, fuck Hollywood.
The money will be dumped into AI
The new scripts will be derivative mashups of the old scripts.
An independent will create a successful film
The new scripts will be derivative mashups of that script.
It’s a step up from what we have right now which is basically no independents and every script being a derivative mashup.