Hey, I just "created" one in under 2 hours...
See other comment. Mine is better, because I at least told the AI to map period actors into the roles, which involved actual thought. I mean, Basil Rathbone as a hero and Lou Costello as the Hulk? That's inspired casting.
Assembling the video? Dude, I could do that in an hour - and that's setting up all my grain overlays and text templates so that the NEXT video edit takes me 10 minutes. And that's still doing it manually with the skills I spent decades honing.
(Edit: Sorry, I finally watched the Star Wars video and made the erroneous assumption that the editor put some motion - panning or zooming - on the images, or some transitions. Nope - just 10 seconds of still stills. Video creation time is now two minutes or less. In fact, the majority of the time is now taking the renders into On1 Photo Raw, dragging them into the correct order, and running a batch rename operation. Once that's done I drag all the files into a media bin in Vegas, then drag the entire media bin to the Timeline. Then add a music track. Done. Silly me, assuming there'd be ANY effort taken by these lazy asshats at any single stage of the process.)
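For what it's worth, even the sort-and-rename step is scriptable. A throwaway sketch - the folder name, numbering scheme, and "render order equals story order" assumption are all mine, nothing more:

```python
# Throwaway sketch of the "sort renders and batch rename" step.
# Assumes the AI renders are PNGs in ./renders and were generated
# in the order I want them on the timeline (an assumption, not a rule).
from pathlib import Path

renders = sorted(Path("renders").glob("*.png"), key=lambda p: p.stat().st_mtime)

for i, src in enumerate(renders, start=1):
    # e.g. shot_001.png, shot_002.png, ... so the editor bins them in order
    src.rename(src.with_name(f"shot_{i:03d}.png"))
```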
AI animation? It's here. AI modeling? Already here. AI writing? Here, but still sucks. Still, actual magazines have temporarily closed their submission portals because of the recent flood of AI-written crap. AI audio editing/FX is here. AI voice is here - and your union VO artists are all carefully watching their contracts, because studios are trying to sneak in clauses allowing source recordings from job >x< to be fed to the AI so the artist doesn't need to be hired for jobs >y< and >z<.
Last July I watched an episode of "VFX and Chill" - a YouTube show sponsored by Red Giant Software where three pro VFX artists livestream tutorials and create stuff in real time. An AI was used to create a still image of a character. An AI was used to generate a depth map of that character. Another AI was used to create the base animation for the character. Yet another AI was used to create the voice. And the last AI did the lipsync animation. Total time: about an hour. No, they didn't do this on the show - they did it to make a character FOR the show, and talked about building the character late in the main episode.
https://www.youtube.com/live/a1fIhqo5Xww?feature=share
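To give a sense of how little glue each of those stages needs: the depth-map step alone is a handful of lines with an off-the-shelf model. A sketch using MiDaS as a stand-in - I have no idea which depth estimator they actually used on the show:

```python
# Sketch: single-image depth map with MiDaS (a stand-in, not necessarily what the show used).
import cv2
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").to(device).eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

# "character_still.png" is a placeholder file name
img = cv2.cvtColor(cv2.imread("character_still.png"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    pred = midas(transform(img).to(device))
    # Resize the prediction back to the input resolution
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze().cpu().numpy()

# Normalize to 0-255 and save as a grayscale depth map
depth = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("character_depth.png", depth)
```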
But the video I'm REALLY linking is at the bottom of this comment.
I wouldn't give a damn, except we're still on a millennia-old economic system of "work for tokens to buy food and shelter," and this stuff costs jobs.
Automation and computers have already taken many, many jobs throughout history, but this wave of AI is about to take a lot more.
Our classic sci-fi had the robots automating the crap physical labor, leaving us free to do the creative/intellectual jobs and/or live lives of leisure. Instead, AI is about to take the creative/intellectual jobs and leave nothing but the grunt work. (AI tools can write code, you know.)
Just saying a little Trek-style socialism would do some good.
Anyways, here's the live-plate/AI-processed comparison of Anime Paper-Rock-Scissors from Corridor Digital. If you want the explainer videos, those are on their channel. But, to sum up:
1) become a successful VFX studio that can afford machines that can train Stable Diffusion locally - right now that requires 128GB of VRAM, which is $10k in GPUs per machine. Plus the rest of the machine.
2) write a script, record the lines, get some costumes.
3) shoot your greenscreen plates. This is easier when you have a studio.
4) train Stable Diffusion on screen captures from a specific anime - note this means feeding it unlicensed, copyrighted material. Further note that multiple lawsuits will shortly set legal precedents on whether or not it's legal to do that. (Roughly what that training looks like in code is the first sketch after this list.)
5) purchase a 3D model from Megascans. This means someone with a high-def $4,000 Matterport 360-degree camera went into a real location, took a shitload (thousands) of pictures, and let a machine-learning network stitch all the pics into 3D geometry via photogrammetry. Note this is now an obsolete method: neural radiance field (NeRF) tech means this type of model can be created by shooting video on a phone and running it through a machine-learning algorithm. And the NeRF will be better quality than the photogrammetry!
6) move a virtual camera around the 3D model and take screenshots for your BG plates - note Corridor literally did a PrtScr of the viewport display, not full renders.
7) feed the live-action plates and screen caps through the AI (see the second sketch after this list).
8) Composite in DaVinci Resolve - also, use the Deflicker filter a lot to smooth out the AI's per-frame errors.
9) Sound design. Here they still used human labor.
10) get millions of views. That's fine - Corridor worked their asses off for a decade.
11) talk about how these tools "democratize" art creation while ignoring the fact that they used $10k computers to run the free software.
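Since step 4 is where the copyright fight actually lives, here's roughly what "train Stable Diffusion on screen captures" means in practice. This is a bare-bones sketch of the standard diffusers fine-tuning loop, not Corridor's actual setup - the model ID, folder name, blanket caption, and hyperparameters are all my assumptions, and a real run needs a lot more (DreamBooth/LoRA tricks, gradient accumulation, and that pile of VRAM from step 1):

```python
# Bare-bones sketch of fine-tuning Stable Diffusion on a folder of screen captures.
# Not Corridor's pipeline; model ID, paths, caption, and hyperparameters are assumptions.
from pathlib import Path

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms as T
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel

model_id = "runwayml/stable-diffusion-v1-5"   # assumed base checkpoint
device = "cuda"

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Only the UNet gets trained; the VAE and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

to_tensor = T.Compose([T.Resize(512), T.CenterCrop(512), T.ToTensor(), T.Normalize([0.5], [0.5])])
caption = "a frame in the style of the target anime"   # assumed blanket caption

for epoch in range(10):
    for path in Path("anime_screencaps").glob("*.png"):   # assumed folder of captures
        pixel_values = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

        # Encode the capture to latents and add noise at a random timestep
        latents = vae.encode(pixel_values).latent_dist.sample() * 0.18215
        noise = torch.randn_like(latents)
        t = torch.randint(0, scheduler.config.num_train_timesteps, (1,), device=device).long()
        noisy_latents = scheduler.add_noise(latents, noise, t)

        # Condition on the caption and train the UNet to predict the added noise
        ids = tokenizer(caption, padding="max_length", truncation=True,
                        max_length=tokenizer.model_max_length,
                        return_tensors="pt").input_ids.to(device)
        noise_pred = unet(noisy_latents, t, encoder_hidden_states=text_encoder(ids)[0]).sample

        loss = F.mse_loss(noise_pred, noise)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```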
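And step 7 is essentially img2img in a loop over the frames. Another sketch, using the stock diffusers pipeline - the checkpoint path, prompt, resolution, and strength are my guesses, not Corridor's published settings:

```python
# Sketch: run each live-action frame through Stable Diffusion img2img.
# Checkpoint path, prompt, and strength are assumptions, not Corridor's settings.
from pathlib import Path

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "./finetuned-anime-sd",        # the checkpoint from step 4 (assumed path)
    torch_dtype=torch.float16,
).to("cuda")

out_dir = Path("stylized")
out_dir.mkdir(exist_ok=True)

for frame_path in sorted(Path("live_plates").glob("*.png")):
    frame = Image.open(frame_path).convert("RGB").resize((768, 512))

    # strength controls how far from the live plate the AI may drift;
    # a fixed seed per frame helps (a little) with frame-to-frame flicker
    generator = torch.Generator("cuda").manual_seed(42)
    result = pipe(
        prompt="anime style, two men playing rock paper scissors",
        image=frame,
        strength=0.5,
        guidance_scale=7.5,
        generator=generator,
    ).images[0]

    result.save(out_dir / frame_path.name)
```

Even with a fixed seed you still get per-frame shimmer, which is exactly what the Deflicker pass in step 8 is there to hide.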
So, video link...
https://youtu.be/ljBSmQdL_Ow
Yours,
IronMike
02-Mar-2023