THE BEST GENERATED FILMS
MADE AND CURATED BY HUMANS
- A Beginner's Guide to Generative Film - Written by a human
We get it — this is a scary, unknown new threshold that entertainment is crossing with Artificial Intelligence. Fears abound: losing one's job to AI, the legality of it all, being asked to complete more work in less time, the feared soullessness of new work, the democratization of tools, new workflows and software to learn… it's a lot. But I'm here to assure you that no matter how AI affects entertainment and its creation, a few things are not going to change. Coming up with a good, engaging story and characters will be as challenging as it's always been. The development phase of filmmaking will remain similar, now with additional tools to aid us. Post production will also stay largely intact. And the time, energy and creativity needed to realize a story with visuals and audio will still be taxing, requiring thought, originality and time-intensive work.

The flip side? Budgets can come down drastically. Teams can be smaller. Production gets merged with post production. Many elements of the process can be done remotely. In addition, generative filmmaking is a powerful concepting tool.

Will generated films eventually look identical to live-action? It's possible. But the happy accidents and in-person dynamics that add to the magic of live-action work could be less present. The many nuances captured from real actor performances on camera may be hard to replicate with AI. Considering the vast range of visual styles available with AI tools, such as anime and 'Pixar' style, we'll see a lot of animation projects brought to life. Others will be more photoreal and lifelike — a new hybrid entertainment. So we can look at generated films as a new format that shares similarities with methods we already know well, along with its own new rules, quirks and workflows. We will also see live-action scenes combined with and supplemented by scenes generated using AI.

My point? The tech will never overtake good old-fashioned storytelling, filmmaking and acting. Those skills will remain a necessity for creating top-quality work. The great news is that anything is possible with AI. I mean anything. Any film or show you have written or want to produce, any commercial you want to make or documentary you want to create is now entirely in your power to realize, and on a much smaller budget with fewer people involved. Let's explore.

Development & Pitching

Parts of the development and pitch process being affected by AI tools include the creation of treatments, pitch decks, mood boards, sizzle reels (or fake trailers), and concept art. We are already seeing AI-generated imagery included in treatments for commercials, music videos, film and TV. Since these are development documents and not final products, more leeway might be given to generated imagery and video. But that doesn't mean anyone wielding these tools can automatically create great material for project development. Generated imagery and video can also be used to cut together a sizzle reel or trailer that further brings a project to life and visualizes it for potential investors. And yes, concept art is being affected as well (sorry, Artists). While there is no substitute for human-made concept art, new AI tools enable faster output with a huge number of variations and tweaks available to serve a director's notes.
Pre-visualization

Artificial Intelligence tools can be utilized for various previz and other pre-production needs, including storyboarding, set and art direction concepting, animatics and camera tests. Storyboarding will be thoroughly improved and streamlined with AI tools, as will the entire scene visualization process. From mapping out a scene in a virtual version of a real location, to figuring out optimal angles and camera movement with your DP, everything from framing to lens choice can be simulated and iterated at scale. The AI tools explored in this document can bring new possibilities and key advantages before you ever get to a location or set.

Documentary Filmmaking

Top-level docu-series are already being supplemented with AI sequences for historical re-creation, as well as public figure recreation. All the tools now exist to recreate events without having to go out and film them or animate them with traditional methods (including costly, period-appropriate elements). Anyone with intermediate computer knowledge and design skills now has the ability to recreate anything that ever did (or did not) happen. If you have photographs, video or the voice of a real person (dead or alive), there are tools available now that can be fed a little bit of these assets and give you back a lot. Feed in two minutes of a historical figure talking, and you can generate any combination of words and sentences in their voice. Feed in a few photographs of a person, then generate various expressions and angles. This still takes time and work, and all legal precautions must be considered.

Final Content

This brings us to the question: will we be using AI to make final content in its entirety? If things keep going the way they look to be going… yes, this is going to be a thing. You can't underestimate how far good editing, sound design, music and voice acting can take the visuals you create with generative AI. If it's proven impossible for you to raise that 15 million dollar budget to make your dream movie, you can now take the script you own and feel most passionate about, learn the tools, put in the work, pay your helpers and finish a complete film. Netflix turned down your series pitch? Make the pilot episode with the help of AI to tell the story. In the end, it's the best stories with the most memorable characters that will be enjoyed the most, no matter the AI techniques used. To get the best performances, you're still working with human voice talent. To get the best writing, human writers. To get the best editing, human editors. Put it this way — an AI film without some humans at the wheel can feel soulless. If you use AI to replace every single human talent involved with your film, it may not turn out to be that great. Humans are the ones with the VISION and INSPIRATION. The magic of human touch cannot be replaced by AI, just as celluloid still remains way cooler than digital and pictures painted by hand just hit you different than ones created in a computer. Happy accidents and the human condition are so entwined in our DNA that they serve as way-markers for verified human storytelling, which artificial intelligence can never 100% accurately replace (but it'll get damn close). Want to write a soulless script and sell your soul at the same time? Have ChatGPT write your entire screenplay. If someone's AI film isn't verifiably written by a human, there can and will be systems in place to let you know before you watch.
Voice, Dialogue and Audio

AI tools are advancing how we create and synthesize voices and sound effects. While utilizing real voice actors for all dialogue may yield the most nuanced reads, there are now options to synthesize human voices (with their permission) to say whatever you want from just a snippet of their dialogue. Having a person walk and talk, or appear as a talking head, using just a reference image and reference voice is already possible. Software like Pika and Runway is integrating new AI tools that can generate sound effects based on what's happening in the shot. These tools can save time, though control and selection can be limited. HeyGen, Runway and D-ID are integrating new ways of creating mouth, eye and head movement that connect with supplied dialogue. While these mouth movement tools aren't yet perfect, they are developing rapidly.

Software

SORA // the new benchmark from OpenAI, not available to the public as of this publication
MIDJOURNEY // construct the image
HEYGEN // dialogue and editing
RUNWAY // animation, edit, dialogue
STABLE DIFFUSION // imagery and video, continuity
ELEVEN LABS // dialogue
D-ID // dialogue
LEONARDO // construct the image
PIKA // animation, sound
TOPAZ LABS // upscale
INSIGHT FACE SWAP // continuity
ADOBE PRODUCTS // fix up everything and edit it all together
... and about one thousand more

Learning Curve and Workflow

Every creator is crafting their own unique workflow in this space, but there are many similarities and overlapping strategies. Most of the tools and software listed above have relatively low learning curves and are very user-friendly. The workflow usually comes down to: concepting and gathering reference material > mass image iteration with AI > selections > animation > combining all assets and sound > cleaning it all up. Standard editing software such as Premiere or DaVinci can be utilized, as well as new editing platforms included in software such as Runway. Color correction, audio correction, dialogue generation and upscaling can all be handled within a small number of software applications that now include AI tools for quicker results. This makes it very possible for SMALL TEAMS and INDIVIDUALS to complete "large scale projects" on relatively small budgets, with timelines akin to animation.

The Future

Barring an AI-fueled apocalypse and the subsequent collapse of civilization, we will see new opportunities within the worlds of art and entertainment, where people previously unable to visualize their stories and ideas are now fully enabled. A new renaissance, untethered by budgets, gatekeepers, corporations and other limitations. Much of it will be crap. There will be a lot of sifting and curating to do as new platforms spring up for showcasing these creations. An entire new business ecosystem may spawn out of that. But as the cream rises to the top and new voices are heard, top creators will be recognized by platforms, competitions, festivals and awards. While much of what we're seeing right now can be labeled as abstract or experimental, more long-form narrative content is beginning to proliferate. In the world of narrative, story and characters remain supreme, no matter the medium or production techniques.
- The 5 Person Movie Studio
Generative filmmaking is ushering us into an era where a feature film or TV pilot can be produced by a team of 1 to 5 individuals. Just as traditional major studios are ailing and in consolidation mode, AI has emerged as a completely disruptive force to be used by the people, allowing more creators to visualize larger stories with a much smaller budget and crew. As the tools evolve, the desire for good stories and dazzling visuals will remain. Content that cuts through the static and makes you feel something. Original stories that require critical thought from viewers and lead them on a journey.

Fake movie trailers, visual spectacles (what we might label as experimental film) and music videos might be considered the low-hanging fruit of early-days generative filmmaking -- they don't require much continuity, dialogue or individual scene pacing. I'd like to see the best creators using these tools to begin to tell stories that feature dialogue and continuous characters, really constructing scenes that have a beginning, middle and end within a story that has a beginning, middle and end... letting each scene play out with the pacing of a film or TV episode. That part of the process ain't as easy. Creators are waiting for the lip sync to improve, but we'll be there soon enough. And great voice acting (with talented human actors) will elevate these scenes immensely.

Traditional filmmakers -- talented directors and writers of film and TV -- are looking at most of what's coming out of the AI world right now and thinking: "Sure, they can make a few good-looking shots, but can they actually tell a longer-format story?" There's an immense ocean of difficulty between those two achievements. We're obviously at a very 'visual-heavy' moment with what this new technology is allowing people to create in their computers on a shoestring budget, and that's fine. It's dazzling and empowering and nothing to be taken for granted. But when these creators are put to the test and tasked with creating an emotionally powerful scene that involves dialogue, acting, scene editing and pacing... all of it needing to fit into the larger context of a story with continuity and sympathetic characters... well, there is the real challenge.

That's where good screenwriting comes into play. Many AI creators probably want to take this next step into longer formats right now... but they're missing the most crucial element: a superb script. A compelling screenplay can take years to conceptualize, write and revise. Today's AI visual artists may have ideas and concepts they've been toying with in their heads for years... but translating those ideas into well-structured, compelling films filled with dozens of scenes and believable dialogue is not an easy task.

Short films, which we are seeing many of, are a unique beast of a challenge in themselves. Well-made narrative shorts such as THE MYTH (ones that don't rely on experimental structures) are, in this humble human's opinion, the true showcases of this new medium's capabilities for future storytelling. THE MYTH contains great scene pacing, with an original concept that's fun and well-executed. Hats off to that team for making it in 48 hours. But the film is 2 minutes long... and when we begin to see 20-minute or 60-minute content that maintains this level of narrative quality throughout (which will most definitely need to include at least some dialogue and character interaction), that's where the true storytelling magic begins to happen.
Whether the visual style of those stories is animation, anime, hyper-realistic or abstract, viewers will still be thinking about one thing: do I care what happens next?