Outdoor Digital Display

Can AI Replace After Effects?

Can you take an Illustrator / Photoshop file, animate it with AI, and bypass AE altogether? The challenge was to create a seamless loop on a keyable background. Next, can you choreograph the movements enough to generate usable elements for a narrative? This project proved to be a case of working smarter, not harder, with AI.
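
As a rough illustration of the "seamless loop" part of the challenge, the first and last frames of a render can be compared directly; if they match, the loop point is invisible. This is a minimal sketch, assuming OpenCV is available, and the clip path and threshold are placeholders:

```python
# Rough seamless-loop check: compare the first and last frames of a clip.
# Assumes OpenCV (pip install opencv-python); "loop.mp4" is a placeholder path.
import cv2
import numpy as np

cap = cv2.VideoCapture("loop.mp4")
ok, first = cap.read()                                   # first frame
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.set(cv2.CAP_PROP_POS_FRAMES, frame_count - 1)        # seek to last frame
ok, last = cap.read()
cap.release()

# Mean absolute pixel difference; near zero means the loop point is clean.
diff = np.abs(first.astype(np.float32) - last.astype(np.float32)).mean()
print(f"mean frame difference: {diff:.2f}")
print("loop looks seamless" if diff < 5.0 else "visible seam likely")
```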

Studio Shoot

"Not a Studio Shoot" is a proof-of-concept title sequence built to push the limits of visual storytelling using AI video generation.

The brief? Create a cohesive series of character-driven moments, all grounded in one location, but with wildly different personalities.

Using the studio name "Two Ton Beast" as a creative anchor, we marked, tattooed, and branded a range of creatures, each one carrying its own emotional tone, visual style, and subtle sense of character. Some stared back. Some stood still. Some farted. Some breathed like they were thinking something they wouldn't say out loud.


The goal was to test the boundaries of AI-generated direction: can a prompt yield emotional range, nuanced behavior, and visual consistency across characters? It's a glimpse into where performance, stylization, and technology start to blur. (Sora, Kling, ChatGPT, Magnific, MidJourney)


Text to Video Prompts
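
To give a sense of how one location can host wildly different personalities, a prompt scaffold can hold the setting, lighting, and camera constant while each creature swaps in its own behavior. The sketch below is hypothetical; the creatures, wording, and field names are illustrative, not the actual prompts used:

```python
# Hypothetical prompt scaffold: location, lighting, and camera stay fixed,
# while each creature contributes its own personality and action.
BASE = (
    "A {creature} branded with the 'Two Ton Beast' mark, standing in a bare "
    "concrete photo studio, single overhead softbox, 35mm lens, locked-off camera. "
    "{behavior}"
)

characters = {
    "bison":  "It stares straight into the lens, breathing slowly.",
    "goat":   "It stands perfectly still, ears twitching once.",
    "walrus": "It exhales heavily, as if holding back a comment.",
}

prompts = [BASE.format(creature=name, behavior=action)
           for name, action in characters.items()]

for p in prompts:
    print(p, end="\n\n")
```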


New

Original

Secret Garden

Fifteen years after originally producing my Sci-Fi (SYFY) TV ident on a traditional set (with an actor, keyable backdrop, CG, and compositing), I recreated the entire sequence using AI. I refined numerous text-to-video prompts, carefully adjusting seed values, lighting, camera angles, lenses, environments, and actions to achieve a consistent, art-directed look. This was my first experiment in maintaining visual consistency and custom actions through prompt engineering. Achieving realistic hands and scissors proved challenging, often requiring multiple renders to almost reach a believable result. (Sora, Kling, ChatGPT)
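
One way to picture that seed-and-parameter refinement is a small sweep that keeps the scene description fixed and varies one control at a time, so renders can be compared side by side. This is a hedged sketch; the scene wording, seed values, and parameter names are illustrative and not tied to any particular platform's API:

```python
# Hypothetical parameter sweep: one fixed scene description, with seeds,
# lenses, and lighting varied so renders can be compared for consistency.
from itertools import product

SCENE = ("An actor trims a glowing hedge with scissors in a dark garden, "
         "keyable green backdrop, cinematic lighting")

seeds  = [1234, 1235, 1236]               # nearby seeds for re-renders
lenses = ["35mm", "85mm"]                 # camera / lens variations
lights = ["soft moonlight", "hard single key"]

render_queue = [
    {"prompt": f"{SCENE}, {lens} lens, {light}", "seed": seed}
    for seed, lens, light in product(seeds, lenses, lights)
]

for job in render_queue:
    print(job["seed"], "|", job["prompt"])
```
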
Not Le Mans 1965

The goal of this exercise was to achieve a consistent look and feel across multiple AI text-to-video platforms by crafting structured, stylistically coherent prompts. From there, I assembled the content into a clear, cohesive narrative with a defined beginning, middle, and end. (Sora, Kling, ChatGPT, Magnific)
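
Cutting the generated clips into that beginning-middle-end structure can be as simple as an ordered shot list handed to ffmpeg's concat demuxer. A minimal sketch, assuming ffmpeg is installed; the clip filenames are placeholders:

```python
# Minimal assembly sketch: write an ordered shot list and concatenate it
# with ffmpeg's concat demuxer. Filenames are placeholders for real clips.
import subprocess

shot_list = [
    "01_opening.mp4",   # beginning
    "02_race.mp4",      # middle
    "03_finish.mp4",    # end
]

with open("shots.txt", "w") as f:
    for clip in shot_list:
        f.write(f"file '{clip}'\n")

subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0",
     "-i", "shots.txt", "-c", "copy", "narrative_cut.mp4"],
    check=True,
)
```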



AI Lab
A collection of experiments and random ideas.
