PIERO.
AI & VFX · 8 min read

How a VFX artist uses AI in 2026: my daily workflow

This isn’t theory. It’s what I do every day. Where AI enters my work, where it doesn’t, and why twenty years of post-production are the real competitive advantage in the age of artificial intelligence.


The wrong question and the right one

“Will AI replace VFX artists?” It’s the question I’ve been hearing for two years. The short answer is no. The long answer is that the question itself is wrong. The right one is: how does a VFX artist’s work change when generative AI tools are available?

I use Runway, Veo, Kling and other AI video production tools every day. Not as an experiment — as part of my real production workflow, for real clients, with real deadlines. Here’s what I’ve learned.

Where AI enters my work

Pre-visualization and concept. Before AI, exploring ten creative directions for a scene meant days of work. Today I generate variants in hours. For Doppelganger — a campaign for a creative grant — I used AI to generate the project’s entire visual base, then refined everything in post-production. The full case study shows the process.

Environmental element generation. Skies, textures, backgrounds, organic elements. AI produces excellent base material that I then integrate into scenes with the same compositing techniques I’ve used for twenty years. The difference is that I used to spend hours searching stock footage or painting matte paintings — now I have a base in minutes.
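To make "integrating a generated element into a scene" concrete: at its core, compositing rests on the standard "over" operator, which blends a foreground element into a background plate using an alpha matte. This is a minimal illustrative sketch in NumPy, not the author's actual pipeline (real compositing happens in dedicated tools, with premultiplication, color management and many more layers); the function name and toy frames are my own.

```python
import numpy as np

def alpha_over(fg: np.ndarray, alpha: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Blend a foreground over a background with the standard 'over' operator.

    fg, bg: float RGB images of shape (H, W, 3), values in [0, 1].
    alpha:  float matte of shape (H, W), 1.0 = fully foreground.
    """
    a = alpha[..., None]  # add a channel axis so alpha broadcasts over RGB
    return fg * a + bg * (1.0 - a)

# Toy 2x2 frames: white AI-generated element over a black plate, 50% matte.
bg = np.zeros((2, 2, 3))
fg = np.ones((2, 2, 3))
alpha = np.full((2, 2), 0.5)
out = alpha_over(fg, alpha, bg)  # every pixel ends up mid-gray (0.5)
```

The same one-line operator, applied per layer, is what "multilayer compositing" chains together; the craft is in producing clean mattes and matching light, not in the math itself.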

Fully AI-generated content. For Roche I created a Christmas jingle entirely with generative AI — video and audio. It’s the kind of project that would previously have required a significant budget across production, animation and post. With AI, cost drops dramatically while maintaining a professional result. See the result.

Rapid prototyping. A client wants to understand if an idea works before investing in production? I generate a visual concept in AI in a few hours. If it works, we proceed with full production. If not, we’ve saved weeks and thousands of euros.

Where AI doesn’t enter (yet)

Live footage integration. Need to insert a 3D element into real footage with camera movement? That requires camera tracking, lighting match, multilayer compositing. AI doesn’t do this with the precision needed for a professional product.

Long sequence coherence. AI struggles to maintain visual coherence across consecutive shots. Same character, same light, same environment — frame after frame for thirty seconds or more. For this, traditional tools and the eye of someone who knows what to look for are still essential.

Pixel-level control. Luxury brand commercials, cinema films — every frame must be perfect. AI produces subtle artifacts that a distracted audience won’t notice, but a creative director will. When absolute perfection is needed, expert hands and precise tools are required.

Creative direction. AI generates images, it doesn’t tell stories. The ability to build a visual narrative, to guide the viewer’s eye, to create emotion through editing and rhythm — this remains deeply human.

The real competitive advantage

Anyone can generate a video with AI. The barrier to entry is nearly zero. But here’s what happens in practice: 90% of people using these tools can’t judge if the result is good. They can’t recognize artifacts. They don’t know how to refine output. They don’t know how to integrate it into a professional workflow.

Twenty years of post-production have given me something AI can’t replicate: the eye. I can look at a generated frame and understand in a second whether it works, what needs correcting, how to integrate it with the rest of the project. I know when AI output is sufficient and when manual work is needed. I know how to combine AI and traditional tools in the same project without the transition being noticeable.

This is the difference between a “video made with AI” and a professional video that uses AI as a tool. It’s the same difference as between someone who owns a camera and a photographer.

How it will change in the coming years

The tools improve every month. Runway Gen-4, Veo, Kling — every release closes gaps that seemed insurmountable six months earlier. Temporal coherence improves, camera control improves, output quality rises.

But the principle doesn’t change: someone who knows what to do with these tools will always be needed. Someone with the experience to judge, direct, refine. The market isn’t looking for “someone who can use Runway” — it’s looking for someone who can produce a professional result using all available tools, AI included.

That’s where I’ve positioned myself: not as an “AI artist” but as a professional with twenty years of experience who integrated AI into his workflow before others did. And that’s exactly the profile the market wants.

Have a project in mind?

If this article gave you useful ideas and you want to understand how to apply them to your project, tell me what you need.