How The Irishman’s Groundbreaking VFX Took Anti-Aging To the Next Level | Netflix
It was like the army.
You followed orders,
you did the right thing,
you got rewarded.
Some of the greatest films
follow characters whose journeys
span many years.
Where are we gonna go
But as filmmaking has evolved,
so, too, have the tools filmmakers use
to show characters aging and de-aging.
You’re right, Mr. Morgan!
As filmmaking techniques
have grown to include
not just makeup and costumes
but also digital technology,
so has the need for innovations
that ensure the integrity of actors’ performances
at any age shown on screen.
Together, Martin Scorsese and his production crew,
Netflix, and ILM
have teamed up on The Irishman
to push the boundaries
of this visual effects process.
The film takes place from 1949 to 2000,
and it goes back and forth in time continuously.
The problem is, by the time I was ready to make the film,
Bob De Niro and Al Pacino and Pesci
could no longer play these characters younger
and when I looked at the script, it turns out that,
you know, it means I make half the film with Bob.
Why do it?
I’m shooting Silence in Taiwan
and Pablo of ILM came up to me and said
I think I could make them look younger,
and I said I don’t know,
I can’t have the actors talking to each other
with golf balls on their faces,
it may get in the way of the actors,
and with the kind of film this is,
they need to play off each other.
I said, if you could find a way
to lessen the technical aspects of it,
that could work.
So I just kind of took a breath
and I said, you know, we’ll develop the technology.
I got on the phone with ILM and I said,
I got a project!
First thing that Danny said to me
is this is too risky,
don’t do it.
I said well, is this the way
you felt when you did Jurassic Park?
And he just went like okay, fine,
you got me there.
I’m Rob Bredow,
I’m the executive creative director
and the head of Industrial Light and Magic.
To get to partner with Netflix and take a risk
on creating that technology while we’re
making the film
is the kind of thing ILM lives for.
There have been many movies that have de-aged actors
over the years,
but in this one it’s really front and center
and part of the storytelling.
We are used to difficult projects here at ILM,
but on this one we have to take to the next level
the performance capture, the lighting acquisition,
the set scanning, because we need
way more accuracy than ever before.
The challenge of creating compelling
and believable digital humans
is really the holy grail of visual effects.
My name’s Paul Giacoppo
and I was the digital character model supervisor.
It was important to us that this process not interfere
with the acting or directing in any way.
Knowing that, we asked:
what if we could come up with a system
that actually captures the performance
and doesn’t touch anything else?
The only way to convince anybody
that this could be done is to do a test.
I said why don’t we shoot a scene that we all know
that will ground us into the ages
that we want for the movie?
In New York on that test shoot,
Bob came out and got in character,
got in front of the camera, started reenacting
the scene from Goodfellas…
You’re gonna get us all pinched,
you fucking bozo.
What’s the matter with you?
You’re gonna get us all pinched,
you fucking bozo.
What’s the matter with you?
Everyone was completely silent, just in awe
watching him reproduce the scene.
Then, all of a sudden, we heard booming laughter
and everyone got super nervous, oh my god,
someone’s in big trouble, and all of a sudden,
Marty popped up from behind the monitors
and exclaimed, it’s just like it was 25 years ago!
When you came in to Netflix and presented
that test, that first opportunity,
I remember before you walked into the meeting,
I was very doubtful, obviously,
that it could be done.
We found with the test that we did,
it really was possible.
So we immediately divided into two groups,
one that was developing the camera system
and the capture itself,
and the other one that kept working on the software.
One thing that was very important was that
Marty would not feel restricted in any way
in the way he shot this film
because of the technology.
To do that, we developed this new rig,
which is the main camera flanked by
two of what we call witness cameras,
which are infrared cameras.
And so if you were to take a look
at those infrared images,
what you would see is that there are no shadows.
Every one of the cameras is giving you
a different point of view
of that specific performance,
and the more points of view you have,
the better your 3D translation.
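The idea that extra viewpoints improve the 3D solve can be illustrated with generic multi-view geometry. This is only a hedged sketch, not ILM’s actual pipeline code; the `triangulate` function and its setup are hypothetical, but the underlying Direct Linear Transform is standard: each camera contributes two linear constraints on a 3D point, so more cameras make the least-squares solve more stable.

```python
# Hedged sketch (generic multi-view geometry, not ILM's actual code):
# triangulate a 3D point from several calibrated views via the
# Direct Linear Transform. Each camera adds two linear constraints,
# so every extra viewpoint further stabilizes the solve.
import numpy as np

def triangulate(projections, points_2d):
    """projections: list of 3x4 camera matrices; points_2d: matching (x, y) pixels."""
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        # A pixel observation (x, y) constrains the homogeneous point X
        # by x*(P[2]@X) = P[0]@X and y*(P[2]@X) = P[1]@X.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    # Homogeneous least squares: the smallest right singular vector.
    _, _, vt = np.linalg.svd(np.array(rows))
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

With only two views the system is barely overdetermined; a third witness camera adds two more rows and noticeably tightens the reconstruction.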
We shot a lot of the movie with at least two,
and sometimes three, cameras running simultaneously,
with the added complication of this technology.
We had to test different rigs, different materials,
cables, to figure out how to make a rig
with three cameras that you could operate.
We designed it to be rigid but
at the same time allow us the flexibility
to pan and tilt each of the cameras independently.
It was called the Three Headed Monster.
It didn’t look like a monster to me though,
it was kind of nice. Three sets of different videos
and I’m just trying to deal with the actors.
The biggest challenge I think was to try
to distill what their younger selves
looked like in this film.
See if I can give you a hand.
We spent about two years researching
footage for the three actors.
ILM downloaded a ton of their films so that we could
study and research them and take screen grabs
and just collect this huge library of their work.
We went through an entire library of scenes
extracted from all of their past films
across a spectrum of ages.
We realized that there was no
one stereotypical Robert De Niro,
there was no one idealized Joe Pesci.
They bring something to the performance
that changes the way they look.
So we needed to actually come up
with what their persona was
for this movie. For instance, Robert De Niro was
around his mid-30s when he was in Taxi Driver
and The Deer Hunter, two very iconic roles for him,
and even if you look at him in those movies,
he looks slightly different;
he took on a persona for each of those movies.
Keep in mind that we were not looking to re-create
younger versions of the actors,
but rather new creations that are
younger versions of the characters.
What the fuck is this?
I know it is. I know it is.
A big point of discovery for us was to learn about
how each actor’s face performed and how
they used the anatomy of expression to create
the characters that they were playing.
From working with Marty, I knew
it was going to be a very extensive and long movie,
and so we knew we were going to have 1700 shots.
Everything was about performance,
so we had to be incredibly meticulous about
preserving every detail of each of the actors.
We didn’t use our traditional animation pipeline,
so we didn’t have animators basically re-creating
the performance of the actors in the film.
My name is Douglas Moore,
I’m a layout supervisor here at ILM.
One of the very first things as part of this process
was using our technology called Medusa.
We did a Medusa capture of Robert De Niro,
Pacino, and Pesci.
What that is, is having them sit in a chair,
and they’d go through a series of expressions
that we call the FACS expressions,
and from that we would
generate a series of models that can move
from one expression to another.
The software would then solve
the way those shapes would
blend together on a particular frame.
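The blending described here resembles a standard linear blendshape model. As a hedged illustration only (the `blend` function below is hypothetical and not the actual Medusa or Flux code): the face for a given frame is the neutral mesh plus weighted offsets toward each captured expression, and the solver’s job is to find the per-frame weights.

```python
# Hedged sketch (illustrative only, not ILM's Medusa/Flux code):
# a linear blendshape model. A frame's face is the neutral mesh plus
# weighted deltas toward each captured expression shape.
import numpy as np

def blend(neutral, expressions, weights):
    """neutral: (V, 3) vertex array; expressions: list of (V, 3) target shapes;
    weights: one scalar per expression for this frame."""
    result = neutral.copy()
    for shape, w in zip(expressions, weights):
        result += w * (shape - neutral)  # add the weighted offset from neutral
    return result
```

Solving a performance then amounts to choosing, frame by frame, the weight vector whose blended mesh best matches what the cameras saw.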
The software’s called Flux, the F stands for facial
and the lux is for the lighting component of it.
With Flux what we were trying to do is
we were trying to capture
the facial performance of the actors.
We would take the images that were shot on set
and we would use our tools to analyze the performance
in order to create a 3D model of what was shot on set.
Flux is able to see how the face is shaded
and how the lighting is hitting it,
and capture the subtle nuances, a little twitch here,
a little wrinkle up in the nose, actually
generate a lot of detail directly from the plate.
Whenever our faces are moving or emoting or such,
there’s a lot going on that, as humans,
our eyes and our brains know about but we ourselves
don’t actually think about.
As your muscles move in your face and contract and relax,
you’re actually changing how blood flows
through your face, and it’s something
that we had to work into our assets
so that when they were actually talking and emoting,
there was some life to their face
that we wouldn’t have had otherwise.
We also studied many details like the pores of the skin,
the way light scatters through skin,
details like De Niro’s classic mole,
that characteristic feature, and for each actor
we tried to create the most realistic younger self
according to every piece of
photographic and filmic evidence we could find.
Not being required, as an actor, to wear
all this bulky gear and, you know, all these things
that distract from the performance,
I think that technology is a game changer
for these kinds of movies.
Everybody’s going to appreciate it.
It’s a good thing.
It becomes a matter of taking all of these different
tools in our pipeline and putting them together
to solve these facial performances and then
also take the solved facial performances
and put them on their younger selves.
The re-targeting part. Ah, damn, so difficult.
The camera is making sense out of the pixels
that it sees and the lighting and the texture
and the infrared light, and, you know,
it’s a whole bunch of calculations.
Then you take that and you make it look like
a younger person, there’s all kinds of
different things going on
there where you actually design the character,
you pick up what’s important, what is not,
you know, what you want
that behavior and likeness to be.
And that is, like I said, not a math problem,
it’s an imagination problem.
If we’d had to do all this work by hand,
there’s no way we could have done it
to the level of detail that we did,
because an animator isn’t going to get
all the details that we were able to get
from these tools.
There was not an animator on this show,
we had the Flux team and that was really it.
With this new system, we were drawing on different
disciplines of people. We started thinking,
well, what is this similar to?
It is very technical, so we pulled in creature TDs
and layout artists and effects artists that normally
do particle sim, and they all brought
something different to it.
The first reaction is like,
so what am I looking at?
There’s a lot of details that hopefully if we’ve done
our work well, will be invisible.
When you actually get them in a shot
and you see them moving and talking
and acting and it works,
it’s just an amazing feeling.
It’s a treat to know that
what we’re producing is not just
a real recreation
of the moment and the scene and the motion,
but that somebody like Martin Scorsese
can look at it and say yeah, this is what I was looking for,
this is what I was going for,
this is the young Robert De Niro that I remember.
I think it’s the first time this kind of thing is done
for this type of narrative film,
this type of character film.
I think this is gonna be another one of those things
that really does change the way films
can be approached and the way actors can play
themselves at different ages in a way that just
wasn’t possible before.
I think I would want this to be
a referendum on technology.
If after they see this movie a bunch of actors
say, you know what? I’m not wearing any markers.
Why do I have to wear markers?
Why do I have to be in an environment
that’s not conducive to my job?
That is the achievement that we want to portray.
No one has taken audiences on a journey
quite like Martin Scorsese with The Irishman,
but as with every innovation in film,
the technology is only as powerful as
the performances that push the story forward,
and this story
can be felt in every frame.