Continuing the ongoing series of interviews on fantasy user interfaces, it’s my pleasure to welcome Clark Stanton. In this interview he talks about the chaotic nature of productions and how screen graphics fit into supporting the story, differences between working on set and in post production, the pace of episodic productions, and the potential place of generative AI in this creative field. In between these and more, Clark talks about his work on screen graphics for “Chicago Med”, “Free Guy”, “Moonfall”, “The Tomorrow War” and “The Peripheral”.
Kirill: Please tell us about yourself and the path that took you to where you are today.
Clark: Hey, I’m Clark Stanton, I’m a creative director and playback graphic supervisor at Twisted Media. I have been creating and facilitating screen graphics for a little less than 11 years. I started after school at a luxury realty company, and moved on to freelance motion design before landing my first full-time job at a small production company. I started my career in film and TV after I met Derek Frederickson (who you’ve also interviewed) at a Chicago motion artist meetup, where we became friends. A little later on, when he got particularly busy, he convinced me to drop my other motion design job and join Twisted Media as his first full-time employee. I had no idea what to expect and was incredibly lucky to fall into a dream job.
Kirill: Do you ever think how lucky we are to have been born into this digital world, and not 200 years ago where we would have to plow fields as peasants?
Clark: Absolutely. I’ve been a nerd through and through since I can remember. My brother and I started the first Starcraft club at our junior high. We were always networking computers across the house and playing with technology. We were fortunate enough to have access to some of that pretty early on in our development, and it spawned a love for computers and technology – that helped lead me to where I am in film. Honestly, I can’t even imagine doing anything else.
Kirill: As you joined the field and got a glimpse or two of how the sausage is made, if you will, do you remember being particularly surprised by how things work behind the scenes?
Clark: I always thought that every production had a plan that they were executing, that they were these perfect machines that churned out magic. But the real nature of production is that there is a plan, but it’s always chaotic. You have all these individuals coming onto a new group project, and everyone’s learning how to work with each other well. They’re building the plane in real time, making decisions in real time. Production might be focusing on one area or another because it’s extra complex, and our part in telling the story is usually pretty small in the grand scheme of things. And because we always have the golden parachute of going green and figuring it out later in post, we often get pushed to the edge of that creative decision-making.
It’s been interesting to learn how to advocate for yourself, to make sure that the production gives you the answers you need, so that you can solve their problems and move the story along – hopefully in camera. And when that doesn’t work, it’s always fun jumping into post and learning where the edit went and what actually needs to be there. Sometimes you have a huge, elaborate graphic that you built for set, and it gets cut down. Maybe they liked the actors’ reactions, and now they’re changing the edit and trimming down or transforming what was in the original script to make the story clearer, until you’re left with just one of the original ten beats.

Screen graphics for the loading simulation on “Moonfall” by Clark Stanton.
Kirill: Do you have a preference for working on set versus in post?
Clark: I personally like to build the graphic for set. It’s more fun to play with the dynamic timing. You don’t know how the actor’s going to read their lines, or how much they’re going to pause or emphasize something or play up the drama. Having the reference there for them so that they can point to the screen or read the screen or react to it is much more enjoyable for me.
That said, there’s so much more freedom in post. You know where the edit’s going. You know exactly what you need to see. You usually have a little bit more time in post to get those creative answers and have the attention of the director or the producer. You can play with that part of the story a little bit more precisely.
Kirill: Is there a particular set of skills a person needs in order to not get burned out?
Clark: As a person that’s a workaholic by nature, I think that you need a hobby that isn’t in front of a screen. I’ve always played video games. I’ve always loved creating my own art. This industry has taught me that I need to stand up more and get away from my desk, and that’s my main advice to somebody that is going to try and get into this industry. Make sure that you have a good hobby that you can disconnect with. This industry is feast or famine. You have to take the jobs when they come, sometimes they overlap, and that’s a recipe for burnout.
I personally lift weights, swim or go biking. That has definitely helped me to come back fresh and creative for the next project.
Kirill: There are different names for this industry – playback design, screen graphics, fantasy user interfaces. Do you have a preference on which one best fits what you do?
Clark: Screen design or playback graphics is more how I associate. FUI comes in a lot more when you get into some of the cooler, more post fantasy projects, where you get to explore a holographic HUD in Iron Man or Spider-Man. They’re all applicable. I favor playback, so I tend to lean that way. But if you’re talking to a layman, they might only recognize FUI because of the nature of press coverage for the more recognizable projects.
Kirill: Putting on that layman’s hat, how do you tell me what your job is and why it’s needed? I have my layman’s phone in my pocket and my layman’s computer at home, and those have plenty of user interfaces already on them.
Clark: The easiest way I get it across to people is through cues. If a director wants you to start from the middle of the scene, the actors need to remember where they were, and then continue that scene from the middle. Now, if they have real phones for texting back and forth, you’re interacting with those messages. What happens when it’s a reset? Props has to run in and delete all those text messages, but also reset the phone time because it’s been 5-10 minutes since the last take. That’s one aspect of it – replayability and manipulation.
The other part is the legal aspect. If you have a villain, Samsung or Apple probably don’t want their product in the bad guy’s hands. Creating fake interfaces allows you to get around legal clearances.

Screen graphics for the lander cockpit on “Moonfall” by Clark Stanton.
Continue reading »
Continuing the ongoing series of interviews with creative artists working on various aspects of movie and TV productions, it is my pleasure to welcome back Tom Hammock. In this interview, he talks about the scope of blockbuster franchises, building physical sets and things that are still missing in digital worlds, traveling the world to make movies, and the rise of generative AI tools. Between all these and more, Tom dives deep into what went into making “Godzilla vs. Kong” and “Godzilla x Kong”.
Kirill: What happened to you since we spoke earlier last year?
Tom: Well, we made a movie called “Godzilla x Kong”, which was great. And there’s another one that I just finished with Zach Cregger who directed “Barbarian” in 2022. But really the big thing was Godzilla, and as you can imagine, it took a long time to make that movie.
Kirill: One thing I couldn’t figure out is how to say the name of the movie. Is it “together with Kong”, “against Kong”, something else…
Tom: It’s tricky to figure out. I know they tried a lot of titles, and “Godzilla x Kong” worked in the wrestling sense, in terms of them teaming up. That was always the goal. We knew we wanted to do the “versus” film and then the “team” film.
Kirill: How many previous Godzilla and Kong movies have you watched?
Tom: Adam and I watched pretty close to all of them, genuinely. When we started “Godzilla vs. Kong”, Adam had a little TV in his office, and he scoured the internet for VHS, laser discs, anything he could find. He would play them constantly, and we’d come in and reference scenes. It was an interesting process. I’d say that I’ve seen big chunks of the majority.
Kirill: Are we talking also about the 30-odd Japanese ones?
Tom: He was able to get his hands on pretty much all the Japanese films, and another seven or so King Kong variants from around the world. We put in our time. You see little Easter eggs sprinkled through. Every once in a while, there will be something.
Kirill: Did you know that there was “Godzilla Minus One” happening roughly around the same time?
Tom: We did. It’s similar to when “Godzilla vs. Kong” came out, we had a sister movie, which was “Shin Godzilla”. Toho and Warner Brothers Legendary talk a bit and plan things out, because it’s not just the films. You have the trailers, you have the toy releases, you’re trying to build the excitement. Also, Takashi Yamazaki and Adam Wingard are friends.

On the sets of “Godzilla x Kong”, courtesy of Tom Hammock.
Kirill: How was your experience going from the smaller art department on “X” and “Pearl” to this huge machine?
Tom: It’s a crazy change in scale, but in some ways it’s the same experience. You have your base camp, and you make your way through all the trailers and all the equipment. And at the end of the day, it’s still two people talking in front of a camera, which is the same as “X” and “Pearl”. But in other ways, Godzilla is so different.
On “Godzilla x Kong” we could literally go to an island to build the Monarch base on the beach. We had the resources to do that, to get that level of in-camera work – which was fantastic and a ton of fun. Budget goes up, but expectations go up too.
Continue reading »
Continuing the ongoing series of interviews with creative artists working on various aspects of movie and TV productions, it is my pleasure to welcome Imanol Nabea AEC. In this interview, he talks about the transition of the industry from film to digital, working through the global pandemic, the potential impact of generative AI on the industry, and the variety of screens in our lives. Around these and more, Imanol dives deep into his work on the two seasons of the fan-favorite “Warrior Nun”.

Kirill: Please tell us about yourself and the path that took you to where you are today.
Imanol: I was born in Spain in 1971. When I finished high school, I went to an engineering college. I always wanted to become a photographer, but I never thought about working in the movie industry because I didn’t know anyone in that world. At some point I quit the engineering school and started an image and sound program at a technical public school. That was around 1992. I started shooting short movies with some friends, and then a movie was going to be shot in my hometown of Bilbao, and a friend of mine told me that they needed a video technician to work with the Hi8 recorder of that time. I did two movies as a video technician, and after that about 10 years as a camera second assistant, 13 years as a focus puller, shooting second units as a camera operator – and always thinking about being a director of photography. I did short movies, I did a lot of photography for myself, and that’s my background. I never thought about working in this industry, but suddenly I was there, and I was fascinated by it. I love it.
Kirill: How was the transition from film to digital? Do you feel that something has been lost during that transition?
Imanol: I don’t remember exactly when it was. At that time I was pulling focus, and I experienced it as a soft transition. But I was having conversations with colleagues in the industry about this transition, and I remember saying that it was not going to happen, that things would continue as they were with film. Film wasn’t going to die. I guess I couldn’t – or didn’t want to – see the reality.
Film has such a beautiful texture, and I couldn’t see how digital was going to take over. Sometimes I watch a movie and I see something that is there, and I can’t put my finger on it, but when I look at how it is shot, I realize it was shot on film. I still think that film has something that digital doesn’t have, and not just the texture. You see it in the highlights, you see how different colors interact with each other.
But then there’s the other side of it. I worked as a focus puller on about 50 movies, and every day I would be waiting to hear from the film lab to see if there was anything out of focus or too soft. As a focus puller I could sleep much better with digital than with film.
Things are so much faster with digital. You have to shoot much faster. You don’t have those breaks you had when you were reloading the magazines or checking the gate. The timing is different. In a way, the set has changed. Everybody can see what you are shooting, from the director and yourself to other people that have access to the monitors. Now everybody can instantly form an opinion, which is not necessarily a bad thing. It is just how it is.
The set is not as silent as it used to be. Negative costs money, and each take was money spent, so everybody kept quiet once the camera started rolling. Right now there is a sense that digital is free, that you can keep on shooting and shooting. It still costs money, but the economics are different. Every roll of film was money, and once you used it, you couldn’t overwrite anything on it. There was a mutual respect from everyone on the set.
So those are the two biggest things that have changed – the texture and the way of working. Focus pullers today can do their job sitting in a room watching a monitor – and that was simply impossible back in my time. It has changed, not for better or worse; it is simply different. In the end, the basics of light are still the same, but you have to adapt to how the set works now.

Kirill: When you meet somebody outside of your industry, and they ask you what you do for a living, how do you convey the complexity of the different roles you play on the set?
Imanol: I would say that I translate the script into images, playing with my tools – which are the light and the camera – and that I am also the director’s right hand, or at least I try to be. I help the director tell the story. I talk with all the departments that are involved in what appears in the frame, because all of us create the visual image of the film.
For me it’s crucially important to understand where the director wants to go with the story, what he wants to tell with the camera and the light. Every director has a different way of approaching a story, and part of my work is to adapt myself to that. I love it because it makes me improve and learn from different points of view.
Continue reading »
Continuing the ongoing series of interviews with creative artists working on various aspects of movie and TV productions, it is my pleasure to welcome back Owen McPolin. In this interview, he talks about changes in technology and lighting, the impact of Covid on the industry in the last few years, the potential impact of generative AI on the industry, and what advice he would give to his younger self. Between all these and more, Owen dives deep into his work on the first two seasons of the gorgeous adaptation of a seminal sci-fi classic “Foundation”.

Owen McPolin on location in Fuerteventura for the Beggars Lament sequence in the “Oonan’s World” episode. Courtesy of Owen McPolin.
Kirill: Since we last talked back in 2014, what changes have you seen in your field, perhaps around the evolution of cameras and lights, or maybe where these stories are told?
Owen: The biggest thing that has changed has been the type of lighting we have used in the last number of years. Everything has transitioned to LED lights. It’s great for the environment, it’s more reliable, and it’s more accurate compared to the conventional tungsten and HMI lights. I now have the ability to cue lighting and to build entire cues of lighting far more easily. Now, not only can you change a whole array of lighting systems and bring them to bear on a scene, but you can also change colors and contrast, you can move the heads, and they can all be controlled centrally from a desk. We used to be able to control dimming, but now we have so much more expansive control, and that has affected me and my work.
Back in 2019 I did a job in Budapest for Netflix on the show “Shadow and Bone”, and we had a fantastic local gaffer called Krisztián Paluch. He has a team of board operators who are expert and experienced in the newer LED systems. One day we found ourselves in a scene where a number of characters had come into a forest environment, and the director of the episode wanted to rehearse the action in this large denouement scene in one. So together, Krisztián and I developed eight or nine large cues to take place over the course of the scene. That sequence took about 15 minutes of screen time after it was all cut together, and when we were done with it, he told me that it had more cues in it than any of the “Blade Runner 2049” sequences that he worked on with Roger Deakins.
I told him that we would never have been able to do that a few years ago, and he agreed. He had a small handheld remote cueing device, and he used that to trigger the cues as he stood beside me and the director on the set, relaying straight back over Wi-Fi to the dimmer board operator. Some of the cues needed to be so accurate that Krisztián had to stand in front of the performance as it went, because there was a slight lag in some of the transmission back to the monitors, and you’d miss it.
We needed to generate a large amount of light around Alina Starkov’s character, and to maintain that ability of hers while the antihero, who generated a lot of darkness, came in. The whole cue is a ripple of darkness that runs over the set. And there was also gunfire and other elements, a lot of complicated cues all in one sequence. That was a real eye-opener for me. In a way my job is more difficult, but in another way, far easier. I can check with the director on what they want to happen in a scene, and build cues without feeling that it might be unreliable. It has opened up the confidence to do sequences that, just a number of years ago, I would have thought to be out of my reach artistically, creatively, and technically.
As far as cameras go, there are always new models coming out. You have more megapixels and larger resolution. Now 4K is the base level, and they’re talking about higher-resolution cameras and larger exposure latitudes within those new sensors. New Sony models have hyper-accurate ISO depths of 26,000 ASA, which is mind-boggling. This new sensor technology hasn’t washed over the dramas that I’ve been doing at the moment, but I expect that to happen soon.
There is obviously AI on the horizon. We might not necessarily see the eradication of certain processes on the set, but certain efficiencies are expected. There are emerging AI systems for storyboard renderings. There will be machine learning for lighting systems, and that’s going to make a big difference. And then you can start thinking about artificial general intelligence (AGI) for the creation of sequences within a show. That can make a huge difference in pre-vis, where VFX and real-time capture come into planning. Say a director wants to pre-vis a space battle sequence. Today it takes a VFX house a lot of man-hours, and there are a lot of revisions to every such sequence, again and again. With AGI you might form that storyboard much quicker, and then tweak it as well.

Continue reading »