At times futuristic, at times mirroring the outer edge of the latest research, and almost always an integral part of the story. In this conversation Jayse Hansen talks about fictional user interfaces, his approach to the initial research, interaction with the movie director and visual effects supervisor, and how these interfaces shape and are shaped by their real-world counterparts. Jayse has recently completed his work on “The Avengers”, and he delves into how shooting in stereo has affected the design of Iron Man’s helmet HUD. He also talks about the future of user interfaces and, in particular, exposing huge sets of data for smart consumption and effective non-linear navigation.


Kirill: Let’s start with your background and how you got into working on user interfaces in movies.

Jayse: I was just a kid that loved film and wanted to get into it, but I had to do it in a roundabout way because I lived far away from the main centers of action, and didn’t really have the means to go to school for it. I started by teaching myself graphic design for print from books and asking designers everything I could. Most told me I’d never be a designer unless I went to school for years and years. What I found was that some of those schooled designers came out very cocky but lacking the skills to actually back it up. So I took the hard road, and bought a library of books on design, asked for critiques from designers I respected, and grew as much as I could.

I then moved into Flash-based web sites, did some UI design in the old Stardock theming community, and finally into motion design. And then a friend of mine, Mark Coleran, introduced me to the fact that people actually get to design screens for films. If you’re a good UI designer and a good motion designer, then you might be a good fit for FUIs (fictional user interfaces). I thought that would be about the best job in the world.

I realized that since I was little I’d always paid attention to the screens in film – for example, the X-wing targeting displays in “Star Wars” or the holographic globe in “Return of the Jedi”. With shows like “Knight Rider” or the DeLorean in “Back to the Future”, I’d always wanted to know what all the displays on the dashboard were and how they worked.

I sent Mark some UI work I’d done and luckily he didn’t laugh at it. He thought it was decent enough to start sending my name around. So I got my first gig doing concept work for Deadpool’s vision and Stryker’s command center screens on “X-Men Origins: Wolverine”. And after I got through the first door it just kept on going, because there really are only a few people out there who specialize in this type of work. I think the detail involved might kill a few people, but I thrive on it.

Kirill: What is the process of creating screen graphics for these fantasy user interfaces? Who do you usually work with?

Jayse: There are usually two sides to it, although they’re somewhat merging. One side is called “playback design” where you’re designing before they start filming. In this case you may be working with the film’s art director. You may get a part of the story or a part of the script and you’ll be designing screens for them to use on-set while they’re filming so that the actors can actually interact with it. The other side is called “post-replace” and those are designed and composited in post-production after the filming is done. There, you’re usually working with the VFX Supervisor and the director.

Hero screens seem to be becoming more and more post-replace, as the director and editors have a clearer vision at that point of what the screens should contain. “Hero screens” are more about storytelling or clarifying. The director will focus on them to move the plot along or to quickly clarify a certain point. They’re usually in focus and seen close-up, sometimes even full screen as in “2012”.

I’ve been doing hero screens for the most part, and for this type of work I’ll usually get a rough edit and all or part of the script. Reading through it is never wasted time, as I pick out parts of the story that really inform the design and the aesthetic. Then it will be a conversation with the director, the VFX supervisor and the leads of the company that I’m working for about how to make it look super sexy and tell the story at the same time.

Kirill: Do you get a rough estimate of how much screen time and screen real estate you’re going to get?

Jayse: Sometimes screen time has not been determined yet and we’ll have a bit of a back-and-forth discussion. For example, for “The Rise of the Planet of the Apes” they came to me through G-Creative Productions to do the boardroom sequence where James Franco is explaining how a new drug, ALZ-112, is going to help cure brain illnesses and make everyone smarter, including apes! In that sequence, as he’s describing the process, the drugged chimp goes crazy and jumps through the glass presentation screen at the very end. The entire sequence, which ended up at around three or four minutes in the final movie, was originally just text on a page. Although it was really well written, it didn’t have much in the way of descriptions of visuals.

So before filming began, they wanted to flesh it out in terms of what would be on the screen behind James, which on set was simply a large, 11′×17′ blue screen. In addition to visuals, they wanted to know the overall pace of the scene. I started by doing a roughed-out animatic sequence. I needed pacing, so I recorded my own voice doing all the actors’ parts. Director Rupert Wyatt used that on set to pace it out, and then about four months later, after they filmed it, they came back for me to do the final shots.

Kirill: So essentially you are doing a very dynamic pre-vis sequence, and not just a bunch of static image boards.

Jayse: Yes, it’s basically pre-vis for filming – but it’s also somewhat post-vis for the post-production work. I even had a little terribly-animated Poser-monkey in there just to convey some of the ideas.

Kirill: Do you get a relative freedom for your first explorations, and then refine it based on the feedback from the director or the VFX company?

Jayse: Exactly, yes. There’s a ton of freedom upfront and that’s what I really like about it. They are really relying on me to provide what I think the content should be, and so they’ll usually be very general in the script or description. So beyond that it’s up to me, or the team, to do the initial research and flesh it out.

Kirill: Let’s talk about the research phase. You did a virus proliferation map for “Contagion” and brain cell activity simulation for “The Rise of the Planet of the Apes”. Do you go to the technical literature on the subject to get a roughly plausible visual language?

Jayse: Yes. I really love doing the research. Unfortunately there is always a deadline, so you can’t get a degree in the fields I study – even though some of them are so interesting I’d love to learn more, like learning to fly in order to design the upgraded Iron Man HUD. I will continue to learn that type of thing! But because the pace is pretty fast, I do try to talk to people who either work in that field or have done that kind of work, and of course there’s a ton of web research.

For example, for “Apes” I made PDFs of research for each of the 20 or so different story points that had to be told.

I did some reading about chimp research, and how they display that data in terms of graphs, and a ton of work on how they map the brain. Jorik Blass has done great research in diffusion tensor imaging of the brain and it turns out to be this beautifully complex look which I recreated in Cinema 4D for some of the brain sequences.

Kirill: Do you have any specific directive to take that look a little bit beyond what is available in the real-life systems in that particular area, to make it a little bit more sexy or appealing?

Jayse: I always assume that they want something that feels grounded but is at the same time sexy, or “Hollywood” as they call it. I only had one time where they came back to me and said, “make it more Hollywood” (laughs). I tend to amp stuff up anyway, and my goal is to make it look more polished, more sophisticated, as well as tied into the general art direction of the film itself.

Kirill: But then on the other hand you’re not supposed to steal too much attention away from the main characters.

Jayse: Exactly, and it’s a big balancing act. A lot of designs I do use the composition rule of thirds or the golden mean to direct the viewer’s attention to the actor. In “The Rise of the Planet of the Apes” the designs were done very intentionally to always make sure he was the focus. Sometimes the main graphic would literally ‘point’ to him.

With “The Avengers”, I was directed to design the glass screens with a lot of negative space – lots of black, in other words – to make sure that when you’re looking through the screens you can clearly see the actors’ faces and performances.

Kirill: Do you start with the physical medium of pen and paper, or do you go straight to the computer?

Jayse: I always sketch first. Though I’m also comfy starting digitally as well – I just prefer thinking with paper. I give myself permission to ‘draw badly’ and sketch out everything without rulers and without rules. Just let things flow from your brain, kind of thing. I went through two notebooks for “The Rise of the Apes”, and three notebooks just for Iron Man’s HUD for “The Avengers”. It’s the quickest way to test out ideas and visually ‘think’. I always try to go away from the computer, away from electrical interference, and get out. I have an art room in my house with a big drafting table and a nice view out the window, and I like to go there to brainstorm.

Kirill: Do you keep a notebook to collect your ideas in between the projects?

Jayse: Yes, and I have that in both a physical notebook and on my phone. I snap pictures and put them in Evernote. Mark Coleran has a huge folder of screen references that I was super envious of, so I started making my own. I came up with my own way of categorizing them by screen type – like ‘Facial Recognition’ or ‘Thermal Imaging’ or ‘Access Denied’. Inside each, they’re broken down into real-world applications, film, hardware, and ‘unrelated-but-inspiring’. I track the existing movie references so that I’m not repeating them, but also to be inspired. I also have stuff that maybe is not related at all – a painting, maybe – but it has a neat feel or a neat color palette to it, and that is perhaps the best reference to use to make your stuff look unique and new.

Kirill: Getting back to playback designs that are used on set and hero screens developed in post-production, which is more interesting for you?

Jayse: I actually like both. I’m yearning to just do some background screens sometimes – pure interface-love. But I guess generally I like the hero screens best. Having a story to tell, or a narrative, is one reason I love designing for film. You also have more definition of what is happening in the scene, and you tend to get more time to get it right.

Kirill: In your interview for fxguide you mentioned that this was your first production done in stereo. This makes it even more important to work in post-production, as opposed to a “flat” playback background sequence.

Jayse: Right. Although the screens themselves didn’t have depth, the interaction of the actors with the glass screens was all improvised on the set. They filmed with just glass screens, with no tracking markers. It was interesting to get the footage and see how differently the actors interacted with these screens. Nick Fury (Samuel L. Jackson) was just moving and swiping things – usually in a somewhat angry manner. Banner (Mark Ruffalo) would timidly slide things slowly, purposefully, up and down, and then Tony Stark (Robert Downey Jr) would just go crazy turning things, sometimes using his index finger and pinkie to slide. We would always think that he was probably just messing with us – a “let’s see what they come up with” type of thing. There was really no direction in terms of what it would be.

Kirill: So you’re designing each type of the interface around this fluidity.

Jayse: Yes, we designed and animated it to work with those movements. Another awesome designer who I worked with, Jonathan Ficcadenti, had the fun task of coming up with the graphics behind a thumb-twist and dual-finger slide, tap tap. We came up with a lot of ideas over a few beers – but in the end it took a few design variations to actually get that to work.

Kirill: Apart from doing more modeling for the stereo environment, do you see that it affects the way you approach designing the on-screen elements?

Jayse: This especially affected the HUD. This was the first time that it was done in stereo and I was surprised by how much it changed pretty much everything that we did. A lot of cheats would no longer work – because stereo reveals that they are just cheats. Also – they didn’t want shallow depth of field in these HUDs. In the previous HUDs you could get away with a lot of stuff being out of focus and blurred out to get some sense of depth. And so it was easy to put things in front of his face or move things around more dynamically. But stereo purists like stuff to be mostly in focus because the viewer’s eye is what determines what is in focus. We were worried about it at first, thinking: are we going to deliver two versions, one for the 2D version of the film, and the stereo one for the 3D version? And that would have been just a nightmare, so we had to come up with something that would look good in both 2D and 3D.

The consequence of this is that everything is in focus – so everything is readable. We ended up with a much more detailed look where we can read, for the first time, the small micro text and so we made sure that all the text is related to the scene, and all the graphics had a purpose – which is the way I like to design anyway.

There was another thing that happened in the stereo process. In the previous HUDs you could fake things in terms of where they were in Z space. You could have something that is closer to his face but small, and it would look all right. You’d put it closer to have it move less with the track and look more stable – so it’s a great cheat in 2D. But once you put it in stereo, suddenly it would look like it was inside his face. It was an optical trick that worked in 2D but doesn’t work in 3D, and you realize, wait, that’s actually inside of his cheek – that looks awful! So you had to redo the positioning and scale of graphics to make sure that it felt right in stereo as well as looking right in 2D.

But designing in stereo also gave us the unique opportunity to use depth to differentiate designs. There are two HUDs in the film. The first HUD, Mark VI, designed for “Iron Man 2”, was kept a bit more flat, with the interocular distance less spread, for less stereo depth. The second, newly updated Mark VII HUD – the one he jumps into during the final battle sequence – was where we pushed the 3D depth a bit more. Really it was just spreading the two left-right cameras in After Effects a bit farther apart.

Kirill: It would seem that 3D is more affected by differences in how people perceive it through the special glasses they hand out in theaters. Did you ever get feedback that some parts appeared inside Tony Stark’s face, even though they looked all right to you?

Jayse: We had a lot of going back and forth between different people (laughs). And a lot of tests! The visual effects supervisor, Janek Sirrs, and the stereo team had the final say. We relied on their expertise a lot. In terms of pixels, we had worked out where his cheek actually lived. We looked at it in the blue-and-red anaglyph format, and you can count the pixel offset between the two versions to estimate whether a specific element was too close or too far out. Stephen Lawes (Cantina’s Creative Director) and I developed a ‘cheat sheet’ for our team of people working on the final stereo run. It listed general z-space coordinates for each major element to keep them consistent from shot to shot. He’d spent a lot of time figuring out the ideal spacing of the elements so that it didn’t feel too empty or too crowded.
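The pixel-counting check Jayse describes can be sketched roughly like this (a purely hypothetical illustration – the function names and the disparity limits are made up, not production values): in a left/right stereo pair, the horizontal offset of an element between the two eye views tells you where it sits in z-space relative to the picture plane.

```python
def screen_disparity(x_left, x_right):
    """Horizontal pixel offset of an element between the two eye views.
    Positive -> behind the picture plane, negative -> in front of it."""
    return x_right - x_left

def depth_check(x_left, x_right, near_limit=-20, far_limit=40):
    """Flag elements whose disparity falls outside agreed z-space limits
    (the limits here are invented pixel values for illustration)."""
    d = screen_disparity(x_left, x_right)
    if d < near_limit:
        return "too far forward (may sit inside the actor's face)"
    if d > far_limit:
        return "too far back (HUD feels cavernous)"
    return "ok"
```

A shot-to-shot ‘cheat sheet’ in this spirit would simply record the target disparity for each major HUD element so every artist lands it in the same z-space.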

Kirill: Do you see this as the path forward, stereo-first productions for sci-fi movies?

Jayse: Yes. Like it or not, so far it’s proven to be a good money-maker, and if nothing else, that will continue it. I actually really liked working on this in stereo. I think that HUD was one of the things that was meant to be seen in stereo. Once we did our first stereo tests, we all got pretty excited about how it looked with depth. It felt like you were in the HUD with Tony Stark, which was pretty cool.

Kirill: Did you have any specific instructions on what to avoid in your stereo work?

Jayse: Well, I learned that there’s a James Cameron approach to 3D – which is to have the most foreground object be at the picture plane and everything else recede back. And that’s a really nice way of doing it. And there’s another way of doing it, the more typical way, where you have your main action at the picture plane and you have stuff in front come forward into the audience, and background things go back. But having stuff come forward ‘into the audience’ can tend to produce headaches – especially if you have a lot of action and camera shake – basically all the things you see in our HUDs. You’d also have a lot of elements cutting the picture plane. So we were aiming for picture-plane and back, since that worked really well on “Avatar”, for instance.

But we had a problem with the HUD. You can put all the graphics that are in front of his face at the picture plane, and then have Robert Downey Jr.’s face recede super far into the background – but then the HUD feels cavernous. We had a lot of different tests and we settled on having the picture plane on his cheeks. Otherwise the HUD ended up feeling like a rather overly-capacious closet (laughs) when he was pushed too far back. But now we had graphics coming up front and we were worried about very fast 3D movements and motion shake as he’s flying. Would it make you dizzy or nauseous? And how would cutting from scenes where your eyes are focused on distant NYC landscapes to super close-up shots work? I have to hand that to Stephen Lawes, Janek Sirrs and the wizards at Stereo D. It ended up working really well.

Kirill: There is a lot of emphasis on interactivity. If you look back at “Star Wars” and “Star Trek”, or even the movies from the 1990s, there wasn’t a lot of real interactivity. Perhaps there was not enough technical capacity to do this, but nowadays it’s an almost implied part of your job.

Jayse: Everything is interactive these days. It used to be keyboards and mice, but these days it’s all touch screens or holographic or a direct neural interface, such as the HUD, which works off both line-of-sight and the neural oscillatory activity of Tony Stark.

Kirill: Do you see this connected to the renewed interest in UX in the real world software, with touch-first hardware from Apple, Microsoft and other companies? Is this affecting what you do?

Jayse: Absolutely. People used to be a little bit mystified by a few things – such as interactivity – but now it’s completely natural for an actor to walk up to a glass screen and do swipe or twist movements to zoom in. That’s where it changed. It’s no longer mysterious; it’s just pretty much everyday life. Same with glass screens – people think those will never happen, but they’re already here.

Kirill: And you get to cut corners here and there, as you’re not creating real-world systems for the real-world environment.

Jayse: Yes, we design them to a degree that works for the story – which is the most important thing – and reality comes second. Full functionality is lacking, as we don’t need it to work in eighty different situations like real software would, and we’re quite thankful for that. We simply don’t have that kind of time. So yes, we gladly cut a lot of corners. It’s also what allows us to think outside the box a bit. I’m not worried about how a programmer is going to have to make this all work.

Kirill: Looking at this the other way around – do you see the fantasy user interfaces leaking from movies into the real-life software, perhaps a few years later?

Jayse: I think it grows consumer demand for that type of thing. For instance I just started working on a ‘top secret’ real-world government UI design project. I guess they were jealous of the way government screens look on film (laughs). They don’t want theirs looking like an Excel spreadsheet when it could look better – and work much more efficiently – by being better designed. People ask me “do you prefer function or form first” and I think that they are inextricably related. A badly designed interface is not going to function very well, and that’s what I really like about user experience. You can’t separate the two. You can’t say it functions well without it being formed well.

Kirill: You work with real-life software tools on a daily basis. What do you see as the main source of pain for you as a user of the current crop of user interfaces? What would you change if you were king in a world where the software works for you instead of fighting you?

Jayse: It’s a good question. I think a lot of software could benefit from a nodal approach if done in the right way. That may be because I’m a very visual kind of guy, but the software that currently works that way – basically a lot of compositing software, like Nuke, Shake, or Flame – with its nodes and links between items, really makes sense to me. Some of the recent real-world interfaces that I was working on reimagined the Excel spreadsheet look as a node-based interface, and I think it will work well that way.

I think one of the biggest challenges of software and data design is the amount of data that it has to make accessible to the end user. I’m extremely interested in ways of mapping complex patterns of information – looking at patterns rather than individual pieces – and I see a lot of work exploring that, such as Circos, which is really exciting to me.

I integrated parts of those ideas into the Mark VII HUD diagnostic. It showed him an immense amount of data on his suit at a glance, yet you can’t read each individual item. And you’re not meant to – he’s looking at the patterns of it. He could zoom in if he wanted to see each one, but he knows where each piece of data lives and he can see at a glance whether it’s OK, or whether it’s depleting or malfunctioning. And so he can read the patterns at a glance, rather than just raw information. I really got into that designing the HUD. This guy’s got so much info coming at him from Jarvis, his suit computer. How does he read it? What is he supposed to look at? How can he read it and fly his suit at the same time? He has to be able to glance at stuff and absorb the data in an instant.

Kirill: Projecting that into the real-life software, you’re talking about investing time to understand how to reduce the visual complexity of the information.

Jayse: Maybe not reduce it. Simplicity is very relative after all. Some people, like Tony Stark, are very comfortable with immense displays of data. The challenge is how to display it in a way that is understandable and useful to the user at a glance. So it can remain complex – yet convey what you need simply and effectively.

Kirill: There’s been a recent explosion in the interest in infographics, how to take these big data sets and distill them down to the underlying traits and tendencies.

Jayse: Exactly. For so long we’ve relied on a linear display of information, such as a family tree. But the amount of data grew so much that the approach started falling apart as early as the 1950s and 1960s. For a long time there’s been so much data that we haven’t been able to organize it; the tree stopped working and we ended up with all this scattered data. But recent advances in network visualization and network complexity mapping allow you to organize all that data in a way that makes it useful for you, once you learn how to read it. A radial network map might look futuristic to you now – but it will soon be as familiar to you and your children as a family tree.
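The jump from linear trees to radial maps is, at its core, a layout change: instead of stacking nodes in top-down rows, you place them around a circle so dense data reads as a pattern. A minimal sketch of the idea (purely illustrative, not any specific tool’s algorithm):

```python
import math

def radial_layout(names, radius=1.0):
    """Place items evenly around a circle - the basic move behind
    radial network maps, versus the top-down rows of a family tree."""
    n = len(names)
    return {
        name: (radius * math.cos(2 * math.pi * i / n),
               radius * math.sin(2 * math.pi * i / n))
        for i, name in enumerate(names)
    }
```

With every node on the rim, links between nodes become chords across the circle, and clusters of related data show up visually as bundles, which is what makes the pattern readable at a glance.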

Kirill: Bringing it back to your film work, is this where your research comes in, to know what is that information that is supposed to be shown to the main character interacting with your screens?

Jayse: Yes. For “The Rise of the Apes” it was a lot of medical visualization, and for “The Avengers” it was studying the aircraft carriers and how they work.

For the HUD it was studying combat aviation HUDs and what kind of information they display – how they change in different modes, etc. I consulted with an A-10 fighter pilot. I would explain the general situation of something like flying low through a city and ask him what kind of stuff he would want on his ultimate HUD. Then I’d add my sense of dramatic, alternative UIs and merge it all together. That information changed how I designed the new Mark VII HUD.

If you just set out to design something ‘cool’ in a HUD, you’d end up with a bunch of circles and random text, and I think it would just feel wrong, even if you couldn’t put your finger on why.

Kirill: Can the work you’ve done to understand and map visual complexities of real-world data in fantasy user interfaces affect the real-world software?

Jayse: It’s possible. I’ve recently had a few new offers to design real-world UIs… so we’ll see. With Iron Man’s HUD it’s always a balance between too much data and not enough. But that’s the balance with real HUDs as well. I asked the fighter pilot, Johnnie Green, about information overload and he said that the standard HUDs are always overloaded with information. In fact, he said the first thing he teaches his students is how to focus on the important stuff as you need it – how to zero in on the information that you need and blank out the rest.


Real F-18 HUD


Mark VII Iron Man HUD

Kirill: You mentioned growing up on the interfaces from “Star Wars” and other older movies. If you look at them now they look quite dated, as they were built within the confines of not only the old hardware, but also within the confines of what the designers could think about. Is this going to happen to your current work in 40-50 years?

Jayse: I’m sure it will. When I look back at the interfaces in “Star Wars”, I totally appreciate them and I think they were well-designed. They immediately tell the story they were trying to tell, and for that I love them. They do look dated because of their simplicity, but we may always go back to simple, so it could come back around. Minimalism and maximalism are constantly in and out – like an ocean. And I love both – I just don’t love indecisive in-betweenism. I think we’ll probably see less grid-based design and more organic, pattern-based data design – and much more holographic displays, which are really exciting to me. In the end, though, good design will always be good design, and in a way it may always be timeless. If you look at the screens of “WarGames” and the big command center at the end – I saw a screenshot the other day – that was just gorgeous.

Kirill: Wouldn’t that be affected by your impression as a kid growing up seeing that? You’re not really viewing it objectively from where you are today.

Jayse: It’s always hard to be objective about this stuff when you have your own personal memories integrated into it. But design is a learned discipline, and I look at things very differently now than when I was a kid – and those screens still look pretty sexy in their lo-fi kind of way.

Kirill: There’s a recent trend – in “The Avengers”, “Avatar” and quite a few other productions – to gravitate towards translucent screens with no visible batteries or wires where you fling stuff between the screens and you can look at the 3D models projecting from them. Can you realistically expect this to happen in the next 15-20 years?

Jayse: Absolutely. There are DJs already using glass screens in their sets. You can be in the crowd looking at the DJ and see his face, see him interact with the virtual turntable.

I think they have a few things to work out, but it’s all being done right now. I can’t wait to see paintings being created this way. The Explay Crystal phone was just released with a transparent display. Think of it from an interior design point of view – a sleek office full of transparent screens looks so much more open and nice than one with ugly black LCDs blocking the view. And in “The Avengers”, Joss used them beautifully – even having conversations between them.

Once they figure out how to parametrically blur and ND, or darken down, the background, they’ll find wider adoption. I can’t wait to get mine.

Part of designing for films is that sometimes people will say that it looks too advanced for this film, but you always want to design a bit more advanced than what’s current because you don’t want the aesthetic-life of the film to only last a year or two.

You want that film to look good five years from now, ten years from now. Nothing dates a film like old cell phones or an outdated-looking interface. So for Fury’s screens – they wanted them to be less high-tech and polished than Stark’s interfaces, yet still very advanced. If they weren’t, in two years it would be like Fury walking up to his monitors and pulling out a Motorola flip phone with a red LED display. It’d immediately date the film and you’d be pulled out of the story for a second or two. So I’m always looking to never have the design be dumbed down or too current just for this reason. You want it to hold up years from now.

Kirill: But you also don’t want to go too far beyond what seems to be reasonably futuristic.

Jayse: Yeah, you don’t want it to stand out for just being crazy. It will draw too much attention for being too ‘out there’. That’s where tying it in to the art direction and design of the film as a whole is important to me. But keep in mind: sometimes we’re unaware of what is truly ‘current’. Some of the really amazing data design that looks so next-level-futuristic is actually being done currently… right now, in the real world. For me that’s pretty exciting stuff.

And here I’d like to thank Jayse Hansen for this unique opportunity to scratch the surface of what it takes to create screen graphics and user interfaces for movies, and for sharing his process and behind-the-scenes pictures with us.

In my last entry I talked about why I see “Tron: Legacy” as the best 3D movie experience I’ve seen so far, and it’s my great pleasure to have Bradley G Munkowitz, aka GMUNK, join me for a conversation about the work he and his team did on the movie, extending it to visual effects in general and a glimpse into the not-so-distant future of the movie-going experience.

Kirill: I was reading about the background of the movie itself, and I’ve read that it started in 2007 during the big writers’ strike, with Digital Domain pitching “Tron: Legacy” to Disney. Were you part of that phase, or were you brought in later?

Gmunk: I was freelancing for Digital Domain, so I’m not a staff employee there. They hired me as an independent contractor to build a team that got involved after the VFX tests and pre-viz. They hired us to be the concept and the execution.

Kirill: So pre-viz is like a proof of concept?

Gmunk: What you’re talking about, the VFX tests, that is the look development. It’s “we want to make Tron and we want it to look like this, all dark, in a stormy world where everything is glass”. Then they take that look and they pre-viz the whole movie, where it’s all done in 3D but not rendered. It’s a crude structure so that you can see and edit it, almost like the next step after storyboarding: what the shots are, where the camera is, where it’s moving.

Kirill: But it’s beyond the still shots and color schemes, right?

Gmunk: It’s in conjunction with that. And so my team was brought in to do the look development and execution of the key holographic UI sequences.

Kirill: So that was somewhere in mid-2009?

Gmunk: It was the end of 2009, yes.

Kirill: And it took you about a year until the final completion, with the movie released in December 2010.

Gmunk: Yes, it was about a year. The team scaled from me all the way up to five people who were brought in at different times [see the links at the end].

Kirill: There were about 800 people working on the digital art of the movie. So five people doesn’t sound like a lot.

Gmunk: That’s true. Digital Domain has its own visual FX pipeline with giant teams making giant strides. And we were this outside unit that just parked in a corner of the Digital Domain facility and did our own thing. So we were kind of a special team.

Kirill: But you guys did the vast majority of the real work on the VFX part.

Gmunk: The VFX part of the select scenes in the movie. We were not doing the VFX for light racers, for example. We were doing small niche graphical interfaces, data visualizations. We did the opening titles, the fireworks and a few other scenes. It’s all on my site.

Kirill: You mention on your site that you worked in close collaboration with the movie director. Can you describe the process?

Gmunk: I knew Joe [Joseph Kosinski] personally since we had worked together on a Hummer commercial two years prior, so he felt comfortable just coming over and sitting down with us. You get him for 45 minutes to concept out what it is that we’re doing, what the task is, sketching it on a sketchpad. And then we’ll take it for two or three weeks and show him our solution, he’ll critique it and we’ll start refining it. We had five really key reviews which would trickle into the VFX shots and then to the desk of the VFX supervisor, who wants to have proper depth of field, for example. This is integration stuff that gets critiqued more and is in front of everybody’s eyes on a daily basis. For the core conceptual look development stuff we had three or four review cycles with Joe, and then it would go into the VFX pipeline.

Kirill: So you were doing all of the scenes in parallel?

Gmunk: There was a ton of overlap for sure. We were just a team of five, and it takes a lot of time to get that super grunt work done. There were big tasks, like the solar sailor sequence, which took us three months, while we did the opening sequence in about a month and a half. The fireworks were quick – there were a lot of these quick design bits where we would do interfaces for the bikes or the throneship – those were a couple of weeks.

Kirill: You also mentioned in one of your summaries that originally you were contracted for about five minutes that later expanded to ten. Was it more scenes, or more content in each scene?

Gmunk: It’s definitely a good thing to get more opportunities to jam. It was definitely different scenes. We were killing ourselves doing the solar sailor, and they would come in asking to do the opening titles and we were “OK, let’s add that to our list”. Or we would be heavy into the throneship rectifier extraction where they were decoding the disk, and they’d ask us to do the fireworks. So it would be a completely different thing, and we’d have to develop and concept it. We looked at it as a good thing, because we were super-inspired and we just wanted to jam.

Kirill: Well, the end result is amazing. Did you have something that you worked on and then the entire scene was cut?

Gmunk: Not on this one. I’ve had that happen before. They would advise us on a lot of stuff. A lot of eyes looked at our graphics and had comments after the core development, and a lot of stuff was dumbed down a lot, but never killed.

Kirill: What do you mean by dumbed down? Removing extra fluff?

Gmunk: “Removing extra fluff” is a good description. It’s like taking out the greeble. It looks awesome – the more complex, deep and insane it looks, the better, but sometimes it sacrifices the quick read on what it is supposed to represent with its conceptual cues. And so, we definitely had to take out a lot of greeble so the point of the graphics was understood quickly.

Kirill: In all the portfolio entries and blog postings by you and your teammates you spend a lot of time describing the underlying geometric primitives and algorithms behind the implementation, but there’s not a single mention of the 3D aspect. Who did the 3D part?

Gmunk: We did all of the 3D. We would push a lot of the compositing into the Digital Domain pipeline because we weren’t the end people for these shots. We did the elements and then worked with the compositor who would comp our elements into the shot, mostly under mine and Jake [Sargeant]‘s art direction. That was one of the things that we didn’t do a lot of – comping.

Kirill: So that would involve mixing it with live acting?

Gmunk: Exactly. Mixing it into the shot with the actors in it, casting lights on the actors. That was the big thing, because the holographic elements cast the light.

Kirill: Do you have special screen hardware to check how 3D looked?

Gmunk: Not on our screens. There was a stereoscopic 3D test television called “The Hyundai”. We would go in there and check.

Kirill: So you would go there and check if it looks good?

Gmunk: We developed so much stuff. Digital Domain has a very special VFX pipeline, there’s nothing else like it. And we found a way to get these stereoscopic 3D cameras into our 3D applications, like Cinema4D. We used a lot of openFrameworks applications that had to take in these stereoscopic cameras. Digital Domain had a technical director, Jon Gerber, who helped us accomplish that – writing the scripts to get the cameras into Cinema4D and openFrameworks. And then out of our 3D apps we would render out stereo renders and check those on The Hyundai.

Kirill: There are a lot of complaints that movies shot in 3D don’t look very good in a fixed-eye projection system – if you buy a BluRay disc for your home TV or go to a regular screening. Did it take care of itself because the 3D aspect in Tron: Legacy is so muted?

Gmunk: I think the use of 3D in Tron is the best out there. It was so tame and tasteful. We never had an interface to make that was supposed to knock you upside your head. Our stuff was always medium and wide shots of whatever holograms they’re using.

Kirill: That was also my impression. Even in the solar sailor scene where the guys extract the virus from Quorra’s DNA and there’s a lot of camera panning, it was all very subtle. Was it an explicit directive from the movie director, not to go overboard with 3D?

Gmunk: That’s very much Joseph’s style. Kosinski’s style is always very tasteful, beautiful and elegant. I think the best example was during the light jet fight. It was always done in a really tasteful way, even an exaggerated shot of a ship coming directly at the camera and then diving back down when it runs out of gas. It doesn’t come super close, but just close enough so you see. That kind of stuff is awesome, it’s perfectly controlled – and that was all over the movie. In the light jet scene there were a couple of times where the jet flies by the camera, but it wasn’t right by the camera, it was kind of off to the side. We got to see the work of the VFX supervisor, Eric Barba, in dailies every day – he’s the man. Watching him critique a 3D image in stereo, pinpointing all the problems, citing interocular distances and the convergence points, picking apart the 3D image – it was amazing. That was my favorite part – watching Kosinski and Barba work together and critique images. It was like being in a super advanced design school.

Kirill: Sounds fascinating. Do you think that the industry is still making baby steps, exploring the right ways to shoot, edit and composite 3D movies?

Gmunk: There are a lot of movies that do 3D after the fact, and that’s awful. I think that “Transformers 3” is going to be a really good test of where it is right now. That is a 3D explosion movie, that’s an uppercut, that’s a smack-you-across-the-face grand effects movie, and if they can nail that stereo so that it is super frenetic, but in a tasteful way that doesn’t crush your brain, that’s a big step forward. My favorite 3D implementation is still “Avatar”. I thought the camera work in that movie was so beautiful, so well done. Every single shot, the camera feels completely hand-held.

Kirill: All of the movies that you mentioned – Transformers, Avatar, Iron Man and of course Tron: Legacy – are confined to the “escapist” genres of sci-fi, horror, animation and adventure. Do you think over the next few years, or perhaps a couple of decades it will transition to “Oscar-type” productions?

Gmunk: I think filmmakers are such purists that, for the sake of respecting the medium, it will take them longer. For Oscar-nominated movies it will take perhaps ten to fifteen years before it happens. It should be five, but people are going to stay true to the art form a little longer – until going to the movies is completely immersive. In fifteen years a movie theatre is going to be an interactive 3D fully-realized hologram in front of you. You’ll be on those flying mountains, you’ll be able to touch the wing of the Ikran.

Kirill: Will this translate to the TV experience in my living room?

Gmunk: I think the theaters will be fully immersive crazy kick-ass ones, and the living room will be a scaled down version, with perhaps hard-core super gamer types who drop eighty grand on their system to get the full experience.

Kirill: I think it was “The Lawnmower Man” which had this immersive gaming experience. That never happened, right? And that was 20 years ago.

Gmunk: Maybe I’m getting ahead of myself. But things are getting pretty cool. I think in five years your phone will be able to project something out of it that you can actually use.

Kirill: This brings me to the topic of futuristic UIs. I’ve been reading Josh Nimoy’s blog that talked about bringing realism into the hackery scenes, showing emacs and proper Unix commands. Do you ever get told to forget about how realistic it will look to the geeks and instead to concentrate on wowing the audience?

Gmunk: What Josh did was awesome. I was talking about purists, and he’s a purist. He’s an artist and you always have to plant these Easter eggs in everything you do. You have to stay true to your roots and plant those little reminders. This is what Josh did, he went old school and did a retro thing that nobody but geeks would appreciate – and that’s what’s important for him.

Kirill: Now that the movie is out on BluRay, do you get emails from those hardcore guys that dissect every single still from the movie?

Gmunk: He probably does, and that’s cool – you’re touching people. The average movie goer is perhaps looking at the surface level, with nothing to really grab them and touch their inner passion. But geeks and people who live for this – they are just more educated fans that appreciate this aspect, and that’s who we care about.

Kirill: Do you need to get an approval from the director – to plant those Easter eggs?

Gmunk: I don’t think the director knew [laughs], but I think that he would appreciate it as much as Josh did. We’re all purists, we’re all indie at heart. When I graduated from college, I preached that I would never do client work in my life. I was completely anti corporate America, a vegan (still am), super against the carnivores, super environmental. So I made a pledge to do what’s right, and that quickly faded when I moved to Los Angeles.

Kirill: Somehow you need to pay your bills…

Gmunk: Right. So you always want to throw in whatever indie respect you can give. I try to do that a lot with commercials too.

Kirill: Talking about the futuristic UIs. Whenever it gets mentioned, people always seem to be talking about “Minority Report”. It seems that only that movie managed to capture people’s imagination on what the interaction with computers will look like in the future. Do you think that the way we interact with computers every day at work and at home will change?

Gmunk: First of all, the reason why it was the most accurate representation of the UI is simple – it was developed at MIT. They had computer scientists and UI specialists who research this for a living design that interface. Mostly in movies it’s just greeble – just make it look awesome and don’t think about making it a real-life thing. Kind of like “Iron Man 2” – the implementation was gorgeous, and that may be a real-life hologram some day – but did they really think about usability, and would it mirror what would happen soon? The thing about “Iron Man 2”, and even more so about “Minority Report”, is that it’s all gestural. It’s all about controlling things with your hands, interacting with your hands, taking everything off the mouse and the keyboard – which we’re already seeing, starting with iPads and iPods and the gaming consoles. That’s the way it’s going, and basically every UI you’ve seen in a movie in the last ten years has been touch-based, with gestural controls. I can’t remember the last UI I saw in a movie that had a mouse.

Kirill: We mentioned “Iron Man 2” a few times. I was reading this article which says that Robert Downey Jr. would improvise his movements on the set, and then the VFX artists would need to build the entire holographic UI around his movements. How was your interaction with Jeff Bridges in the elevator and solar sailor scenes?

Gmunk: We were just given the footage. We weren’t involved in the shoot of those sequences at all. Unfortunately, in the elevator shot his hand is completely out of focus. They shot everything at f/1.4 and everything has a crazy shallow depth of field – and so the graphic had to be out of focus, which was kind of a bummer. The solar sailor scene also had a shallow depth of field, so that was something we had to work with to get our renders right.

Kirill: So you got the shots of him staring into empty space and moving his fingers around, and then you built your scenery around that? Is this more interesting than creating your interaction model and asking the actor to somehow work with it?

Gmunk: It’s way better when actors are gesturing in the air and you design around that. It’s easier and also fun and super-inspiring, as long as the performance is good. He didn’t have a lot of definitive motions – soft gestural commands and not a lot of emphasis – so we made the interface itself react to that and be less complicated. We made it match his gestures.

Kirill: I was thinking about the sci-fi movies that I’ve seen in the last decade or so, and it seems the best of them take place in darker environments – such as Matrix and Tron, of course. Your portfolio would also indicate that you’re moving in this direction. Is it because you can explore the futuristic environments better in an inverted contrast?

Gmunk: I just love dark, the vibe and the evil dark aesthetic. I love the original “Matrix”, it is by far the best sci-fi movie ever made. It had this nasty, really dark vibe to it that was so beautiful. And the plot lines and the language were so deep, intelligent, sophisticated, well thought out and written.

Kirill: Were you disappointed by the sequels, from visual as well as content point of view?

Gmunk: The third one was almost unwatchable. The visuals in the third one were cool, but the script was really bad. Everything every actor said was just a sentence, plus they got away from the charm of the first one where you see all the karate. And by the third one you’d seen all the tricks and you didn’t care anymore; it was almost a video game. But the first one was amazing, and it still holds up today.

Kirill: Getting back to Tron and its dark environment. Did you feel restricted by the color space that was at your disposal, with soft blue for the good guys and yellow-oranges for the bad guys?

Gmunk: We got to love the glowing light blue a lot. But that’s just how it was. It would’ve been cool to see some pinks explored – no way that would go through, of course. Some really nice cool pink.

Kirill: Did it somehow affect the exploration of different geometrical building blocks?

Gmunk: Not really, because we knew it would never go through. I would throw in some yellows, and I think a pink slipped through once in a while – and we were told to change it. But some of the graphics got expanded colors. The board room, for example, got a full spectrum. We also designed the holographic animated rectifier globe in the scene where Clu addresses his army, and there were a lot of reds, oranges and browns there.

Kirill: How did it feel to see your work in the theaters, and all the geeks – including me – swooning over the effects?

Gmunk: It felt really good to see it on the screen with the audio, because we didn’t get to hear any of the sound design. So hearing it was really quite a treat, because we didn’t have any creative input into that whatsoever. It was really interesting to hear the bits they added to the stuff that we were doing. You’re animating it, and you think about how it will sound and you have ideas, and then they come back with something completely different. It was really fun – the sounds the scoreboard made, the sounds of the hologram on the solar sailor when you click on it, the sounds of the DNA when it forms, the rings breaking apart in the rectifier extraction. And then hearing the soundtrack was a part of it too – a lot of the shots we’d already seen on the big screen during development, on the theater screen at Digital Domain. But to hear it, that was a treat – the thing that I remember the most.

Kirill: So what can top your work on Tron: Legacy?

Gmunk: I think that if we were doing another Tron, I don’t know what kind of charm it would have. But just knowing what we know now, learning from the process of generating these graphics, we could really push the envelope a lot further because we would hit the ground running. We would be at full speed in the first week, because we’ve been there and we know exactly what’s coming, and we could pull off a lot more kick-ass.

One could only hope that James Cameron is already on the phone, tapping the man to lead the VFX team for the Avatar sequels. I’d like to thank Gmunk for this fascinating peek inside the VFX industry and for the incredible work he and his “black-ops team” did on the movie. Here are the links to their portfolios:
