April 14th, 2017

Suspension of disbelief – interview with Derek Spears

Continuing the ongoing series of interviews with creative artists working on various aspects of movie and TV productions, it is my delight to welcome Derek Spears. Since joining Rhythm & Hues Studios about fifteen years ago, Derek has worked on feature films such as “Red Riding Hood”, “The Mummy: Tomb of the Dragon Emperor,” “Superman Returns” and “X-Men: Days of Future Past”. Most recently, his work as the Visual Effects Supervisor on HBO’s magnificent “Game of Thrones” has won back-to-back Emmy awards for Outstanding Visual Effects.

In this interview Derek talks about the early days of digital visual effects [VFX] and the evolution of the tools since then, the human-intensive parts of the VFX pipeline and his thoughts on what may happen in those areas in the next few years, the discussions that take place between the art department and the VFX department on building things physically, digitally or as a hybrid, and the never-ending quest to take the audience on a wild ride of visual storytelling magic. The second part of the interview is all about the last two seasons of “Game of Thrones”, from creating the fire-breathing dragons to meticulously crafting the more “invisible” parts of Westeros and Essos, such as crowds and digital buildings.

Kirill: Please tell us about yourself and how you started in the field.

Derek: My background is in software engineering. I worked at Silicon Graphics for a few years in the apps development group. During my time there I worked with Kodak to help develop their compositing system, and after that I joined Cinesite to help them start their 3D graphics department. After that I worked at Digital Domain, and now I am a Visual Effects Supervisor at Rhythm & Hues on features and episodic shows. There I’ve worked on “Game of Thrones”, “Black Sails” and “Walking Dead” in the episodic world, and on features such as “R.I.P.D.”,  “The Mummy: Tomb of the Dragon Emperor,” “Superman Returns” and others.

My background is in computer graphics and the technology world, but I’ve always enjoyed the intersection of art and technology that visual effects has provided.

Kirill: Would it be correct to say that you joined the field when digital was just starting, and there were still a lot of special effects done with animatronics and other physical approaches?

Derek: Digital was still in its ascendancy back then. People were still trying to understand how it worked. It was very much the Wild West, a frontier type of landscape at that point in time. None of the well-developed ideas that we have now existed – pipelines, render queues, the various approaches and methodologies. We were learning at the time, and it was very interesting to be a part of that.

Kirill: Do you remember how it felt on your first few productions, as you “lifted the veil” and saw what the magic of movies looks like from the inside?

Derek: It was interesting to learn how a production was done. Coming from an engineering background, filmmaking was a new thing for me. The interesting thing about those early days was that on every new show you got to, you had to sit down and decide how to do it. There was no well-established methodology.

We didn’t typically use ray tracing. RenderMan was very popular at the time, and there were some other choices – such as Softimage and Mental Ray. The choices then tended to be non-homogeneous. The options now are more developed, but also more similar in their approach to solving the problem. It was a bit more interesting in the early days. You had to get out there and figure out how to do things.

There was a much bigger chance of failure because it hadn’t been done before. Now if somebody asks you to do something, the chances are that you’ve done something similar to it, and that it’s going to fit into the existing pipeline. We have tools for various situations, and that was not true in the early days.

Kirill: Looking at the evolution of hardware and software at your disposal, would you say that they’ve kept up with demands from the production side to keep on increasing the level of sophistication of the worlds you’re building?

Derek: There are a couple of interesting things here. There’s always the attempt to decrease the amount of human labour in any given problem. Of the many disciplines, lighting is the one that has made the most advances. It used to be a very difficult and time-consuming problem. Now you can capture light on set and then quickly recreate a very close approximation of it in the render without a lot of experimentation. You still have to do a lot of work for modeling and texturing as the setup, but the actual lighting process is rather quick.

Motion capture has helped animation, but I don’t think that it has taken a lot of labour out of it. There are tools that are better at rigging, but it’s still an extremely labour-intensive process. The same goes for compositing. There’s a much larger toolset to make things work a lot better in a lot more difficult situations, but what we find is that the complexity of the problem matches the efficiency of the tools. For everything that gets faster, the complexity is increasing. It used to take 12 hours to render a chrome sphere and now it takes 12 hours to render a dragon. It’s the same time to render, but you get a lot more for it.

I don’t think that things ever get simpler. It’s rather that the available resources increase the complexity.


Progressive layers in VFX for “Game of Thrones” by Rhythm & Hues.

Kirill: Now that there’s so much being done in the digital pipeline of VFX, is there any friction between the art department that lives in the world of physical sets, and the VFX department that is creating all these fantastical sets in pixels?

Derek: In many ways, the art department and the VFX department are quite complementary. What the art department designs, the VFX department has to realize. In some places we’ve seen crossovers, like on “Oblivion” where the art director came from a VFX background. I haven’t seen any tension between the two departments on the productions I’ve been involved with. It’s always very cooperative, to the point of asking how we can involve the art department in designing and solving some of the VFX problems.

In fact, the two get very close on some shows. On one of the shows that we’re working on right now, we’re acting as part of the art department to help design creatures. I view it as a synergistic relationship.

If you’re talking about trade-offs, one of the things that you have to figure out is that everything costs money to do. When you design a big set or platform for your show to work on, one of the things that the two departments have to work out is the most cost-effective way to do that. How much are we going to do practically? How much are we going to do in VFX? In no way is that a combative relationship. Together we figure out the best way to not break everybody’s budget. Usually that works very nicely.


Progressive layers in VFX for “Game of Thrones” by Rhythm & Hues.


April 9th, 2017

The art and craft of color grading – interview with Norman Nisbet

What happens to the raw footage captured by the camera lens after the last “Cut!” sounds off on the last day of the shoot? It passes through a lot of hands and eyes until we get to see the final version on our screens. Post-production includes cutting and editing the shots, sound editing, dialogue replacement, foley effects, sound effects and soundtrack, just to name a few. Color grading is an integral part of the post-production process, and is mainly composed of two parts – color correction and artistic color effects.

This is where the shots of the same outside scene taken under different natural lighting conditions (sunny / cloudy) are made to look visually consistent, as if they were shot as one flowing sequence. This is where the existing footage gets its colors shifted to change how the viewers will subconsciously interpret the mood of that scene. This is where the magic of visual storytelling gets shaped to the final intent of the director.

Continuing the ongoing series of interviews with creative artists working on various aspects of movie and TV productions, it is my pleasure to welcome Norman Nisbet. In the last couple of decades he has worked on a wide variety of productions in film, television, documentaries, commercials and music videos. Do colors carry universal meaning across different cultures in our increasingly connected world? How do you tackle the limitations of much smaller color spaces on screens? How much have the software tools advanced in the last decade or so? What went into color grading the wonderful world of “The Neon Demon”? And much more importantly, will we ever agree on the right spelling of the word “colour”?

Kirill: How did you get into the field?

Norman: I stumbled onto it, really. At that time I didn’t even know colour grading existed as a field. I knew about editing but never thought of the colour grading part. I always thought that was for stills photography.

I was working for Multivisio Holdings – an audio-visual company in Johannesburg, South Africa – doing video productions for the car industry (helicopter shoots, etc.), as well as staging and engineering shows and car launches. It’s much like show business and everything that comes with it. At some point the company bought the first component digital editing suite in the southern hemisphere, and an URSA Gold with a Pandora Pogle to go with it. The American operators were great guys and they were eager to teach me.

I was in luck! I was an editor’s assistant for 6 months, then a telecine assistant for 6 months. Then I had to choose as they were hiring, and I chose the film/colour grading world. I was always painting and drawing, spending school holidays at that company mounting slides from car shoots for multiple-projector slide shows. I was always drawn not to photography specifically but to image making, if I may call it that.


Norman Nisbet’s work on Mena Maria’s “F*#$ You”.

Kirill: What is color grading? What is it that a colorist does on a feature film or a TV show?

Norman: Colour grading is enhancing the raw image that was shot on film or digital media to present to an audience in different formats. It is the bridge between the camera and the screen. There is a technical translation aspect to it, as well as a creative side of the process. The cinematographer has a chance here to enhance the image for the audience to enjoy the intended vision of the film or program. Colour grading has replaced the ‘colour timing’ process that was previously done in a film laboratory.

The role of the colourist is to help enhance that image and to be the cinematographer’s ‘hands’ by understanding his vision and making sure the audience will see the program or film in the way the cinematographer intended it to be viewed. The colourist has a responsibility to convey the cinematographer’s intent. Obviously we (colourists) give creative input and have the technical know-how to carry out this process.


Norman Nisbet’s work on Medina’s “We Survive”.

Kirill: How do you describe what you do to people outside of the industry?

Norman: I describe it as photoshopping a moving image. Or nowadays as a ‘filter’ applied in Instagram. I enhance images to portray a mood to help the story. I create warmth or cold. Animosity or sincerity. A menacing darkness or a pastel universe. I carry the viewer through the scenery in a subliminal manner. I invoke feelings as the viewer watches the images. There is always the technical side too: matching scenes and shots as they are shot at different times or in different lighting. Even making daytime look like nighttime.


Norman Nisbet’s work on Medina’s “We Survive”.

Kirill: Is there such a thing as a universal “dictionary” of meanings to color choices to convey mood and emotion?

Norman: There are, of course, studies which show how the human body reacts to certain colours. Red raises the heartbeat, blue is calming. There are mental triggers too – green symbolises growth, but green is also jealousy and envy. So these colours can be challenges, or tools to use. Are nights blue?

Kirill: What are your thoughts on a much smaller color space on screens compared to real life?

Norman: Human eyes and the messages relayed to the brain are amazing! The amount of colour balancing that goes on in a split second! Looking at a smaller screen limits this spectrum. You cannot perceive the range of blacks so the picture is always more contrasty. In real life there is no true black or white, there are too many reflections bouncing around. You can almost touch colour if you look properly.

No screen can truly show real life, and the smaller the screen, the more distant the viewer – so a more contrasty picture will engage the viewer’s interest. Image compositions are clearer too. I feel it’s a pity that cinemas are losing popularity to home viewing or even on-the-go streaming for commuters, even though that is very convenient. There is still something to be said for the ‘cinematic experience’ as in ‘in a cinema!’


Norman Nisbet’s work on Doctors Without Borders.


April 2nd, 2017

Production design of “Nocturnal Animals” – interview with Shane Valentino

Continuing the ongoing series of interviews with creative artists working on various aspects of movie and TV productions, it is my pleasure to welcome Shane Valentino. In the last fifteen-odd years he’s been working on a variety of productions in music videos, commercials, TV shows and the feature film world. In this interview Shane talks about the differences and similarities between these fields for the art department, treating every production that does not happen in the present day as a period one, the art of conveying emotions and feelings in the visual medium of film, and the changes in the world of cinematic storytelling on our screens.

The interview centers on two striking movies that Shane has worked on as the production designer in the last few years. The first is “Straight Outta Compton”, which captures the formation and evolution of the music group “N.W.A.” and the effect it had both on the music industry of the late ’80s and on American society at large. The second is the impeccable “Nocturnal Animals” that weaves three stories, three worlds and three visual universes into one.

Kirill: Please tell us about yourself and your path so far.

Shane: I’ve been fascinated with cinema since I was a teenager. I started taking film criticism and film history courses in high school in Los Angeles. Our instructor, Jim Hosney, was teaching surveys of American and European cinema and that’s where I was exposed to the films of Jean-Luc Godard, Bernardo Bertolucci, Michelangelo Antonioni and others. Seeing those films, and being exposed to the themes that those filmmakers were exploring, made me even more interested in pursuing filmmaking as a career.

When I went to college, I wanted to study film as a fine art. I was looking for ways to express ideas and themes through non-commercial means. There’s a whole genre of avant-garde and experimental filmmaking and that’s where I got hooked. My mentor was a woman named Chick Strand who was pretty well-known in that world. Eventually, I went to the San Francisco Art Institute to get my MFA in experimental filmmaking. That program rounded out my education and exposed me to even more disciplines – photography, sculpture and painting.

I entered production design through a stroke of luck. I had a friend who was an artistic director on a TV show and she needed some help. I didn’t know much about what the art department did when I started working with her. But I was intrigued by the whole process and I’ve been working in the art department ever since.


Design boards for “Straight Outta Compton”. Courtesy of Shane Valentino.

Kirill: So your start was in the TV world.

Shane: Yes. It was with the Oxygen network started by Oprah Winfrey. They were doing a lot of TV shows and that’s how I got introduced to the role of the art department. My first extensive project was with Isaac Mizrahi’s show on that network.

Through TV, I started doing commercials. I was always interested in working on feature films too. Living in NYC after finishing my graduate degree in San Francisco, I was introduced to the independent New York film scene. I started on indie films with super small art departments and budgets under $1M. It was a fantastic learning experience to see the different parts of the art department coming together on a relatively tiny budget – props, set decoration, construction, etc.  From there, I moved on to larger-budget film projects.


Design boards for “Straight Outta Compton”. Courtesy of Shane Valentino.

Kirill: You’ve been working on music videos, commercials, TV shows and feature films. How would you compare these fields as far as the size and pace of the art department?

Shane: It depends on the budget of the project. For example, the art department for a big commercial can match what happens on a feature film or a TV project in terms of size and pace. Between film and TV, it’s usually about the same in terms of the department heads and the general size. On commercials we don’t necessarily have a construction department because it’s often outsourced to a vendor who can build aspects of the sets. But you definitely have the set dressing department, the props department, and the art department on every commercial.

I find that the difference between film and TV is mostly about pace. The TV format requires that you finish an episode in 7-8 days. As a production designer, you have to think on your feet to make these tight deadlines. You have to make decisions about how something should look as expeditiously as possible. You also learn to find locations that can accommodate a team that needs to work very quickly.

Pace is one of the reasons why a lot of us like to work on films. It depends on the budget and the amount of prep time of course, but you often have more time to create a concept or a theme, and to work through how those ideas can be fully articulated. You’re given the time to think through all the different aspects.


Design boards for “Straight Outta Compton”. Courtesy of Shane Valentino.


March 29th, 2017

How deep the rabbit hole goes

Back in the olden days of 1999 it was pretty much the only movie that I watched in the theaters. In pre-digital days it took a few months for a movie to complete its theatrical rollout across the globe, and once it got into theaters, it stayed for much longer than it does these days. Such was the story of “The Matrix” for me. It stayed in local theaters for at least six months, and I was a single guy with not much to do in the evening after work. So every week, at least twice a week, I would go to watch it again. And again. And again. It’s quite unlikely, in fact, that there’s ever going to be a movie that I’ll watch more times than I’ve watched “The Matrix”.

Back in those olden days, people didn’t wake up to write a new Javascript library. People woke up to write a Matrix rain screensaver. Those would be the mirrored half-width kana, as well as Latin characters and Arabic numerals.

A few years later, “Matrix: Reloaded” came out, taking the binary rain into the third dimension as the glyphs were sliding down multiple virtual sheets of glass. And I finally decided to dip my toes into the world of making my own Matrix rain screensaver, complete with many of the visual effects that were seen in that movie. There’s a bunch of old code that I’ve uploaded as a historical artifact right here. Fair warning – this was 13 years ago, and as many do when they first start out, I reimplemented a bunch of stuff that was already there in the JDK. If you dive into the code, you’ll see a home-grown implementation of a linked list, as well as a rather gnarly monstrosity that exposed something resembling a canvas / graphics API. Don’t judge me. Anyhoo, on to the topic of this post.

One of the things I wanted to do in that screensaver was to take a string as input and render it in the style of the Matrix titles:

Here, every glyph undergoes one or more transformations (cropping, displacement, segment duplication). In addition, there are connectors that link glyphs together. It is these connectors that I’m going to talk about. Or, more precisely: how can you come up with the “best” way to connect the glyphs of any input string, and what makes a particular connector chain the “best” chain for that string?

This image captures the “essence” of quantifying the quality of a connector. In the title sequence of the original movie, as well as the sequels, the connectors are only placed at three vertical positions – top, middle and bottom. That is the starting point of this diagram. In addition, there are the following factors at the level of an individual glyph:

  1. On a scale from 0 to 5, how far would the connector have to go “into” the glyph to connect to the closest pixel (the higher the value, the shorter the travel)? So, the bottom part of A gets 5’s on both sides, and the top part gets 2’s on both sides. The middle part of J gets 0 on the left (as the connector would have to “travel” across the entire glyph) and 4 on the right (as the connector would only need to go past the rightmost point of the top serif).
  2. Defining a “natural” connection point to be (in the diagram above green marks such a point while red signifies that the point is not natural):
    • Anything on top and bottom – this is an escape valve that would make sure that any input string has at least one connector chain
    • Serifs in the middle – such as the right side of G
    • Crossbars in the middle, extending to both sides of the glyph – such as A, B or R. (These per-glyph factors map onto a small data structure, sketched right after this list.)
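
To make these factors a bit more concrete, here is a minimal sketch of how the per-glyph data could be represented. All of the names here (VerticalPosition, GlyphSide, GlyphProfile) are hypothetical illustrations, not the actual classes from my screensaver code:

    // A minimal sketch of the per-glyph factors from the list above.
    enum VerticalPosition { TOP, MIDDLE, BOTTOM }

    final class GlyphSide {
        // values[p] is the 0-5 score for vertical position p: how little
        // the connector would have to travel "into" the glyph to reach the
        // closest pixel (0 means traveling across the entire glyph, as with
        // the middle of J on the left).
        final int[] values = new int[3];
        // natural[p] marks position p as a natural connection point. Top and
        // bottom always are; middle only for serifs and crossbars.
        final boolean[] natural = { true, false, true };

        int valueAt(VerticalPosition p)         { return values[p.ordinal()]; }
        boolean isNaturalAt(VerticalPosition p) { return natural[p.ordinal()]; }
    }

    final class GlyphProfile {
        final char glyph;
        final GlyphSide left = new GlyphSide();
        final GlyphSide right = new GlyphSide();

        GlyphProfile(char glyph) { this.glyph = glyph; }
    }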

Then, a valid connector chain would be defined as:

  1. No two consecutive connectors can be placed at the same vertical position. In the example of the original title, the connector chain is top-bottom-middle-bottom-top.
  2. A connector must have positive (non-zero) value on both sides. For example, you can’t connect A and J in the middle because the left side of J places value 0 on the middle position.
  3. A connector must have at least one natural connection point. For example, N and O can’t be connected in the middle, while N and P can (as P’s left side defines the middle position as a natural connection point). These rules translate directly into a small validity check, sketched below.
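
Expressed in code, the three rules could look something like this sketch, reusing the hypothetical types from the previous snippet. A chain for an n-glyph string has n-1 connectors, with connector i joining the right side of glyph i to the left side of glyph i+1:

    final class ChainRules {
        static boolean isValid(GlyphProfile[] glyphs, VerticalPosition[] chain) {
            if (chain.length != glyphs.length - 1) {
                return false;
            }
            for (int i = 0; i < chain.length; i++) {
                VerticalPosition p = chain[i];
                GlyphSide left = glyphs[i].right;      // right side of the left glyph
                GlyphSide right = glyphs[i + 1].left;  // left side of the right glyph
                // Rule 1: no two consecutive connectors at the same position.
                if (i > 0 && chain[i - 1] == p) {
                    return false;
                }
                // Rule 2: positive (non-zero) value on both sides.
                if (left.valueAt(p) == 0 || right.valueAt(p) == 0) {
                    return false;
                }
                // Rule 3: at least one natural connection point.
                if (!left.isNaturalAt(p) && !right.isNaturalAt(p)) {
                    return false;
                }
            }
            return true;
        }
    }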

Finally, the overall “quality” of the entire connector chain is computed as:

  1. The sum of connection point values along both sides of each connector
  2. Weighted by the chi-square value of the connector vertical positions along the entire chain
  3. Weighted by the mean probability of the connector vertical positions along the entire chain

The last two factors aim to “favor” chains that look “random”. For example, there is not much randomness in a top-bottom-top-bottom-top-bottom chain. You want to have a bit of variety and “noise” in a chain so that it doesn’t look explicitly constructed, so to speak. As can be seen in the diagram above, the middle vertical position is not a natural connection point for a lot of glyphs, and both of these factors aim to bring a well-distributed usage of all vertical positions into the mix.
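
Here is a rough sketch of how such a score could be computed. The exact way the three factors are combined is a matter of taste, and this version only assumes one plausible reading: the chi-square statistic compares the observed counts of top/middle/bottom against a uniform expectation, and both statistical factors act as penalties that grow as the chain becomes more predictable:

    final class ChainScorer {
        static double score(GlyphProfile[] glyphs, VerticalPosition[] chain) {
            // 1. Sum of connection point values along both sides of each connector.
            double sum = 0;
            for (int i = 0; i < chain.length; i++) {
                sum += glyphs[i].right.valueAt(chain[i])
                        + glyphs[i + 1].left.valueAt(chain[i]);
            }

            // Observed counts of each vertical position along the chain.
            int[] counts = new int[3];
            for (VerticalPosition p : chain) {
                counts[p.ordinal()]++;
            }

            // 2. Chi-square against uniform expected counts; an alternating
            // top-bottom chain never uses the middle and gets a large penalty.
            double expected = chain.length / 3.0;
            double chiSquare = 0;
            for (int count : counts) {
                chiSquare += (count - expected) * (count - expected) / expected;
            }

            // 3. Mean empirical probability of the positions actually chosen;
            // a chain that keeps reusing the same positions scores close to 1,
            // while a well-mixed chain scores closer to 1/3.
            double meanProbability = 0;
            for (VerticalPosition p : chain) {
                meanProbability += counts[p.ordinal()] / (double) chain.length;
            }
            meanProbability /= chain.length;

            // Higher raw sum is better; higher chi-square and higher mean
            // probability (both signs of predictability) drag the score down.
            return sum / ((1.0 + chiSquare) * meanProbability);
        }
    }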

It is true that the basic underlying rules defining how a glyph connector chain is constructed are based on the visuals of the Matrix movie titles. You might think of them as the basic rules of physics that apply to this particular universe. However, the evaluation of a specific constructed chain is a softer framework, so to speak. There is nothing explicit in these rules that would force the quality score of the particular connector chain that you see in the final graphics for these particular six letters to be the highest of all valid chains.

When I first ran the finished implementation, it was one of those rare moments of pure, unadulterated geek joy:

These are all possible valid connector chains for the word “MATRIX”, ordered by the quality score that is based on values of individual connector points, as well as statistical variation that accounts for predictability and randomization within a specific sequence. Yes, the top score goes to the sequence that was used in the movie title!
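
For what it’s worth, producing such a ranked list doesn’t require anything clever. With only three vertical positions there are just 3^(n-1) candidate assignments for a word with n glyphs – 243 for “MATRIX” – so a brute-force sweep is practically free. A sketch using the hypothetical helpers from above:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    final class ChainRanker {
        static List<VerticalPosition[]> rankedChains(GlyphProfile[] glyphs) {
            int connectors = glyphs.length - 1;
            int candidates = 1;
            for (int i = 0; i < connectors; i++) {
                candidates *= 3;
            }
            List<VerticalPosition[]> valid = new ArrayList<>();
            VerticalPosition[] positions = VerticalPosition.values();
            // Treat each candidate as an (n-1)-digit base-3 number.
            for (int code = 0; code < candidates; code++) {
                VerticalPosition[] chain = new VerticalPosition[connectors];
                for (int i = 0, rest = code; i < connectors; i++, rest /= 3) {
                    chain[i] = positions[rest % 3];
                }
                if (ChainRules.isValid(glyphs, chain)) {
                    valid.add(chain);
                }
            }
            // Highest-scoring chain first.
            valid.sort(Comparator.comparingDouble(
                    (VerticalPosition[] c) -> ChainScorer.score(glyphs, c)).reversed());
            return valid;
        }
    }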

Let’s look at “RELOADED” next:

And these are the top 39 valid connector chains for that word:

While my algorithm found the perfect match for the “MATRIX” connector chain, the connector chain that was used in the movie for “RELOADED” scores at place #37. You can see where it falls flat – in the top connector between L and O. The score value for the top connector on the right side of L is only 1 out of 5, and even though the score value for the top connector on the left side of O is 5 out of 5, that weak L side drastically lowers the overall score. In addition, the last four connectors are bottom-middle-bottom-middle, which lowers the mean probability factor applied to the entire quality score of this chain.

The connector chain selected by the third movie for the word “REVOLUTIONS” is not considered a valid one based on the rules that I chose after “Reloaded” was out. Specifically, the middle connector between U and T is not valid, as there is neither a serif nor a crossbar in these two glyphs. And the same applies to the middle connector between I and O.

Finally, the “ANIMATRIX” title deviates slightly in the “MATRIX” part, using a middle connector placement between M and A. How did my algorithm fare at scoring this chain?

This was a close one. The connector chain used in the movie title scores in second place, with the only difference being in the very first connector (top instead of middle).

It’s hard to quantify artistic choices, and I don’t presume to claim that the top-scored connector chain for “RELOADED” based on the rules of my algorithm is clearly superior to what ended up in the actual movie titles. Would it be worth tweaking the scoring system? I don’t think so. There are a couple of noticeably “weak” connectors in the connector chain in the movie title, and relaxing the scoring rules would only introduce more randomness into the process without necessarily bumping that chain up the ranks.

Perhaps the choice of a long top connector from L to O was an artistic one, introducing a bit of variance and randomness into the mix. Or perhaps I should check to see who was in charge of the title graphics and ask them :)