February 25th, 2017

Screen graphics of “Mars” – interview with Nawaz Alamgir

Continuing the ongoing series of interviews on fantasy user interfaces, it gives me great pleasure to welcome Nawaz Alamgir. In this interview Nawaz talks about the beginning of his career doing trailer graphics for feature films, the switch he made to doing screen graphics through a number of self-initiated fantasy UI projects, and the work he did for “Bastille Day” and “Morgan”. And all of that is the introduction to Nawaz’s work on the six-part first season of “Mars” for National Geographic. The show combines documentary-style interviews on the state of the space program in 2017 with a scripted narrative of the first manned expedition to Mars in 2033. We talk about the screen graphics work that spanned nine months, what went into designing those screens, and incorporating them into the narrative on set as well as in post production.

Kirill: Please tell us about yourself and your path so far.

Nawaz: I left university knowing that I wanted to do something design-related. There were lots of options to choose from, all entry-level web design work and similar things. I ended up getting a job as a web designer at the video games company SEGA, and I was there for about four years.

YouTube was just starting to grow, and back then, around 2005, you had to have a really jazzy website. In my third year at SEGA I got my hands on a copy of After Effects, and as web sites started becoming video-heavy, we started doing video work. I started building web sites with animations and video.

I remember when the first “Transformers” came out, and I saw the trailer graphics, and I was amazed. I researched the companies who did that, all of them in LA, as I had never noticed that kind of graphics before. I always loved trailers, and I thought it was perfect. Over the next year I taught myself various tricks to be a trailer graphics artist, and then decided that it was time to jump ship.

At that time one of those companies, Create Advertising, opened a branch in London. There were between 20 and 30 trailer houses in LA, and only two or three in London. Long story short, I got a job there, and I spent the next four years doing loads of trailer graphics. My first job was “Black Swan”, and then came “The King’s Speech”. Those were just trailer graphics for the local UK market. Sometimes my graphics would go to the US, depending on who took the lead.


Trailer graphics for feature films. Courtesy of Nawaz Alamgir.

And during my last year there I noticed that UI in film was kicking in. Mark Coleran’s work was the first UI in film I had ever seen, and I kept thinking how anybody could be so detailed. Back then there weren’t as many resources, and I wanted to get into doing that. I knew it was pretty tough to get in.

About three years ago I left my full-time job at Create and went into the world of freelancing. It got to a point where, even though the company I was working for was good, I had done all I could with trailer graphics. It was time for a change. My idea at the time was to work fewer hours, because at that point I had a few deaths around me which put things in perspective. I took three months off and spent that time traveling.

I freelanced at loads of different places doing small motion design jobs, as well as a few film title sequences, for a year, until I decided to start creating some self-initiated UI and screen design projects. I wanted to build up some work to show that I could do UI work, and also to gain some experience from working on such projects.

I did a few UI videos and put them on Vimeo, and the one that did well was FUI Echo, getting to 45K views on Behance. At the time I wanted to do two things – end title sequences and UI. They are completely different, but still related.


Self-initiated FUI Echo project. Courtesy of Nawaz Alamgir.

I messaged a couple of studios in London. It was rather generic, saying that I’m not looking for work, but I’m a big fan of your work and this is what I’ve done. And literally the next day I got a message from the London-based Fugitive Studios (who have a great portfolio of film titles) saying that they had just gotten a UI project for a film.

This was for “Bastille Day”, which is now called “The Take” because of the terror attacks in France. We did 87 different shots, all in post. Those were really generic CCTV cameras, a screen hacking the bank, CIA computers and that kind of stuff. The director said that he didn’t want generic blue screens for the CIA. He wanted white, but the problem is that the scenes were already lit going into post, and the actors’ faces are lit blue from the screens. So you kind of have to use blue, and we went with a lighter shade of it.


Screen graphics for “Bastille Day”. Courtesy of Nawaz Alamgir.

It took me five weeks to do these shots, working with other guys at Fugitive. They loved it, the director loved it, and I think it looked pretty cool on screen when one of the post houses did the screen replacement. That was my first job done, and literally a few weeks later they called me and said that they had another UI job, for the film “Morgan” – a small, $5M-budget film in the same vein as “Ex Machina”.

One of the owners would go to speak to the director, and come back describing the idea of a UI that kind of lives inside the CCTV and tracks the footage. There was some UI in the film already, and we were doing the overlays and a few other shots. We did the nano-technology tracking shots, with generic-looking stuff.


Screen graphics for “Bastille Day”. Courtesy of Nawaz Alamgir.

Kirill: As we’re talking about “Morgan”, what kind of a brief did you get? Did it talk about who developed the technology – the corporation sponsoring the research or that small group of scientists on that farm – and how that affected the sophistication of the UI?

Nawaz: The only comment from the director was that it had CCTV tracking, and that he wanted it to look cool. We had a rough assembly of the film, and as we watched it, it felt corporate-y and institutional. You’re not really interacting with it.

We knew that the opening shot of the film would be some kind of a UI overlay for a CCTV. The note said that it would have two data sources for their vitals. Morgan’s should be calm and stable throughout the whole thing, while the other one would be spiking red because they’ve been attacked. That was the only brief. We wanted to make it subtle, and a bit futuristic.

This is what we started with. At first I tracked it with the After Effects tracker, but it didn’t look right. So I literally tracked it manually, and it looks more realistic that way. It’s almost like you’re using face tracking on a phone, where it slides all over the place.


Screen graphics for “Morgan”. Courtesy of Nawaz Alamgir.

Kirill: In a case like this, when you don’t have a very detailed brief, did you have multiple iterations with the director’s feedback?

Nawaz: Luckily we hit it on version one. The director liked it, and in part it was also about time and budget constraints. It was already at the editorial stage at the time. Another company did the generic interfaces for the computers, and we added another layer of sophistication to it.

There was a scene that shows how the nano-technology works, and we did that. I remembered watching a talk somewhere by a guy who specializes in medical CGI stuff, and we got him to join us and work with us. I did some tracking, random numbers, emulated uploads. It was a real simple job to do.

Then came the end title sequence, for which the director sent a single image of a DNA strand, saying that he would like the entire sequence to be like that. His initial idea was to get DNA samples of all the actors and put them on screen, but there was neither time nor money to do that. So we just faked it. Each card had its own DNA strand on it, and as time continues, it loads different things. It was quite a simple end title sequence. It wasn’t my first one, but it was my first major one. I was quite happy with it when I saw it in the cinema.


Screen graphics for “Morgan”. Courtesy of Nawaz Alamgir.

Continue reading »

February 23rd, 2017

Releases 2017.H1

Today is the day a bunch of my long-running Swing projects officially come back to life. If you missed the previous post from late last year, those of you who are still in the business of writing Swing applications might be interested in taking a look at the latest releases of Substance, Flamingo and Trident.

Substance is a versatile, extensible and fast look-and-feel that brings a wide variety of lovingly tailored skins, as well as quite a few behavior augmentations to Swing applications. The major themes of release 7.0 (code-named Uruguay) are support for high DPI monitors, reduction of visual noise and improved visual consistency across all core skins.
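
If you haven’t wired Substance into an application before, it’s a one-liner at startup. Here is a minimal sketch, assuming the skin-based look-and-feel class names from earlier releases (in the org.pushingpixels.substance.api.skin package) carry over to 7.0:

```java
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import javax.swing.UIManager;

public class SubstanceDemo {
    public static void main(String[] args) {
        // Swing look-and-feel work belongs on the event dispatch thread
        SwingUtilities.invokeLater(() -> {
            try {
                // Skin class name assumed from earlier Substance releases
                UIManager.setLookAndFeel(
                        "org.pushingpixels.substance.api.skin.SubstanceGraphiteLookAndFeel");
            } catch (Exception e) {
                e.printStackTrace();
                return;
            }
            JFrame frame = new JFrame("Substance 7.0");
            frame.add(new JButton("Skinned by Graphite"));
            frame.pack();
            frame.setLocationRelativeTo(null);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```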

Substance 7.0 also has three new skins. The first one is the Cerulean skin, which was part of the Insubstantial fork maintained by Danno Ferrin:

The other two were added to the Graphite family. The first is Graphite Gold, which uses gold highlights on active elements:

The second is Graphite Chalk, which has increased contrast on all UI elements:

Flamingo is a component suite that provides a Swing implementation of the Office 2007 ribbon container and related components. All Flamingo components have been streamlined to look good under Substance skins, including full high DPI support. Flamingo’s latest release is 5.1, code-named Jileen:
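
A minimal ribbon application is just a frame. This sketch assumes the JRibbonFrame class and its package location from earlier Flamingo releases; ribbon tasks and bands would then be added through getRibbon():

```java
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import javax.swing.UIManager;

import org.pushingpixels.flamingo.api.ribbon.JRibbonFrame;

public class RibbonDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            try {
                // Pair the ribbon with a Substance skin (class name assumed as above)
                UIManager.setLookAndFeel(
                        "org.pushingpixels.substance.api.skin.SubstanceBusinessLookAndFeel");
            } catch (Exception e) {
                e.printStackTrace();
            }
            // An empty ribbon frame; tasks and bands go through frame.getRibbon()
            JRibbonFrame frame = new JRibbonFrame("Flamingo 5.1");
            frame.setSize(600, 300);
            frame.setLocationRelativeTo(null);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```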

Finally, Trident is the animation library that powers all the animations in both Substance and Flamingo. As this library does not have any user-facing features that directly touch pixels, there is no new functionality in this release. Much has changed since I last worked on Trident, and the time has come to remove the built-in support for both SWT and Android. With release 1.4 (code-named Enchanted), Swing is the only UI toolkit supported out of the box.
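
Trident’s core abstraction is the Timeline, which interpolates named JavaBean properties of an associated object over a duration. A minimal sketch, assuming release 1.4 keeps the long-standing Timeline API:

```java
import java.awt.Color;

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

import org.pushingpixels.trident.Timeline;

public class TridentDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JButton button = new JButton("Fade me");
            JFrame frame = new JFrame("Trident 1.4");
            frame.add(button);
            frame.pack();
            frame.setLocationRelativeTo(null);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);

            // Animate the button foreground from blue to red over two seconds;
            // Timeline finds the matching setForeground(Color) setter via reflection
            Timeline timeline = new Timeline(button);
            timeline.addPropertyToInterpolate("foreground", Color.blue, Color.red);
            timeline.setDuration(2000);
            timeline.play();
        });
    }
}
```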

What is next for these libraries?

As the title of this post suggests, I am planning to do two releases a year. What is on that roadmap? I’m not going to make any strong commitments, but these are the rough areas for the next few releases:

  • There once was a time when I hoped that other look-and-feels would adopt some of the pieces of Substance. That time is long gone, and splitting the functionality across multiple pieces is just overhead. The relevant functionality from both the laf-plugin and laf-widget projects is going to be folded back into the Substance code base.
  • It is highly likely that I’m going to move Substance away from “seamless” discovery of plugins based on classloader magic. For example, if you’re using Flamingo in your application, you will need to declare an explicit plugin initialization along with setting Substance as your look-and-feel (see the sketch after this list).
  • Speaking of Flamingo, I’m going to focus exclusively on how those components look and behave under Substance. Third party look-and-feels are not what they used to be. It’s just not worth my time any more.
  • Having said that, there’s much work to be done in Flamingo to provide full support for high DPI monitors. This is the place to follow that work.
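
To give the plugin bullet above a concrete shape, here is a purely hypothetical sketch – neither registerComponentPlugin nor any of this wiring exists in the current release, and the names only illustrate what explicit initialization could look like:

```java
// Purely hypothetical – these calls do not exist in the 7.0 release.
// Explicit plugin registration replacing classloader-based discovery:
SubstanceLookAndFeel.registerComponentPlugin(new SubstanceFlamingoPlugin());
UIManager.setLookAndFeel(new SubstanceGraphiteLookAndFeel());
```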

So, if you still find yourself writing Swing applications, I’d love for you to give the latest release wave a try. You can find the downloads in the /drop folder of the matching GitHub repositories. All of them require Java 8 to build and run.


February 22nd, 2017

Production design of “Black Mirror” – interview with Joel Collins

“Black Mirror” is a show like no other. It’s an anthology set in a world just around the corner, a world that is at times soothingly tranquil, and at times achingly terrifying. It’s a world that shows the great promise of technology, and is not afraid to explore the dark corners of how that technology can undo the fabric of our daily social interactions at work, with friends and within family.

Last year I interviewed Gemma Kingsley about the screens that exposed that technology in the first two seasons of the show. Today I am quite honored to talk with the production designer of “Black Mirror”, Joel Collins. Along with his company Painting Practice, Joel has been with the show since its very beginning. We talk about the beginning of his career, his love of all things physical, how “Black Mirror” started and how it evolved when it transitioned to Netflix, building a universe that spans the different storylines, and working with multiple directors across the arc of each season.

Kirill: Please tell us about yourself and your path so far.

Joel: Back in the ’80s I began to train as a fine artist. Before I started that degree, I made a little book of cartoon illustrations. Somebody took that book to an animator, and they offered me a job on an unknown project called “Who Framed Roger Rabbit”. Incredibly, I turned it down, saying that I wanted to be a fine artist! Of course, as we all know, it turned out to be a seminal live-action animation movie, and when I saw the finished thing I knew I wanted to work in film.

I started out at Henson’s, working on creatures for a variety of productions. Quite rapidly I grew frustrated with being an artist and not being in control. Five years later, as I was working on “Lost in Space”, I had an offer to leave and work as a designer on music videos. That’s when my collaboration with Garth Jennings and Hammer and Tongs started.

We eventually did “Hitchhiker’s Guide to the Galaxy” and “Son of Rambow” together. I’ve always had an interest in other areas too, be it puppets, creatures, animatronics or special effects.

It seemed that I was always getting involved with projects that didn’t feature just standard set design. That’s one of the reasons why “Black Mirror” was so attractive to me as it gave me the chance to do a huge array of stuff, a lot of it actually inventing things and making 3D objects. I’m currently on the fourth season. So far my company Painting Practice has done three seasons and worked on the title sequence, the motion graphics, visual effects design, VFX shots, conceptual production design, product design – it all adds up to a very holistic creative view.


Concept artwork for “Nosedive” episode of season 3, courtesy of Painting Practice.


Graphics for “Nosedive” episode of season 3, courtesy of Painting Practice.

Kirill: Going back to things that you’ve worked on – puppets, creatures, animatronics – those have been replaced almost entirely by digital effects.

Joel: It’s a lost art, and I trained in it just before it was lost! [laughs] What I learned about live action, when I started out 25 years ago, is the same thing that helps me do VFX today. I often find myself working with 3D artists who pretty much live on the computer and don’t have a concept of the real world. They are happy to do it all in pixels, whereas I really enjoy actually making a prop; I love the physicality of it.

It’s the same as when you sculpt a puppet or a creature. If you do a drawing of a creature, it’s obviously two-dimensional. The next step is to make it in clay, very much as you would take it into ZBrush today. It was the same thing with set design. You sketched it, constructed a cardboard model and showed everybody.

Today it’s all in 3D, but the fundamentals are still the same. If you do a drawing, you can’t tell if it’s a 6-foot or a 10-foot door. It’s only when you make it that you understand the scale. But when 3D was starting, people would quite often do 3D sets or spaces without understanding the real world and its issues.

Kirill: For me as a viewer, does it really matter whether it’s done using a physical model or in a pure digital environment, as long as the universe I’m seeing is seamless and believable?

Joel: We are telling stories. If it’s a really engaging story, the character can be as simple as a sock with a couple of buttons for eyes. Ultimately, whatever tool you have, simple or complex, if you use it properly you will get a great result. If it’s a great story, all you need is that sock, a couple of eyeballs and a great voiceover.

Humans are great at filling in the gaps. We listen to stories on the radio, and ultimately we are using our imagination no matter how much information we’re given. We piece together the bits that are missing.

On “Black Mirror” it’s all about a moment in a human’s life, or the emotion that characters go through. Essentially, human nature meets machine, meets technology. We don’t need to compete with big budget movies to achieve that. Get the story right, capture the look of the ‘world’ in which the characters live, which in this show is often a close match to the world we all know, and you are good to go.


Concept artwork and graphics for “Nosedive” episode of season 3, courtesy of Painting Practice.


Concept artwork for “Nosedive” episode of season 3, courtesy of Painting Practice.

Continue reading »

February 21st, 2017

Cinematography of “Legends of Tomorrow” – interview with Mahlon Todd Williams

Continuing the ongoing series of interviews with creative artists working on various aspects of movie and TV productions, it is my pleasure to welcome Mahlon Todd Williams. In the last thirty years Mahlon has worked on a wide variety of feature and episodic productions, most recently on CW’s “Legends of Tomorrow”. In this interview we talk about his background and how he started in the camera department, the industry transition to digital, his work on music videos, the compact schedule of episodic television, and what goes into creating the visual worlds of this show.

Kirill: Please tell us about yourself and how you got into the industry.

Mahlon: I go by my full name, Mahlon Todd Williams. I’m a cinematographer, and I’ve been working in the camera department since the late ’80s.

After finishing film school in Montreal, I came back to Vancouver and got into the union, into what was called the camera trainee program. During two years there they teach you how to be a 2nd assistant camera, or the clapper loader. I did that for about 10 years, hoping to work with people who were already established cinematographers. I thought that was the best way – to work with them and see what they do, and what their process is. It’s nice to see the final product, but there’s always that mystery about how you actually get to that final product. A lot of people start with the same elements, but they end up with a different movie and look. How do you get there? How are you able to control the elements that you are able to control?

A friend of my dad’s was a designer on set. I went to him a couple of times to get some advice about getting into the industry. He didn’t know anything about the camera department, but his main piece of advice was that whatever it was I wanted to do, I should find the people at the top of their game in the industry and figure out a way to train with them. That’s where you learn your art and craft. And that’s basically what I did.

I was able to get in. I worked on some feature films, TV shows, and commercials. I started to segue into doing commercials because my ambition from the very beginning was always to become a cinematographer. When I was working as an assistant, I kept shooting stuff on the side, using things that I had learned from other people on my own projects. Commercials take anywhere between a day and two weeks if it’s a big one. You can disappear for a week or two between working on them, shooting a short film or an indie feature or a music video. I kept bouncing between them, and it was a great way for me to continue building my resume and reel.

Eventually it got to the point where I had enough credits, and I started getting phone calls for jobs. After about ten years as a camera assistant, I stepped away and fully started working as a cinematographer, in the union at least. That’s what I’ve been doing since around 2006. But I’ve been shooting stuff since the late ’80s.

Kirill: Would you say that these days there are more opportunities to get into the field? There are so many indie features and shorts, and there’s such a variety of episodic television compared to when you started.

Mahlon: Yes and no. I think there’s a lot more content being generated. And camera-wise, there are definitely a lot more choices. You can shoot with your iPhone, with a DSLR, with a Red or an Alexa, or a combo. When I got into the field as a clapper loader, it was all film. I learned how to load film and how to thread cameras up; that was my training. If you wanted to work on a feature or on a TV show, there was a fair amount of work in Vancouver at that time. There was a bit of a boom in the ’90s when I rolled back into town.

There wasn’t a tier system back then. These days some shows are shot on DSLRs. We’re shooting “Legends of Tomorrow” on Alexa. In the last four years I’ve shot a bunch of TV movies and music videos on the Reds.

Going back to your question, there is a fair amount more. When I got back into the town, one of the first jobs that I got was with a company that shot karaoke videos. At the time, in the mid-90s, they were not shooting on film. We were shooting on a Beta SP camera, and that’s where I really started to learn how to light and shoot and control the elements for video. I was training on film on bigger shows, and back at that time it was a bit funny.

People that I worked with on those shows thought that I was crazy for working on stuff that was shot on video. That was the beginning of HD, and it was coming. A lot of people in the industry thought that it was never going to take over, that film was here to stay, and that there was no use in even attempting to figure out how to do that. For me, it was a great break to actually light and shoot something in general.

It was a format that was a lot harder to make look pretty, compared to Red or Alexa right now. It wasn’t very forgiving. There wasn’t that 24p sort of gloss to those cameras. You had to work really hard to control daylight. If you shot in the studio or at night, it was a lot easier to make it look good. But as soon as you had to go outside during the day, you really had to figure out the elements that you wanted to put in the shot, the lenses, the diffusion, the smoke – every trick in the book to try and make that format feel closer to film than it actually was.

Kirill: Were you surprised at how quickly digital took over, and how quickly film folded, at least at the scale of cinema history in general?

Mahlon: Yes and no. If you look at it from the financial perspective as a producer, it made total sense that it would go that way. Some of the elements of making a show are actually still easier to do on film, in my view. You don’t have a data management cart. You can run around with a film camera, and as long as you have some place to change mags, you don’t need a lot of elements. You don’t need a lot of power apart from the battery on the film camera. There’s a little bit more machinery for digital that you have to move around.

There are other things. You can play back the shots on digital, and if you’re not 100% sure that you got a shot, you play it back and check the focus. You check if there’s a boom in the shot. Those elements are fantastic – it’s easier to walk away at the end of the day knowing that you’ve got the shot and that it’s usable.

On film you may be pretty sure that you got it. The operator and the director are looking at the performance, and they are all happy with it. But until it comes back processed from the lab, you don’t know if it has a scratch on one of the frames, or if something happened during the processing. There are a million things that could have happened, and you won’t find out for a day or two.

On “Legends of Tomorrow” we actually shot one of the episodes on Super 8. It was a lot of fun, and it had been a while since I’d done that. We’re shooting in Vancouver, so we bought the film in LA, waited for it to be shipped, shot it, then shipped it back to LA, and then had it processed. It took us three days to see the dailies of what we had shot. Normally, if we shoot until midnight, by eight in the morning we’re already getting a still frame from the lab showing us the shot that we’ve got. And by around ten in the morning, you’re already looking at the dailies from the previous night.

When you’re working with film, you have to be 100% sure before you start shooting that everything is ready. There’s a whole process to get ready to do a shot. All the departments know that you have one or two takes on film, while on digital, if you really need to, you can keep the cameras rolling a bit more. And that does happen now – you can roll into a take that is ten minutes long, but if you did that on film, that’s a thousand feet of film. That’s a lot of money to buy, and then to process and transfer.

Kirill: What about the visual quality of digital? Has it caught up to film?

Mahlon: I’d say for the most part it has. There are certain lighting situations, like daytime exteriors, where it depends on the lens and how wide it is. I still find that film has a slightly different feel to it in those situations.

We had shot the Super 8, and it took three days to get it back. We bought it because we really wanted to have that film look. And when you see Alexa footage beside the film footage on small screens, the color and the contrast are very close. It was really hard to tell which was which. We shot some stuff at 200 ASA and some at 500 ASA. That’s where you really saw the difference in grain between the Alexa and the Super 8.

If it was on a bigger screen and you were looking at an inter-cut, I think you would be able to tell the difference. But a quick look at it showed a pretty close match – closer than we thought it would be. We didn’t actually have the time to do a test. We shot it on the day, and when it came back, we looked at it. There are some tests on the website of the company that sold us the film, and we based our estimates of how much grain we would be getting on those tests. I was surprised that the two were so close, and we had to push the film a bit more to give it more of a jump between that and the Alexa.

It was interesting to see how strong the Alexa is, and how far digital has come in giving you that feel.


Courtesy of CW’s “Legends of Tomorrow”.

Continue reading »