February 22nd, 2017
“Black Mirror” is a show like no other. It’s an anthology set in a world just around the corner, a world that is at times soothingly tranquil, and at times achingly terrifying. It’s a world that shows the great promise of technology, and is not afraid to explore the dark corners of how that technology can undo the fabric of our daily social interactions at work, with friends and within family.
Last year I interviewed Gemma Kingsley about the screens that exposed that technology in the first two seasons of the show. Today I am quite honored to talk with the production designer of “Black Mirror”, Joel Collins. Along with his company Painting Practice, Joel has been with the show since its very beginning. We talk about the beginning of his career, his love of all things physical, how “Black Mirror” started and how it evolved when it transitioned to Netflix, building a universe that spans the different storylines, and working with multiple directors across the arc of each season.
Kirill: Please tell us about yourself and your path so far.
Joel: Back in the 80’s I began to train as a fine artist. Before I started that degree, I made a little book of cartoon illustrations. Somebody took that book to an animator, and they offered me a job on an unknown project called “Who Framed Roger Rabbit”. Incredibly, I turned it down, saying that I wanted to be a fine artist! Of course, as we all know, it turned out to be a seminal live action animation movie, and when I saw the finished thing I knew I wanted to work in film.
I started out at Henson’s, working on creatures for a variety of productions. Quite rapidly I grew frustrated with being an artist and not being in control. Five years later, as I was working on “Lost in Space”, I had an offer to leave and work as a designer on music videos. That’s when my collaboration with Garth Jennings and Hammer and Tongs started.
We eventually did “Hitchhiker’s Guide to the Galaxy” and “Son of Rambow” together. I’ve always had an interest in other areas too, be it puppets, creatures, animatronics or special effects.
It seemed that I was always getting involved with projects that didn’t feature just standard set design. That’s one of the reasons why “Black Mirror” was so attractive to me as it gave me the chance to do a huge array of stuff, a lot of it actually inventing things and making 3D objects. I’m currently on the fourth season. So far my company Painting Practice has done three seasons and worked on the title sequence, the motion graphics, visual effects design, VFX shots, conceptual production design, product design – it all adds up to a very holistic creative view.
Concept artwork for “Nosedive” episode of season 3, courtesy of Painting Practice.
Graphics for “Nosedive” episode of season 3, courtesy of Painting Practice.
Kirill: Going back to things that you’ve worked on – puppets, creatures, animatronics – those have been replaced almost entirely by digital effects.
Joel: It’s a lost art, and I trained in it just before it was lost! [laughs]. What I learned about live action, when I started out 25 years ago, is the same thing that helps me do VFX today. I often find myself working with 3D artists who pretty much live on the computer and don’t have a concept of the real world. They are happy to do it all in pixels, whereas I really enjoy actually making a prop, I love the physicality of it.
It’s the same as when you sculpt a puppet or a creature. If you do a drawing of a creature, it’s obviously two-dimensional. The next step is to make it in clay, very much as you would take it into ZBrush today. It was the same thing with set design. You sketched it, constructed a cardboard model and showed everybody.
Today it’s all in 3D, but the fundamentals are still the same. If you do a drawing, you can’t tell if it’s a 6-foot or a 10-foot door. It’s only when you make it that you understand the scale. But when 3D was starting, people would quite often do 3D sets or spaces without understanding the real world and its issues.
Kirill: For me as a viewer, does it really matter if it’s done using a physical model or in a purely digital environment, as long as the universe I’m seeing is seamless and believable?
Joel: We are telling stories. If it’s a really engaging story, the character can be as simple as a sock with a couple of buttons for eyes. Ultimately, whatever tool you have, simple or complex, if you use it properly you will get a great result. If it’s a great story, all you need is that sock, a couple of eyeballs and a great voiceover.
Humans are great at filling in the gaps. We listen to stories on the radio, and ultimately we are using our imagination no matter how much information we’re given. We piece together the bits that are missing.
On “Black Mirror” it’s all about a moment in a human’s life, or the emotion that characters go through. Essentially, human nature meets machine, meets technology. We don’t need to compete with big budget movies to achieve that. Get the story right, capture the look of the ‘world’ in which the characters live, which in this show is often a close match to the world we all know, and you are good to go.
Concept artwork and graphics for “Nosedive” episode of season 3, courtesy of Painting Practice.
Concept artwork for “Nosedive” episode of season 3, courtesy of Painting Practice.
February 21st, 2017
Continuing the ongoing series of interviews with creative artists working on various aspects of movie and TV productions, it is my pleasure to welcome Mahlon Todd Williams. In the last thirty years Mahlon has worked on a wide variety of feature and episodic productions, most recently on CW’s “Legends of Tomorrow”. In this interview we talk about his background and how he started in the camera department, the industry transition to digital, his work on music videos, the compact schedule of episodic television, and what goes into creating the visual worlds of this show.
Kirill: Please tell us about yourself and how you got into the industry.
Mahlon: I go by my full name, Mahlon Todd Williams. I’m a cinematographer, and I’ve been working in the camera department since the late ’80s.
After finishing film school in Montreal, I came back to Vancouver and got into the union, into what was called the camera trainee program. During two years there they teach you how to be a 2nd assistant camera or the clapper loader. I did that for about 10 years, hoping to work with people who were already established cinematographers. I thought that was the best way, to work with them and see what they do, and what their process is. It’s nice to see the final product, but there’s always that mystery about how you actually get to that final product. A lot of people start with the same elements, but they end up with a different movie and look. How do you get there? How are you able to control the elements that you are able to control?
A friend of my dad was a designer on set. I went to him a couple of times to get some advice about getting into the industry. He didn’t know anything about the camera department, but his main piece of advice was that whatever it is that I want to do, find the people that are at the top of their game in the industry, and figure out the way to train with them. That’s where you learn your art and craft. And that’s basically what I did.
I was able to get in. I worked on some feature films, TV shows, and commercials. I started to segue into doing commercials because my ambition from the very beginning was always to become a cinematographer. When I was working as an assistant, I kept on shooting stuff on the side, and using things that I’d learned from other people on my own projects. Commercials take anywhere between a day and two weeks if it’s a big one. You can disappear for a week or two between working on them, shooting a short film or an indie feature or a music video. I kept on bouncing between them, and it was a great way for me to continue building my resume and reel.
Eventually it got to the point where I had enough credits, and I started getting phone calls for jobs. After about ten years as a camera assistant, I stepped away and fully started working as a cinematographer, in the union at least. That’s what I’ve been doing since around 2006. But I’ve been shooting stuff since the late ’80s.
Kirill: Would you say that these days there are more opportunities to get into the field? There are so many indie features and shorts, and there’s such a variety of episodic television compared to when you started.
Mahlon: Yes and no. I think there’s a lot more content that’s being generated. And camera-wise, there’s definitely a lot more choices. You can shoot with your iPhone, with a DSLR, with a Red or an Alexa, or a combo. When I got into the field as a clapper loader, it was all film. I learned how to load film and how to thread cameras up; that was my training. If you wanted to work on a feature or on a TV show, there was a fair amount of work in Vancouver at that time. There was a bit of a boom in the ’90s when I rolled back into town.
There wasn’t a tier system back then. These days some shows are shot on DSLRs. We’re shooting “Legends of Tomorrow” on Alexa. In the last four years I’ve shot a bunch of TV movies and music videos on the Reds.
Going back to your question, there is a fair amount more. When I got back into town, one of the first jobs that I got was with a company that shot karaoke videos. At the time, in the mid-90s, they were not shooting on film. We were shooting on a Beta SP camera, and that’s where I really started to learn how to light and shoot and control the elements for video. I was training on film on bigger shows, and back at that time it was a bit funny.
People that I’ve worked with on those shows thought that I was crazy for working on stuff that was shot on video. That was the beginning of HD, and it was coming. A lot of people in the industry thought that it was never going to take over, that film was here to stay, and that there was no use to even attempt to figure out how to do that. For me, it was a great break to actually light and shoot something in general.
It was a format that was a lot harder to make look pretty, compared to Red or Alexa right now. It wasn’t very forgiving. There wasn’t that 24p sort of gloss to those cameras. You had to work really hard to control daylight. If you shot in the studio or at night, it was a lot easier to make it look good. But as soon as you had to go outside during the day, you really had to figure out the elements that you wanted to put in the shot, the lenses, the diffusion, the smoke – every trick in the book to try and make that format feel closer to film than it actually was.
Kirill: Were you surprised how quickly digital took over, and how quickly film folded, at least at the scale of cinema history in general?
Mahlon: Yes and no. If you look at it from the financial perspective as a producer, it made total sense that it would go that way. Some of the elements of making a show are actually still easier to do on film, in my view. You don’t have a data management cart. You can run around with a film camera, and as long as you have some place to change mags, you don’t need much else. You don’t need a lot of power apart from the battery on the film camera. There’s a little bit more machinery that you have to move around for digital.
There are other things. You can play back the shots on digital, and if you’re not 100% sure that you got a shot, you play it back and check the focus. You check if there’s a boom in the shot – and those elements are fantastic. There’s definitely something fantastic to it. It’s easier to walk away at the end of the day knowing that we’ve got the shot and that it’s usable.
On film you may be pretty sure that you got it. The operator and the director are looking at the performance, and they are all happy with that. But until it comes back processed from the lab, you don’t know if it has a scratch on one of the frames, or if something happened during the processing of it. There are a million things that could have happened, and you won’t find out for a day or two.
On “Legends of Tomorrow” we actually shot one of the episodes with a Super 8. It was a lot of fun, and it’s been a while since I’ve done that. We’re shooting in Vancouver, so we bought the film in LA, waited for it to be shipped, shot it, then shipped it back to LA, and then had it processed. It took us three days to see the dailies of what we had shot. Normally, if we shoot until midnight, by eight in the morning we’re getting a still frame from the lab already showing us the shot that we’ve got. And by around ten in the morning, you’re already looking at the dailies from the previous night.
When you’re working with film, you have to be 100% sure before you start shooting that everything is ready. There’s a whole process to get ready to do a shot. All the departments know that you have one or two takes on film, while on digital if you really need to, you can keep the cameras rolling a bit more. And that happens to be the case now. You can roll into a take that is ten minutes long, but if you did that on film, that’s a thousand feet of film. That’s a lot of money to buy it, and then process and transfer.
Kirill: What about the visual quality of digital? Has it caught up to film?
Mahlon: I’d say for the most part it has. There are certain lighting situations, like daytime exteriors that depend on the lens and how wide it is. I still find that film has a slightly different feel to it in those situations.
We had shot the Super 8, and it took three days to get it back. We bought it because we really wanted it to have this film look. And when you see Alexa footage beside the film footage on small screens, the color and the contrast are very close. It was really hard to tell which was which. We shot some stuff at 200 ASA and we shot some stuff at 500 ASA. That’s where you really saw the difference in grain between Alexa and Super 8.
If it was on a bigger screen, and you were looking at an inter-cut, I think you would be able to tell the difference. But a quick look at it showed a pretty close match. It was closer than we thought it would be. We didn’t actually have the time to do a test. We shot it on the day, and when it came back, we looked at it. There are some tests on the website of the company that sold us the film, and we based our estimates of how much grain we would be getting on those tests. I was surprised that it was so close between the two, and we had to push the film even a bit more to give it more of a jump between that and the Alexa.
It was interesting to see how strong Alexa is, and how far digital has come that it does give you that feel.
Courtesy of CW’s “Legends of Tomorrow”.
January 25th, 2017
Continuing the ongoing series of interviews with creative artists working on various aspects of movie and TV productions, it is my pleasure to welcome Jesika Farkas. In this interview we talk about the beginning of her career in the field of architecture and interior design, the transition into the world of episodic television and feature films, building spaces for actors and collaborating with directors and cinematographers, and the challenges of working on small indie productions. In the middle of it all, Jesika tells the story of “Dixieland” – a movie that follows two characters in the underbelly of Mississippi.
Kirill: Please tell us about yourself and how you got into the industry.
Jesika: My background is in architecture and interior design, but I’ve always loved film. I was working for an architecture firm that sort of imploded, and I had a bit of time on my hands. My brother was in the industry doing storyboarding, and he was approached to design a short. He wasn’t able to do it at the time, and he put them in touch with me. That was in 2010.
I always loved the concept of visual storytelling, and I accepted the gig. It was “Mother’s House”, the first production I designed. It was a nine-day shoot with some really lovely people, Ingrid Price and Davis Hall who wrote, produced and directed it. And from there it catapulted into the field, and since then I’ve been doing feature films, TV and shorts.
Kirill: Was there anything surprising when you joined this industry that was new for you?
Jesika: This actually happens quite a bit. I live in New York and I split my time between upstate New York and New York City. There’s a firm called “Roman and Williams”, and that couple actually started off in the film industry. They did a bunch of films with Ben Stiller, and wound up designing his house. From there they made a switch to the other direction and now they have a very established and reputable architecture / interior design firm.
And sometimes it happens in the reverse direction. It’s the visual medium that you’re playing with. There are spaces, and you’re conceiving of them. It can be for actors to come through, and they’re constructed in a way that lasts for a temporary moment. Or it can be for more permanent enjoyment. But they have a similar goal that you’re going after.
It was a very easy transition for me to take up film and create the environments that help to tell the story. In architecture you’re designing things that last a lot longer, but they might not have as much playfulness or creativity going into them. I actually still do a little bit of that as well, straddling the two worlds of reality and film.
Concept artwork for “Dixieland”, courtesy of Jesika Farkas.
Kirill: If you asked me twenty years ago how movies are made, I’d tell you that somebody puts a camera on the shoulder, points it at the actors that happen to be in some existing space, somebody shouts “Action!” and that’s it. But it’s not that simple. Everything needs to be designed, to create environments that tell the story visually.
Jesika: Absolutely. It’s very much a collaboration between the production designer, the director and the director of photography to conceive of a space. It always starts with a palette and feeling of the film, and being able to visually convey what is needed for the story. You work with that. But it does seem like it could be quite easy.
But ultimately if you really want to convey something fluid and you want to support the story, it needs to be designed. And you don’t even notice that. Sometimes it can be very much in your face, but it can also be very subtle. There’s needed intervention in the design process, for sure.
Kirill: What is your experience with episodic television so far?
Jesika: A lot of it is done on a soundstage. You wind up building the primary set, something that you come back to time and again. And you fill in with locations to give everyone a break and be able to diversify the visual element of the whole series. But the anchor is the staged set, and it can be quite different from independent films, where staged sets are an afterthought. There you’re really out in the field shooting and making locations work.
For episodic television it’s often a very planned larger set that you curate and build, and it’s a basecamp that you come back to time and again. That becomes the anchor of a series.
Kirill: Is that a production consideration that you have however many episodes in the season, and you can’t close down a particular location for an extended period of time?
Jesika: That’s right. Your set is there on the sound stage, and it might be sitting there with nothing happening when you’re out in the field on the various locations.
Kirill: Do you have a preference between working on location and on stage sets?
Jesika: If you go back to earlier film days, everything was on a stage and everything was controlled. You think of old Hollywood, you think of a giant cavernous place where you could create sprawling sets and everything was at your disposal. But of course it’s a very costly thing. This is where finding locations helps.
You wind up working with those spaces, and transforming them sometimes or just leaving them as is and going with the feeling. That depends on being able to find a location that suits your need. I think there’s an advantage to shooting on a stage for sure. You can create everything exactly as you want it to be. And there’s also an advantage to being on location and finding things that are already existing and beautiful. You have the ability to invite natural daylight into spaces and just augment that. You see the passage of time in those spaces, and find things that are sometimes difficult to recreate.
January 17th, 2017
Starting from OS X El Capitan (10.11), there’s a new default system font in town – San Francisco. And it came with a very big underlying change, as detailed by Craig Hockenberry:
Apple has started abstracting the idea of a system font: it doesn’t have a publicly exposed name. They’ve also stated that any private names are subject to change. These private names all begin with a period: the Ultralight face for San Francisco is named “.SFNSDisplay-Ultralight”. To determine this name, you need to dig around in the font instances returned by NSFont or UIFont; it’s not easy by design.
The motivation for this abstraction is so the operating system can make better choices on which face to use at a given weight. Apple is also working on font features, such as selectable “6” and “9” glyphs or non-monospaced numbers. It’s my guess that they’d like to bring these features to the web, as well.
Even though the underlying .otf files are still in /System/Library/Fonts, San Francisco is no longer exposed via the regular APIs that web and desktop developers have grown used to. Specifically for Swing developers (of which there may not be many, so at some point it will kind of take care of itself), passing “San Francisco” to the Font constructor ends up using the previous default – Lucida Grande.
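This fallback is easy to observe directly. A minimal sketch (the resolved family name depends on the platform and JDK, so treat the printed output as illustrative):

```java
import java.awt.Font;

public class FontFallback {
    public static void main(String[] args) {
        // "San Francisco" is not a registered public family name,
        // so AWT silently resolves the request to a fallback font.
        Font f = new Font("San Francisco", Font.PLAIN, 13);
        // The requested name is preserved on the Font object...
        System.out.println("requested: " + f.getName());
        // ...but the actual resolved family differs - Lucida Grande
        // on OS X, the "Dialog" logical family on other platforms.
        System.out.println("resolved family: " + f.getFamily());
    }
}
```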
JavaFX is already doing the right thing, using San Francisco as the default UI font on El Capitan and Sierra. Swing’s legacy is to have each look-and-feel decide which font to use, and I was expecting the “System” look-and-feel, which maps to Aqua, to be using the right font family on the latest OS releases. That is not the case as I’m writing this entry, and Swing apps on both El Capitan and Sierra are still using Lucida Grande on both 8u112 and 9-ea.
Last week Phil Race pointed me to this issue that tracked syncing up the internal implementation details of glyph mapping between JavaFX and AWT. That issue has been fixed in early access builds of JDK 9, and is slated to be available in JDK 8 u152 scheduled for October 2017. At the present moment there is no public API to get either a name or a font instance itself that will be mapped to Lucida Grande on 10.10 and earlier, and to San Francisco on 10.11 and 10.12. The only available solution is quite brittle as it depends on the internal naming conventions exposed by the underlying OS:
- .Helvetica Neue DeskInterface on El Capitan (10.11)
- .SF NS Text on Sierra (10.12)
Note that you need a leading dot in both cases, and that this only works on early access builds of JDK 9 at the moment:
In this screenshot the second button is using new Font(".SF NS Text", Font.PLAIN, 24) while the rest are rendered with Lucida Grande. The most noticeable differences are in the curvy strokes of “e”, “g”, “5” and “9”, as well as the straight leg of “a”.
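Until a public API lands, the OS version check has to be done by hand. Here is a sketch of that brittle workaround, using the internal names above; note that the dotted names, and the os.version parsing, are internal details that may break in any OS or JDK update, and the mapping itself only works on the early access JDK 9 builds mentioned earlier:

```java
import java.awt.Font;

public class SystemFonts {
    // Returns a font that maps to the private system font on 10.11 / 10.12,
    // and falls back to Lucida Grande everywhere else. The dotted names are
    // internal to the OS and subject to change without notice.
    public static Font getSystemFont(int size) {
        String os = System.getProperty("os.name");
        String version = System.getProperty("os.version");
        if (os.startsWith("Mac OS X")) {
            if (version.startsWith("10.12")) {
                // Sierra
                return new Font(".SF NS Text", Font.PLAIN, size);
            }
            if (version.startsWith("10.11")) {
                // El Capitan
                return new Font(".Helvetica Neue DeskInterface", Font.PLAIN, size);
            }
        }
        // 10.10 and earlier, or non-Mac platforms
        return new Font("Lucida Grande", Font.PLAIN, size);
    }
}
```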
Ideally, there’d be an officially supported way to use the right font on OS X / macOS, either in the form of some kind of static Font API, or as a synthetic font family that maps to the underlying system font on all supported platforms. Phil has filed a bug to track the progress on that front.