December 26th, 2013

In the world of unlikely NFL playoff scenarios

Random musings as the NFL gets into the last week of the regular season, with both the Saints and the Cardinals at 10-5 and only one of them going to the playoffs. Since they’re not playing each other, we may see an 11-5 team miss the playoffs this year. And only three years ago the Seahawks won their division and went to the playoffs at 7-9.

There’s some math laid out in here [PDF] that looks at the schedule balance of an NFL season, and while in most cases we do see the best teams from each conference going to the playoffs, I’ve always wondered what the worst-case scenario is.

There are two extremes here. The first is how bad you can be and still get to the playoffs. The second is how good you can be and still watch the post-season on TV.

The first one is simple. Each conference has four divisions, and every division is guaranteed a spot in the playoffs (aka Seahawks ’10). If you’re not familiar with how the regular season schedule is determined, for the purposes of this first extreme it’s enough to know that each team plays the three teams in its division twice (home / road), and then 10 games elsewhere in the league. What’s the absolute worst? Well, you can get into the playoffs with zero wins. That’s right, zero. How? Imagine a division with four really bad teams, with every game inside that division ending 0-0 (or in any tie), and every team in that division losing all 10 of its non-division games. You’d eventually get to a very awkward coin toss to determine which of these four teams takes “first place” in the division. Unlikely? Extremely. Possible? Of course.
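
Just to make the standings arithmetic concrete, here’s a minimal Python sketch (the team labels are made up) using the winning-percentage formula the league ranks standings by, where a tie counts as half a win:

```
# A minimal sketch of the all-ties division, with hypothetical team labels.
# NFL standings rank teams by winning percentage, where a tie counts
# as half a win and half a loss.

def win_pct(wins, losses, ties):
    return (wins + 0.5 * ties) / (wins + losses + ties)

# Every team ties its 6 division games and loses its 10 others: 0-10-6.
division = {team: (0, 10, 6) for team in ("A", "B", "C", "D")}

for team, (w, l, t) in division.items():
    print(team, f"{w}-{l}-{t}", f"win pct {win_pct(w, l, t):.4f}")

# All four finish at .1875 with identical records, so head-to-head,
# division record and the rest of the tiebreakers are identical too,
# and the division title comes down to the final step: a coin toss.
```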

Now to the other extreme. How many games can you win and still miss the playoffs? The answer is 14 (out of the 16 games you play) – if my math is correct, of course. Let’s look at the scheduling rules more closely.

When the league expanded to 32 teams, it brought a very nice balance to the divisions themselves and to the schedule. Two conferences, four divisions each, four teams each. All hail the powers of 2! By the way, there’s additional symmetry in that you get to play / host / visit every other team every 3/4/6/8 years (depending on the division pairing).

Back to who gets to the playoffs. Every division sends its first-place team, with two more spots (wildcards) going to the two best teams in the conference after the division leaders are “removed” from the equation. This means you can be a really good team and still not make it into those six. The conditions are quite simple, really (as one of the Saints / Cardinals will see this Sunday). The first is that an even better team in your division takes first place. The second is that there are two better teams elsewhere in the conference (such as the 49ers, who have already secured the first wildcard spot in the NFC).
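
As a toy illustration of that selection rule – with made-up team names and win totals, and the real tiebreaker procedures waved away – here’s how one conference’s six playoff spots fall out:

```
# A toy sketch of the playoff selection rule for one conference.
# Team names and win counts are invented; ties and the NFL's actual
# tiebreaker procedures are ignored for brevity.

conference = {
    "East":  [("E1", 12), ("E2", 9),  ("E3", 6), ("E4", 3)],
    "West":  [("W1", 13), ("W2", 11), ("W3", 7), ("W4", 2)],
    "North": [("N1", 10), ("N2", 9),  ("N3", 8), ("N4", 5)],
    "South": [("S1", 11), ("S2", 10), ("S3", 9), ("S4", 4)],
}

# Each division sends its first-place team...
winners = [max(teams, key=lambda t: t[1]) for teams in conference.values()]

# ...and the two best remaining teams take the wildcard spots.
rest = [t for teams in conference.values() for t in teams if t not in winners]
wildcards = sorted(rest, key=lambda t: t[1], reverse=True)[:2]

print("Division winners:", winners)   # E1, W1, N1, S1
print("Wildcards:", wildcards)        # W2 (11 wins), S2 (10 wins)
```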

Let’s look at the numbers now. How can we get to a 14-2 record and still miss the playoffs?

In the following scenario we have NFC East as NE, NFC West as NW, NFC North as NN, NFC South as NS, and their counterparts in the AFC as AE, AW, AS and AN. Let’s choose three random divisions in the NFC, say NE, NN and NS.

A team in NE plays 6 games in NE, 4 in NN, 4 in AW and 2 in NS/NW. A team in NN plays 6 games in NN, 4 in NE, 4 in AN and 2 in NS/NW. A team in NS plays 6 games in NS, 4 in NW, 4 in AE and 2 in NN/NE.

In general, you meet every team in your division twice, every team in one other division in your conference once, every team in one division of the other conference once, and then one team from each of the two remaining divisions in your conference – the teams that finished in the same place as you did last year.
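
Here’s the same 16-game breakdown as a quick sketch (counts only; which specific divisions rotate in depends on the year):

```
# A sketch of one team's 16-game slate under these rules.

slate = {
    "own division, home and away":                          2 * 3,  # 6 games
    "every team in one rotating same-conference division":  4,
    "every team in one rotating other-conference division": 4,
    "same-place finishers from the two leftover divisions":  2,
}

assert sum(slate.values()) == 16
for opponents, games in slate.items():
    print(f"{games:2d}  {opponents}")
```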

What we’re trying to do is to get as many wins as possible for the #2 team in each one of our divisions (NE, NN and NS). There are only two wildcards available in each conference, and we don’t care what happens in NW or the entire AFC.

For each pair of teams in NE, NN and NS we want to maximize the number of wins while keeping in mind that they play each other. This year each team from NE plays each team in NN once. And each team in NS plays one team in NN and one team in NE – based on its position in the division last year.

Let’s look at NS first. Teams #1 and #2 get four wins each playing #3 and #4 in their division. Then they split the wins in their own two games, getting to a 5-1 record each. They then win all 4 games against NW, getting to 9-1, and all 4 games against AE, getting to 13-1. Finally, assuming that our two teams finished last year at positions that get them scheduled against NE / NN teams that will not finish at #1 / #2 this year, both teams get to 15-1 – all without taking a single win away from the four teams in NE / NN that we’ll be looking at shortly.

Now to NE / NN. We’ll look at NE, while the same logic applies to NN. Once again, teams #1 / #2 win all four games against #3 / #4 and split their own two matches, getting to 5-1 each. Now they play four games against NN. They win both games against the #3 / #4 teams, getting to 7-1 each, and split the wins against #1 / #2. We need to split those wins in order not to take away “too many” wins from the two top teams in NN. So we end up at 8-2. They then win all four games against AW, getting to 12-2, and take one win against NW, getting to 13-2. Finally, they have one game to play against NS. Applying the same selection logic, the best scenario for us is to get them scheduled against a team that is not at #1 / #2 this year (but at the same position they were last year), which gets both teams to 14-2.

And the same goes for the first two teams in NN, getting them both to 14-2 – which is why we needed to split the NE/NN games between the #1/#2 teams.
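
To double-check the arithmetic, here’s a sketch that tallies the constructed records slice by slice (the slicing and labels are mine, following the paragraphs above):

```
# A sketch tallying the constructed records; each entry is the
# (wins, losses) contributed by one chunk of the schedule.

ns_top = [        # NS teams #1 and #2
    (4, 0),       # sweep NS #3 and #4 (two games each)
    (1, 1),       # split the head-to-head pair
    (4, 0),       # sweep NW
    (4, 0),       # sweep AE
    (2, 0),       # beat NE / NN same-place teams outside their top two
]

ne_top = [        # NE teams #1 and #2; the NN pair mirrors this
    (4, 0),       # sweep NE #3 and #4
    (1, 1),       # split the head-to-head pair
    (2, 0),       # beat NN #3 and #4
    (1, 1),       # split against NN #1 / #2, leaving them their wins
    (4, 0),       # sweep AW
    (1, 0),       # beat the same-place team in NW
    (1, 0),       # beat an NS team outside the top two
]

def record(slices):
    return "{}-{}".format(sum(w for w, _ in slices), sum(l for _, l in slices))

print("NS #1 / #2:     ", record(ns_top))   # 15-1
print("NE / NN #1 / #2:", record(ne_top))   # 14-2
```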

Now we have the #2 team in NS at 15-1 and the #2 teams in both NE and NN at 14-2 each. One of them will have to stay out of the playoffs. Unlikely? Extremely. Possible? Of course.

Waving hands in the air, it feels like the first scenario is much less likely to happen, given how few ties we usually see in the league. Even though it can happen in any one of the eight divisions, while the second scenario requires three divisions in the same conference to cooperate, it’s still much less likely. So what if we remove the ties from the equation?

A 4-team division has every team playing every other team twice, so all in all you have 12 intra-division games in every division. If no game ends in a tie, the most extreme case is that every team wins 3 and loses 3 of its division games, then loses all 10 remaining games, for a 3-13 record each. One of them will go to the playoffs. That also answers the question of how many games you can lose and still go to the playoffs. In the previous scenario (no wins), every team in the division sits at a 0-10-6 record, so it’s “only” 10 losses. With this scenario you have a 13-loss team going to the playoffs.
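
And a quick arithmetic check of this no-ties scenario:

```
# Quick arithmetic check for the no-ties version of the scenario.

teams = 4
division_games_each = (teams - 1) * 2                     # 6 games per team
total_division_games = teams * division_games_each // 2   # 12 games overall

# One win per game and an even spread: every team goes 3-3 in the
# division, then loses its 10 non-division games.
wins = total_division_games // teams                      # 3
losses = (division_games_each - wins) + 10                # 13
print(f"{wins}-{losses} for all four; one of them still wins the division")
```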

It would appear that this new extreme is more likely to happen, as it only involves the teams in a single division, while the other one (14-2 missing the playoffs) involves teams in three divisions.

Now two questions remain. Can we get to a 15-1 record and still stay out of the playoffs? And, more importantly, is there a fatal flaw in the logic outlined above?

December 26th, 2013

Metric-driven design

One of my recent resolutions (not for 2014, but for mobile software in general in the last few months) was to evaluate designs of new apps and redesigns of existing apps from a position of trust. Trust that the designers and developers of those apps have good reasons to do what they do, even if it’s only one or two steps on the longer road towards their intended interaction model. But Twitter’s recent redesign of their main stream keeps on baffling me.

Setting aside the (somewhat business-driven, from what I gather) decision of “hiding” the mentions and DMs behind the action bar icons and adding the rather useless discover / activity tabs, I’m looking at the interaction model in the main (home) stream.

Twitter never got to the point of syncing the stream position across multiple devices signed into the same account. There is at least one third-party solution for that, which requires third-party apps to use its SDK and users to use those apps. The developer of that solution has repeatedly stated that, in his opinion, Twitter is not interested in discouraging casual users who check their streams every so often. If you start following random (mostly PR-maintained) celebrity streams, it’s easy to get lost in the Twitter sea, and when you check in every once in a while and see hundreds – if not thousands – of unread tweets, you might start feeling that you’re not keeping up.

When I reached the aforementioned decision a few months ago, I uninstalled all the third-party Twitter apps I had on my phone and switched to the official app. It does a rather decent job of remembering the stream position, as long as – from what I could see – I check the stream at least once every 24 hours. When I skip a day, the stream jumps to the top. It also seems to do that if the app refreshes the stream after I rotate the device, so some of this skipping can be attributed to bugs. But in general, if I’m checking in twice a day and am careful not to rotate the device, the app remembers the last position as it loads the tweets above it.

In the last release Twitter repositioned the chrome around the main stream, adding the discover / activity tabs above it and the “what’s happening” box below. While they encourage you to explore things beyond your main stream, it also looks like they’re aware that these two elements take up valuable vertical space during scrolling. And the current solution is to hide these two bars as you scroll down the stream.

And here’s what baffles me the most. On one hand, the app remembers the stream position, which means that I need to move the content down to get to the newer tweets (as an aside, with “natural” scrolling I’m not even sure if this is scrolling up or scrolling down). On the other hand, the app hides the top tabs / bottom row when I move the content up.

Is the main interaction mode with this stream getting to the last unread tweet and then going up the stream to skim the unread tweets, as hinted by the remembered stream position? Or is it getting bumped to the top of the stream and scrolling a few tweets down just to sample the latest entries, as hinted by hiding the two chrome rows to provide more space during the scrolling?

I don’t want to say what the app should or shouldn’t do in the stream (as pointed out by M. Saleh Esmaeili). It’s just that I can’t figure out what the designers intend the experience to be.

A few days ago The Verge posted an article about metric-driven design in Twitter mobile apps, and for me personally the saddest part of this article is how much they focus on engagement metrics and how little the guy talks about informed design. Trying to squeeze every possible iota of “interaction” out of every single element on the screen – on its own – without talking about the bigger picture of what Twitter wants to be as a designed system. Experiments are fine, of course. But jacking up random numbers on your “engagement” spreadsheets and having those dictate your roadmap (if one can even exist in such a world) is kind of sad.

When every interaction is judged by how much it maximizes whatever particular internal metric your team is tracking, you may find yourself dead-set on locating the local maximum of an increasingly fractured experience, with no coherent high-level design and no clear path to the next level. Or, as Billie Kent in Boardwalk Empire says, “always on the move, going nowhere fast.”

December 6th, 2013

Production design of “The Wolverine” – interview with François Audouy

Make a list of your top favorite tentpole productions of the past 15 years, and you can count on having François Audouy be part of at least one of them. He started his career as an illustrator and concept artist on movies such as Men In Black, Pearl Harbor, Spider-Man, Minority Report and Avatar, shifted to the art director position on Transformers, Watchmen and Charlie and the Chocolate Factory, and then moved on to be the production designer of Abraham Lincoln: Vampire Hunter and the recently released The Wolverine. In this interview François talks about his work on The Wolverine that brought him back to his days of reading comic books growing up, researching the history, art and architecture of Japan, designing and building the main sets for the movie, and collaborating with visual effects departments on big-budget sci-fi productions.



François Audouy

Kirill: Please tell us about what you’ve been doing lately.

François: I was the production designer on Wolverine, which was an incredibly exciting and rewarding project. It took seventeen months to complete from start to finish. And I just finished another movie, Dracula Untold, and I’m very excited about Wolverine coming out on DVD.

Kirill: How far did you get into the X-Men universe preparing for the movie? Did you treat this movie as a standalone production not necessarily connected to the rest of the franchise?

François: When I first heard about the project, the only thing I knew about it was that it was set in Japan. And to be honest, that was the thing I was the most excited about. It was a dream of mine to design a movie set in Japan. Every movie is an opportunity for a designer to become an expert in something. So I really thought it was exciting to learn more about Japanese culture and architecture. You’re always looking for an opportunity to learn something.

Having said that, I was also really aware of Wolverine because I was born in the 1970s, and I’m pretty much the same age as Wolverine. I remember the comic books from the late 1980s, which, looking back, was probably the golden age of Wolverine. My feeling was that the movies featuring Wolverine hadn’t really tapped into a lot of what I loved about those comics, and a lot of the detail of the Logan character who’s so interesting. And when I read the script, I thought that it was a great story where we really get a chance to get to know Wolverine a little bit better, and we get to focus on him for an entire movie without the distractions of all the tertiary characters. That was very exciting.


Yashida cottage. Concept illustration over location photography. Courtesy of François Audouy.

Kirill: Japanese culture is rather closed to outsiders. How did you approach your research phase?

François: It was kind of terrifying in the beginning, honestly. It’s so different, and so deep. There’s so much to learn.

First thing I did was to hire a researcher in Los Angeles to pull images and references. And Jim [Mangold, director] decided early on that he wasn’t interested in making a movie with cliches, like little temples or bamboo forests. I went hunting for settings and places that felt unique and different. One thing that I’m really proud of in the film is that we have this intimate story, but it also takes them through places that are understated, grounded in the real, and not so Hollywood-phony [laughs]. I was trying to do something that felt real.

What helped tremendously was that I had the art department in Tokyo, and a group of people who were helping me with the locations. I had a great location manager. I scouted many places in Japan, in the mountains, north of Tokyo, Nagasaki, Hiroshima, Kobe, Kyoto, Osaka. I went there six times, and over the course of the travels every time I learned more about the culture, as I was surrounded by my Japanese crew going to all these interesting places.


Left – Tokyo love hotel, set built on stage. Right – ice village, set built on location. Courtesy of François Audouy.

Kirill: The family compound is one of the central sets in the movie. How much time did you spend on it?

François: That was probably our biggest set, and it was my favorite. It was a very immersive set, a set that you walk into and it feels totally real, even though it was built on a soundstage. Jim was referencing and inspired by “Rear Window” with Jimmy Stewart. It had an apartment looking out into the courtyard, and you can see the world outside and all of the different stories happening. And he wanted the Yashida compound to have the same feel, where you could look and have these views across the central courtyard, and see Mariko’s world, and Yashida’s chambers, and the story dynamic of this complicated family.

I created a set that was pretty much in-camera. We had a big central courtyard with a water element, and all of the interiors, and it was very much an in-camera place. And it was a very hyper-modern Japanese aesthetic that was grounded and rooted in the ancient flow of Japanese architecture.

And to answer your question, it probably took five or six months to design that.


Yashida compound. Set built on stage. Courtesy of François Audouy.

Kirill: And the other big set for the final sequence in the science lab was done with some digital extensions?

François: It was originally scripted as a cave [laughs]. But I wanted to bring it back to a more Japanese setting. The movie has a little bit of everything – an old cottage, a billionaire’s compound, an ancient Buddhist temple in Tokyo – and I thought it would be really cool to have a modern industrial lab.

This set was pretty big, 42 feet tall. We built two floors of the tower that was supposed to be 30-40 stories high. The idea was to create an action sequence that happened vertically. Normally these sequences are very horizontal, and we wanted to go down and up instead of just horizontal. We kept redressing our two floors as different floors going down, and extending those floors with the digital set extensions.


Yashida lab. Set built on stage. Courtesy of François Audouy.

Kirill: You’ve worked on quite a few other VFX-heavy productions. How is the balance of responsibilities between you as the production designer and the visual effects supervisor working for you? Are you losing some of the control over the final look of the digitally augmented scenes?

François: You’re right, as a lot of these films are becoming more synthetic, relying on digital set extensions and digital building out of environments. The studios and the directors realize that too, which is why we bring in the visual effects supervisors quite early in pre-production, so that they can be involved in what we’re doing. I try to keep a very close collaboration with VFX supervisors, and I also try to make sure that I design the digital sets – or sets with digital extensions – in the same way that I’m designing a set I’m building. I don’t really see a distinction whether it’s going to be digital or physical. It doesn’t matter to the audience. They don’t know and they don’t care what’s digital or what’s physical. I really treat that job in the same way.

I work hard to have everything designed and figured out before I leave the production. We hand over all the assets to visual effects for the assembly in the same way that I would hand over designs to a construction crew. They would get a full set of construction drawings, paint references and color ways, with everything figured out before you go and build the set.

Kirill: Although the difference is that for physical construction you’re still on the project, but for digital in post-production you are, for the most part, gone.

François: That’s true, and that’s why it’s important to have a close relationship with the visual effects supervisor who will be overseeing the final construction of the digital assets.

One thing that was great about Wolverine was that Jim had me come by the editing suite at Fox every two weeks over the course of six months. He showed me new things every two weeks, and it was a really great opportunity. He pulled me in, valued my opinion and kept me as a part of the team.

Kirill: And the last question is about 3D productions. How is it working out for you? Is it here to stay, perhaps confined to the tentpole sci-fi productions, or do you see it fading away?

François: I think stereo’s here to stay. I like it, but I don’t like it for all films [laughs]. It can be a great added experience to certain films, and kind of a distraction to others. It’s here to stay, but I don’t think we’ll be doing all films in stereo.


Yashida compound. Set built on stage. Courtesy of François Audouy.


And here I’d like to thank François Audouy for taking the time out of his schedule to answer a few questions I had about his work on The Wolverine and about his craft in general. Special thanks to Mitzye Ramos at Think Jam for putting me in touch with François. The movie is available on DVD, Blu-Ray and in your favorite digital distribution channels.

December 5th, 2013

The craft of screen graphics and movie user interfaces – interview with Paul Beaudry

Continuing a series of interviews with designers and artists who bring user interfaces and graphics to the big screen, it’s my pleasure to host Paul Beaudry. You have seen his work on “Avatar”, “The Hunger Games” and “Ender’s Game”, and in this interview Paul talks about what goes into designing screen graphics, drawing inspiration from the latest explorations in real-world software and hardware, holographic and 3D displays as a possible evolution of human-computer interaction in the next few decades, the challenges of using technologies such as Google Glass or Siri in film, the ongoing push to create more detailed and elaborate sequences, and his thoughts on working remotely with the current crop of collaboration tools.


Kirill: Tell us about how you started in the field of motion graphics.

Paul: I started out wanting to be an AVID editor, editing documentaries and similar productions. As soon as I finished school and got into the industry, I found out that what I liked the most was coming up with graphics for documentaries and shows that I ended up working on. From there I started teaching myself motion graphics, moving into opening title sequences and getting some cool opportunities.

There are really good communities online for learning. At the time, for me it was talking with other people at the great mograph.net site, talking about how to get into the industry, the challenges and technical issues. That’s how I got my start. The software itself is not crazy, and a lot of people learn how to use, for example, Photoshop even though they’re not professional graphic designers or photographers. The way I look at After Effects and the other 3D tools that we use is that they are more complex than Photoshop, but not so much that it’s impossible to learn them on your own. It was years going crazy, huddled over my computer, teaching myself in every bit of free time I had during late nights, not having much of an outside life for sure [laughs].

Kirill: That’s on the technical side of things. What about the design side?

Paul: I hope I’m still learning as I go. It was a lot of the same, learning design and technical stuff together hand-in-hand. I think it’s important, actually. A lot of the conversations we had online were about getting critiques of your work, moving forward on the design and technical sides at the same time.

Kirill: How did you start building out your portfolio?

Paul: A whole bunch of spec pieces. My interest at the start was not really in UI design for film. At the time not that many people even knew that could be a full-time job. I was more on the television side of things, doing commercials, title sequences, more traditional motion graphics. And it was also doing my own stuff, building up a reputation to get real projects.

Kirill: What were those first real projects?

Paul: I was working with the company Frantic Films on a half-hour documentary show for the Discovery channel. I don’t think it ran in the US; it was a Canadian thing. I got a chance to do the opening title sequence for them, expressing my interest in doing that, and they gave me my first shot. From there I started doing a lot more work for them, and some stuff for HBO and A&E a few years later, and as a freelancer I kind of branched out from there.

Kirill: What’s the story of the iOS music app Anthm that you have in the portfolio section on your site?

Paul: Anthm came out in February 2012, and actually the name is now Jukio because we ran into a bit of a legal issue. That’s something I did with my friends in our free time. It’s me, Tyler Johnston who is a graphic designer, and Ben Myers who I worked with on Avatar. We were having drinks at a bar, and we were annoyed at the music they were playing. So we came up with an app for iOS that lets you request and vote on the music playing in your location from your phone, like a jukebox with millions of songs.

Kirill: Perhaps jumping a bit forward, your work for movie UIs is the tip of the iceberg above the surface, with playback loops or basic interactivity that mostly focuses on the presentation layer. And on the other hand, creating a real application that people run on their devices forces you into the full design and implementation cycle, complete with crashes, bug fixes, feature requests etc.

Paul: My first passion is to create fantasy user interfaces for film, but at a certain point you want to make something that’s real, something that a real user can use. Something that doesn’t only look like magic, but hopefully feels magical to use. Not that Jukio is earth-changing or anything, it’s simply a music app, but there are small UX choices there that feel magical to us and that’s not always something you can do in film. It’s definitely something that we’re really interested in – getting real feedback from people, making something real that can be used to solve real-world problems.

I should mention that none of the films I can talk about right now had real software in them – everything that’s been released was done in post. But the company I’m working with now, G Creative Productions, has the ability to create real software that’s used on set by the actors while they’re filming. It’s all done using live playback, so it’s not a post-production thing at all – they create real software that the actors can tap, change on the fly and really interact with while they’re filming.

Kirill: And you’re focusing mostly on presentation and interaction part?

Paul: Absolutely. We fake a lot of the underlying pieces. But it’s still more involved than what you do in the post process. There’s a lot more interaction involved, and I think there’s actually a lot more thought that has to go into it. There’s a programming level involved that isn’t there when you do it in post.

Kirill: Is it more challenging to do something that actors interact with on the live set?

Paul: I’m definitely still more comfortable with post production, as I’m still on my first couple of films doing playback on sets. There are benefits to both ways of doing it. On the playback side it just looks more realistic. In post production there are different challenges to consider, like the interactive lighting – how the light from the screen will reflect off of someone’s face, for example. If you do it in post, it becomes a big job to fake the light created by these screens, whereas in playback it’s not really a concern anymore.

In most cases it’s a lot more practical, as the director can actually see the screens while he’s filming on set. On the other hand, in post we can do all kinds of crazy stuff like holograms, the craziest ideas we want and there’s really nothing stopping us from doing it.

Kirill: Well, except for budget and time.

Paul: Time and budget, yeah. We push those pretty hard [laughs]. I think I still prefer doing things in post where we can create this crazy stuff. That’s really where a lot of fun is – envisioning these really far futuristic pieces of technology without concern for what’s technically possible.

Kirill: Avatar is your only released movie so far that did 3D effects. Did that add a lot of complexity to what you did?

Paul: Absolutely. Avatar was the only one that we did that way. It was definitely challenging. It looks great in the end, but it’s not something I’m super-anxious to do again, I’ll say that [laughs].

It’s a lot more interesting to think about something in 3D space. If the user actually has to use it this way, how would you use the layering to enhance user experience, and how would you use the layering in film to help tell the story better. And at the time, getting the technical aspect across was hard because the software tools didn’t have much stereo support. I was at Prime Focus for Avatar, and they had great custom-made tools that would take our After Effects layers and disperse them in 3D space within the scene, really making the compositors’ lives a lot easier.

I worked with Ben Myers and we had basically wrapped up a month and a half early, getting everything into position to be approved by James Cameron, and seeing the end in sight. And we decided that for a lot of hero screens we would rig our own stereo cameras within After Effects, and use those to actually render hundreds of layers of depth rather than just 3-5. That was cool, to come up with that process on our own before it was really a common thing, before the software was geared to allow us to do that easily.

Kirill: Was that for the big holo table?

Paul: I didn’t do the table, but rather all the other 30 or so screens you see in that set. The holo table was fully 3D, and I believe they used 3ds Max which I assume was a bit easier to work in stereo than After Effects had allowed. For us the challenge was to get After Effects to render things in stereo quickly and efficiently, which was a big hassle. We were taking care of those 30 screens across a bunch of shots, so it was a big job to do everything in stereo.

Kirill: Leaving the technical side of things, what about the initial explorations of the overall space? Do you sit down with the director, the production designer or perhaps the VFX supervisor to discuss the general interaction aspects? I’m looking at the list of films you worked on – Avatar, Hunger Games and Ender’s Game – and it’s sufficiently far in the future that you don’t necessarily have to be bound by the limitations of the current technology. Who is involved in defining the interactions?

Paul: I don’t really interface directly with the directors. Usually there’s a layer between us, such as Gladys Tong in the case of G Creative right now. There’s a lot of back and forth, pitching the craziest ideas, throwing a lot of stuff out. On Avatar, for example, a lot of things were set out before we joined. James Cameron was on that film for 14 years, I think, and Ben Procter had already done a lot of designs for our screens. It was working under our Art Director Neil Huxley with Ben Myers and others to animate them.

On The Hunger Games we looked at a lot of real-world references, extrapolating them into the future. Where would the surveillance technology be, how would it look in this dystopian future where they can really watch and control everything that is happening within the Games. We looked at Microsoft Photosynth, for example, and we really loved the idea of a 3D interface traveling between photos to see something from a different perspective. We used that in some of the screens – if the game keepers have the technology to see and control everything, they surely have some way to view the arena and everything within it.

Kirill: I loved this idea that you have removed all the intermediary steps for controlling the arena, where the keepers operate on a scaled-down digital replica and are able to virtually touch and control every part of the terrain, to manipulate the digital representation and have the physical counterpart immediately “react” to those manipulations. They don’t type, they don’t move a mouse in some kind of intermediate plane.

Paul: That was definitely the most challenging set I worked on. You have 24 people controlling the same computer, essentially. They all touch and control the same central model, and it was a challenge to think about things like what they need to control, how they control it, and what kind of interactions are necessary when it comes time to throw a fireball at Katniss, for example. What type of stuff do they need to do when they knock down a tree? When they unleash the dogs? It’s a fine line as you’re trying to figure out what the user would want to do in that scenario, what’s going to tell this story in the most effective way possible, and how we can walk that line. How can we make this look futuristic and feel like a really intelligent, all-encompassing model of the arena?

Kirill: Do you still need to stay at least somehow connected to the current technology, to not get too futuristic?

Paul: It’s difficult. Our first focus is on telling the story. We always start from there. Someone needs to throw a fireball at Katniss, and how do we show that? You take real-world references and extrapolate to technology that is more advanced by a certain number of years. What would make that process simpler? What would someone want to see?

For example if we’re working on a medical animation for a film, what would a surgeon want to see if he had limitless technology. What is the ideal way to perform a surgery? Is it wearing something like Google Glass and seeing an augmented reality display in front of you, showing where to make the incision? Maybe the medical animation is using nanobots instead, and the doctor’s interface is used to control them. A lot of this comes from current technology that we see and that we extrapolate into the future. What would someone ideally be using, and can we get that across in our film and still tell our story?

Kirill: You mentioned Photosynth, and I’m sure you have a whole bunch of other references. Are you trying to stay current and read about all the new technological explorations, even if they are not necessarily commercialized? Is it a big endeavor to stay aware of all the new things?

Paul: It’s a big job. I’m interested in these things anyway though, so a lot of my free time is spent looking at new technology. Any time we start a job, we’re looking at references and what kind of crazy stuff is coming out right now – robotics, networking, holograms – really anything. We’re trying to stay on top of that so we can use it in film and hopefully try to figure out where it’s all headed in the future.

Kirill: I hate bringing up “Minority Report” as I’m sure you’re sick of hearing about it again and again, but it’s a popular example of things going the other way – interactions portrayed in a film that find their way into real-world products. Is this a two-way street, a sort of feedback loop between fantasy UIs and cutting-edge actual products?

Paul: Absolutely. “Minority Report” is funny, I don’t think I’ve ever been on a project where it wasn’t referenced by somebody. They did such a good job that it’s still relevant, and I’m sure had a lot to do with things like Kinect. There is that feedback loop.

You look back at old “Star Trek” episodes and how that’s now inspiring people to do medical apps on smartphones. I would love it if what we’re doing inspires people to make something in real life, the same way real life is inspiring us to put these things in movies. I saw a Twitter exchange between Elon Musk and Jon Favreau about Elon trying to get his team to make some of the Iron Man interfaces for real, with holographic displays. And I think that’s powerful. You need people dreaming up the crazy stuff that’s not limited by what’s possible now, and movies are a great outlet for that.

Kirill: You mentioned Google Glass which is, for me, an interesting piece in the sense that it is a very personal gadget. It’s this screen that nobody but the person who wears it actually sees, which makes it hard to use in a movie, as you’d need to switch constantly back and forth between that small screen and what happens around the person wearing that screen. And that’s not necessarily the best story-telling experience.

Paul: That’s actually one of the challenges we’re having now. We’re working on a concept and we unfortunately can’t have something like Google Glass, as the viewer isn’t able to see it easily without switching to a first-person perspective. You see the trend in films for the past few years to use transparent displays, which is maybe not the most useful thing in the real world outside of goggles or windshields – I don’t really want to see through my laptop right now to the wall behind it. But in a movie it helps a lot where you can have a reverse shot of an actor’s face looking through his display. It’s a wonderful device in a film.

And things like Google Glass, as much as we want to put them in film, that’s exactly the challenge. It’s difficult because you can’t really see it all that easily. You have to walk the line, throwing some of your favorite ideas by the wayside because it is a film and we have a story to tell first and foremost.

Kirill: Are you afraid of people watching your films in 25 years and seeing how off-mark they were, or is each one just the product of its time?

Paul: I’m not afraid of it. You look back at some of the great stuff – like “Blade Runner”, for example – and it may look dated now, but at the time it was incredible. I hope my stuff looks really dated in 10 or 20 years, because that means technology has become so advanced that we’ll have better technology in real life than we ever dreamed of in movies, and I’d love to have real technology that actually works like this.

I think anytime you’re predicting what the future will hold, it’s inevitable that some of the stuff will be fodder for jokes. In 10 years we will look back and think “What was that, that’s so ridiculous that they thought in the future we’d be using interfaces like that.” It’s not something that necessarily “scares” me though, I hope that happens.

Kirill: It’s my impression watching sci-fi movies in the last few years – like Avatar, Prometheus or Avengers – that the trend is to use screens everywhere, putting glass surfaces of all sizes and shapes all over the place. That may not necessarily be the way of the future if you look at something like Google Now, Siri or Glass where the visible surface of the technology is receding and shrinking into the background – instead of expanding around us as portrayed in these movies.

Paul: Absolutely. That’s the trend now in real life – smaller screens and more info that is intelligently designed to fit into smaller spaces. In a movie that doesn’t work so well. It’s a double-edged sword. We can’t always put in what we think it’s going to look like in 30 years, say. In 30 years I don’t think I’m going to have these giant monitors sitting on my desk anymore.

Holograms are fun in that way. They can get out of your way. They feel a lot more amorphous, taking whatever shape you want. That’s an interesting thing to think about. These little gadgets don’t always translate to film very well, so we’ll always have these giant operation centers [laughs] with huge screens everywhere, whether or not it’s the best user experience.

Kirill: You say that you don’t see your desk in 30 years’ time having these gigantic monitors. How would you like it to look? What kinds of obstacles do you wish to see removed in your interactions with computers in our lifetime?

Paul: It depends on how far into the future we’re looking. Direct brain-to-computer interfaces are absolutely the holy grail, but that’s obviously a long way out. And that’s not something that is going to work well in movies at all.

If you ask about our lifetime, it’s a tough question. The idea of 3D displays is interesting. Right now we’re using stereo displays for entertainment purposes, but I’m really interested in trying them out while I’m working. What kind of interfaces can we design and what kind of obstacles can we overcome if we think and operate in 3D – instead of the 2D aspect that we’re locked to right now? If we can break the 2D plane of our screens, what kind of interactions and experiences can we create?

It’s possible now, but you don’t see many tools using it. I’m looking at After Effects right now, and I see how having depth to not just the viewport, but the interface itself might be able to communicate some ideas better, make the interface more efficient and powerful to use. Perhaps some subtle depth cues could make an interface feel more natural to use.

Kirill: You’re dreaming up these futuristic interfaces and showing us how the interaction feels, and yet you’re still bound to the capabilities and limitations of the current crop of software tools. Is that frustrating in a sense?

Paul: It’s definitely frustrating from the human perspective. I look at a 3D model on my computer, but I can’t actually reach out and touch it, I can’t manipulate it with my hands. I have to use a mouse or a pen tablet and a keyboard in order to manipulate these things and work with them. The idea of a Kinect-like sensor that I can use – and I hate to bring up “Minority Report” again [laughs] – that’s exactly what I’d like to do. It doesn’t need to be an Iron Man style hologram. It can be in front of a flat screen, or augmented reality like Google Glass, for example.

Being able to move my hands forward and manipulate these models that I’m working with, to animate the camera with my hands – as opposed to a mouse and a keyboard. It’s a lot more natural and opens up a lot more room for creativity, if we’re using it in a human-like way with our hands and our thumbs instead of the makeshift interfaces of the mouse and the keyboard. Those are really there because the tactile interfaces were not available. There’s a lot of power there if we can work that way and remove some of the tools coming between our brains and the software we use.

Kirill: Waving your hands around all day long to complete tasks might not be the most ergonomic way…

Paul: To be clear I don’t ever want to be waving my whole arm “Minority Report”-style. That would get very tiring, but the idea of being able to use gestures in 3D space with your hands – and I mean very minimal stuff, not necessarily waving your whole arm around although that does work well in film – I mean letting go of the mouse and the keyboard, and working with your hands like you would if you were molding clay or sitting at a work bench. Not just sliding fingers across a flat piece of glass, but letting go of the glass and using the movements and gestures your hands have evolved to perform naturally.

Kirill: Getting back to your released movies (four so far), has it become easier to do the interfaces on the technical side of things, or do you see a matching rise in demand for details from directors or VFX supervisors? Do those demands scale with the capabilities of the tools you have at your disposal?

Paul: Absolutely, like anything else in visual effects. It scales with the toolset. When we were on “Avatar”, it was a big technical challenge to just account for how much time it took to render something. And that’s not always a concern anymore as computers have become more powerful. A lot of the time I can render something in 30 minutes and it’s not really affecting my day. But at the same time the demands from directors are getting crazier – a lot more 3D holograms etc.

You look at the holograms in “Avatar” – only four years later – and it’s pretty simple in comparison, which, to be clear, is not a knock on “Avatar”. I watched that team go through making it, and it was a huge challenge, and it took a lot of people to make it. And now it’s kind of par for the course in these movies, if you look at the crazy holograms in “Iron Man” and “Prometheus” for example. As our technology to make these films progresses, the demands from directors get more elaborate. We have to always look for new ways of doing things, new ways of tackling those challenges that a new film may bring.

Kirill: So it’s never going to get to a point where the director will say that what you have is good enough and you can stop.

Paul: [laughs] I don’t think so. Usually no matter how good your first version of something is, you’re going to do 30 more versions before it gets approved. That’s good though, you’re always pushed to make something more elaborate. I don’t think anything we’re doing now is any easier than what we were doing when I started out. It’s an arms race. As the technology improves, the demands get more and more elaborate. Even if the software got to a point where designs and animations were mostly automated, I don’t think the directors would want that. They’d want a custom solution, always raising the bar.

Kirill: And there’s also the part where you and your team don’t want to repeat what’s been done in the past on other films.

Paul: Always. We always want to push the ball forward with each project so we’re not just rehashing what’s been done before, by ourselves or others.

Kirill: You’ve talked about the technical aspects of doing 3D, but what about your personal opinion as a moviegoer? Do you think it’s going to take over more genres, or perhaps it’ll remain confined to tentpole sci-fi productions?

Paul: I think it’s going to be a sci-fi, fantasy-only thing. I don’t think it’s going to become more prevalent. The audience has spoken. For huge movies like “Avatar” it works great. It was filmed in 3D which makes a huge difference – as opposed to the post-conversion process where they convert a movie into 3D later. As a moviegoer, I’d rather see a 2D version of the film than to see something that was post-converted. And unfortunately I don’t think the average viewer knows the difference, so they see “3D” in the marketing materials and they think it’s going to be the same as a truly 3D movie like “Avatar” was. I don’t think that effort of post-conversion is worth it, and I hope people vote with their wallets – showing that they don’t like it. And correct me if I’m wrong, but I don’t think that 3D television sets are selling particularly well either, because the content isn’t even there.

Kirill: It might be slightly different from movies where you pay slightly more every time you go to see a 3D movie. For a 3D TV set purchase you have to have a really good reason – such as a wealth of live and recorded content to justify paying a hefty upfront premium for the device. And that content just doesn’t exist yet.

Paul: Yes. I think in the case of huge blockbusters like “Avatar” it adds to the experience. I won’t name any names, but I can think of some post-converted movies that were very distracting. Post-conversion will hopefully go away, and true 3D will be reserved for the big-budget spectacles that can do it properly. I hope it does, as it definitely adds to the experience when it’s done well.

Kirill: I went to see “Ender’s Game” and it had two surprises. The first one was that there was no 3D version, at least in my area. And the second one was that I didn’t miss it at all, especially in the last part where they are in the simulation cave. It was staged and shot in a very immersive way that made me feel like I was right in there with Ender. I thought I’d be missing the extra dimension, but it wasn’t even necessary for me as the viewer. Maybe I’m too “conditioned” to expect these blockbusters to be in 3D “by default”, and “Gravity”, for example, is pushing that even further by having audio panning around you to follow the camera direction.

Paul: Exactly, it’s not necessary for every viewer. But that being said, if you did see that scene in 3D, I bet it would be just mind-blowing. I generally don’t buy 3D tickets, but for things where I know it was filmed in 3D, or rendered in 3D like the Pixar movies for example – those I find worth it, having millions of layers of depth. And when you have three or four layers of depth in post-conversion, I don’t find it’s worth it. I think people are going to slowly learn the difference. And we’ll never know for “Ender’s Game” unless they post-convert it.

There’s this amazing thing about movies. It’s a whole bunch of people, thousands of people, coming together to form this amazing whole. And you go to the theater, and you notice these things, like the audio following the camera movement and the actors’ positions in “Gravity”, or the screens that we’re doing – that’s exactly what it’s all about. As individuals we’re a small part of the massive whole that becomes a movie, and hopefully these details come across.

Kirill: How do all these thousands of people collaborate? As a freelancer working on a particular facet of the much bigger production, do you need to be on the set, to sit in the same physical space with your team, to be available to other departments? How much can be done remotely with the current crop of software collaboration tools?

Paul: I do enjoy sitting with people and working together in the same room. But for the past two years now I’ve worked almost exclusively from my home office. For “The Hunger Games” it was half-and-half, where I’d go to Montreal for a while and then come back here. It worked well, but the tools are good enough now that there are more benefits to working remotely for me personally. You can work on your own time, which is perfect in a creative field like this. Contrary to popular belief, you actually put in a lot more hours than you would clocking in and out of an office every day.

Using tools like Basecamp from 37 Signals, Skype and Google Docs makes it so easy to collaborate online. I personally have never had to be on the sets – there are other highly skilled people on our team at G Creative who do that part of the job. As a designer and animator, it’s not really necessary for me to be there, so online collaboration actually works really well and allows me to just focus on the creative side of things.

Kirill: Is this your field for the foreseeable future – mixing screen graphics for movies and your own software projects?

Paul: That’s my plan for now. I have 3 movies yet to be released that all involved the design and animation of screen graphics. I formed a bit of a specialty before joining G Creative – and it’s their specialty as well, so it’s a natural fit. I have this great opportunity to work with Gladys Tong, who has over a decade of experience bringing this cool technology to life in film. And in my free time we have a lot of apps and real software we’re working on. It’s definitely a different challenge from film, and I love both aspects. I love making the fantasy UI stuff for film where we can make it as crazy as the director will let us, without worrying too much about how usable something really is. And then in my free time it’s rewarding to make something that does have the constraints of real software. It’s rewarding to make something that people can download and use in real life.


And here I’d like to thank Paul Beaudry for taking the time out of his busy schedule to talk about crafting screen graphics and user interfaces for movies.