Conversation with Khoi Vinh
The conversation I had with Khoi Vinh over at his site, preserved here as well.
Khoi Vinh: How did your interest in this kind of user interface design begin? Did it precede or follow your interest in the craft of “real” UI design?
Kirill Grouchnikov: It started sometime in 2011, which was when I fell in love with “Tron: Legacy.” Except for the intro part that happens in the real world, I was obsessed with everything in that movie. I wanted to know more about how it was made, so I emailed Bradley “GMUNK” Munkowitz, who did a bunch of visual effects for it, and that resulted in this article. It wasn’t an interview about screen graphics per se, as there are not that many of them in that specific movie.
But in my head I kept returning to it again and again, and I started finding web sites that were devoted to screen graphics and fantasy UIs – Fake UI, HUDS+GUIS, Interface Love, Sci-Fi Interfaces, Kit FUI and, most recently, the r/fui community on Reddit. I wanted to dig a bit deeper into what goes into making those interfaces: what is put on the screen and what is discarded, the explorations behind the decisions, how the advances in human-computer interactions in the real world of commercial companies and research labs find their way into the worlds of film storytelling, and how the fantasy UIs, in turn, find their way into the real-world explorations.
That last part is close to what I do for a living—building user interfaces on the engineering and implementation side. I love how fantasy UIs and interactions can plant seeds of “what if?” What if we could wave our hands around like in Minority Report? What would work and what would not? What if we could go beyond flat screens and operate on holographic representations of complex data? What if we could leave behind the decades-old mouse and keyboard way of “telling” the machine what to do, and find a less abstract way? This is what I love about movies like “Her,” “Ex Machina” or “Iron Man.” They don’t have to accurately predict the future, but they can hint at where those interactions might go, and plant those seeds in the new generation of designers and engineers.
Khoi Vinh: I want to get to that aspect where the fantasy bleeds into the reality in a bit, but first, what have you learned about the way these designs are crafted for the movies? Are there common processes that the designers go through to dream them up?
Kirill Grouchnikov: Some themes come up again and again as I talk with people who work on these productions.
The overall process of designing fantasy interfaces has certain parallels to what we do for interfaces on our real-life screens. Fantasy design is there to support the story, much like the real design is there to support the product. You have people that define the overall visual language of the movie—the director and the production designer, and perhaps the cinematographer. Then it goes through iterations, starting with explorations and ideas and getting progressively closer to the look that everybody is happy with—given the time frame and the budget of the overall project.
The most important thing that comes up in pretty much every interview is that screen graphics are there to support the story. You only have so much time (on the order of a few seconds) to succinctly convey the specific point and then move on to the next shot. Very rarely do you have the luxury of spending a significant amount of time on the UI. After all, you have the highly paid actors on the set, so your screen time is better spent having them in the frame. Incidentally, this is where translucent monitors and holograms play well—allowing the director to have both graphics and faces in the same frame. But I digress.
So something like a giant red banner flashing “Access Denied” is a necessity. You have to quickly convey the point and not require the viewer to somehow locate a small notification somewhere on the gigantic movie screen.
And finally, there’s a desire to have something new and fresh on the screen. This is part of the creative side of the movie business: to not repeat what has already been done, even if you’re working on the next installment of a well-established franchise. This is where designers might go hunting for something new that is being explored in the labs, or for something old that can be dusted off, or for something borrowed from the natural world, like the curved elements in “Prometheus” or “Jupiter Ascending.”
Khoi Vinh: In the case of that “Access Denied” example, what have you learned about how these designers balance the need to tell the story with plausibility and verisimilitude? There are degrees of storytelling, of course, and I think a lot of UI designers watch movies and often see stuff that’s just so clearly unrealistic that it takes us out of the experience.
Kirill Grouchnikov: I think it’s a balance between three intertwined things. I personally don’t dissect any particular story as being too “far out there.” As long as it establishes a consistent universe and doesn’t break the rules that it sets for itself, I am ready to believe in those rules. It’s only when the script takes a turn or a jump that is completely unsubstantiated—once again, within the framework of rules set up by the script itself—that it takes me out and I start looking at things with a more critical eye.
The second thing is that everything we see on the screen needs to support the story, first and foremost. I don’t expect a character to get stuck on a bunch of loading screens as they are interacting with their devices. It might not be too plausible for the technology that we have today, but the story needs to keep moving and not get stuck on whatever imperfections are “supposed” to stand in its way. Once again, I’m quite fine with jumping over those imperfections as long as they are trivial in the confines of the universe set out in the specific story.
And this brings me to the last point: matching the technology to the time of the story. So if it’s something like “Prometheus,” “Oblivion” or “The Expanse” that happens far enough in the future, I think we as the viewers need to be open to technology being orders of magnitude ahead of where we are today. To draw an awkward parallel, imagine showing a documentary on today’s devices to Alan Turing. I honestly don’t know how plausible or believable he’d find what we, today, just take for granted. And then, on the other hand, interfaces in the Marvel universe or the “Mission: Impossible” and James Bond franchises can’t take too big of a leap. The action in those movies happens today, with the only difference being the budget and human talent available to the characters in them. In such movies designers can’t really go too far beyond the edge of what we have today, and I think that’s one of the factors that figures into the decision-making process behind not only the design of the screens, but also the portrayal of technology itself.
Khoi Vinh: There’s another aspect to advancing the story that I’ve seen a lot in the past two decades; often, a computer interface serves as a kind of crutch for the plot. I always balk when a tense moment relies on a progress bar getting to 100% or something; it feels like the screenwriter didn’t really do his or her job of creating a legitimately compelling dramatic challenge for the protagonists. As these UIs become more elaborate and more visually stunning, what are your thoughts on scripts becoming more and more dependent on them to tell stories that would otherwise rely on good old-fashioned plot development?
Kirill Grouchnikov: I’d say that bad writing has always been there, way before computer interfaces. A tense moment that relies on transferring some data at the very last second used to be a tense moment that relied on the hero cutting the right wire at the very last second before the bomb explodes, or on the hero ducking into some random door a second before being seen by the security guard, or on an explosion that takes out an entire planet while our heroes are right at the edge of the blast radius, and a myriad of other similar moments.
We’re witnessing an unprecedented—at least in my view—explosion in consumer technology available to people all around the world. It is becoming hard to imagine a story told in a feature film or episodic television, set in the present day, that does not have a screen or two in it. Stories necessarily reflect the world that we live in, and I think that a story that doesn’t have technology in it would need to actually justify that decision in some way.
Good writing that creates a tight plot with no gaps or unsubstantiated “leaps of faith” is hard, and it’s always been hard. Technology in general, and devices and their screens in particular, are indeed used more and more to paper over those gaps. Of course for us it’s ridiculous when somebody talks about creating a GUI interface using Visual Basic to track an IP address. But I think the writer that came up with that line would have come up with a similarly hand-wavy way to advance the story forty years ago, relying on some random tip from a gas station clerk or the overused trope of “the killer always shows up at the funeral, so this is where we’ll catch them.”
The worst of this bunch for me was the climax scene in “Independence Day” where a human-made computer virus was able to take down all the alien ships. The same guy who wrote and produced that movie is doing the upcoming sequel, but I hope that we’ll see something a bit less inept this time around.
Khoi Vinh: It’s a fair point that there’s always been bad writing. I guess the difference though is that in the analog age, very, very few people ever actually had to defuse a bomb. Whereas today, everyone uses phones, laptops and who knows what kind of screens on a daily basis. So a screen as a plot device is much more familiar, yet it seems like the way a lot of movies overcome that quotidian nature is by trying to make them seem more fantastical, rather than trying to make them seem more believable. In some ways, I feel like everyone learned the wrong lessons from “Minority Report,” which made a really concerted effort to craft plausible interfaces. Other moviemakers just went off and made over-the-top UIs with no basis in research or theory. Am I overthinking this?
Kirill Grouchnikov: How many times have you seen somebody—a good guy or a bad guy, depending on the situation—shooting a desk monitor or even a laptop screen, implying somehow that this is the way to destroy all the information that is on the hard disk, when that disk remains completely unharmed by those bullets? Or even worse, in our age where everything is in the cloud, there’s no point in harming those screens to begin with.
What I’m trying to say is that screens themselves are just a skin-deep manifestation of the information that surrounds us. They might look fancy and over-the-top, as you say, but that’s a pixel-level veneer to make them look good for the overall visual impact of a production. I think that most of what those screens or plotline devices are trying to do is to hint at the technological capabilities available to the people or organizations operating those screens. That goes back to what I was saying earlier: the incredible advances in technology in so many areas, as well as the availability of those advances to the mass consumer market.
If you look at a movie such as “Enemy of the State” from 1998 and the satellite tracking capabilities that it showed us, that was pretty impressive for its time. Back then GPS had a very significant difference between the accuracy available to military devices (PPS) and civilian devices (SPS); publicly available signals had intentional errors in the range of 100 meters. That limitation was lifted two years after the movie came out (aka correlation vs. causation), and now hardly anybody would be impressed by being able to navigate without having to struggle with a foldout map. And of course, now that mobile phones sell in the billions, everybody can be tracked by triangulating signals from cell towers. That’s not impressive anymore, so that’s an example of a crutch that has been taken away from the script writers.
I don’t think that the question here is about how plausible the screens are, but rather how plausible the technology that those screens manifest is. So you’d be talking about the AI engine that is J.A.R.V.I.S. in “Iron Man,” or the AI engine that is manifested as Samantha in “Her,” or being able to track the bad guys via thermal scans in a 3D rotating wireframe of a building in any number of action movies, or even the infamous zoom-rotate-enhance sequences in low-budget procedural crime drama on television. The interface bits are just the manifestation of the technology behind the screens. Are we close to wide consumer availability of J.A.R.V.I.S.-like software? There are certainly a lot of companies working on that.
When we look at the devices around us and see all the annoying bits, and then see those annoying bits not being there in a feature film, that is quite believable in my opinion. So something like not needing to charge your phone every evening or getting a strong mobile signal as you’re driving on a deserted road gets a pass. And when we look at the devices around us and see those capabilities pushed just a few steps beyond the edge of what we see right now, that is quite believable as well. Especially with the recent revelations about state-level surveillance programs, I as a viewer am not surprised to see similar technology taken a couple of steps forward in Bond, Bourne or other similar spy-action thrillers.
Khoi Vinh: Okay so let’s talk about how these fantasy interfaces are bleeding into reality. You cite the “Enemy of the State” example, which was prescient by two years. “Minority Report” and “Her” and J.A.R.V.I.S. from the “Iron Man” films are all often cited as being very influential. Are these fantasy UIs actually driving real world ideas, or are they just lucky? And are the designers who are creating these fantasy UIs aware of this “life imitates art” cycle?
Kirill Grouchnikov: “Minority Report” certainly benefited from the very impressive body of research that went into trying to predict the world of 2054 and its technology. I would say that movie is by far the most influential in terms of how much real-world research it has ignited ever since it was released.
The AI engines in “Her” and “Iron Man” seem to hover just a few years ahead of what is happening right now in the world of machine learning and information analysis. You look at speech recognition and assistive technologies being developed by leading tech companies, and it would seem that the portrayal of speech-based interfaces in movies feeds on those real-tech advances. As for the AI capabilities themselves, there was a lot of overpromising and underdelivering in the ’80s and the ’90s. And then you have AlphaGo beating one of the world’s best human players in a game that looked to be unreachable for machines just a few short months ago. Of course, that’s still not the general-purpose artificial intelligence that can serve reliably as your companion or advisor; that one is still in the realm of science fiction for now.
You might call it prescience or luck when a movie correctly portrays the evolution of technology that was a few years out. But for every such movie, you have something like “The Lawnmower Man” that was made smack in the middle of the hype wave around virtual reality. It took the industry a couple of decades to get to the state where we can actually start talking seriously about augmented, virtual or mixed reality at mass scale. And even now it’s not clear what the “right” interaction model would be, from both hardware and software perspectives.
As I mentioned earlier, I love how movies can plant seeds of ideas in our heads. People designing for movies and TV shows take seeds of ideas from the real world, and build their interfaces from those seeds. And then it flows the other way. Those seeds, having blossomed through a process of mutation and combination into something that we don’t have, now plant their own seeds in the minds of people that ask themselves “What if?” What if I could have a camera mounted on my screen that would track my fingers, my eyes or my overall gestures? What could that enable, and how would that actually evolve in the next five, ten or even twenty years?
There are all these grand predictions from self-proclaimed futurists about where we will be in fifty years. They sound quite attractive, but also completely unverifiable until we get there. Go back to 1973 when Motorola made the DynaTAC and show me one documented, correct prediction of where we are now, with the complete domination and unbelievable versatility of mobile devices. In some movies art imitates life, or at least takes it a few steps forward. Some movies might be so influential as to spur interest in bringing their art to life.
Martin Cooper, who developed the first mobile phone at Motorola in 1973, said he was inspired by the communicator device on “Star Trek.” It then took a few decades of real-world technological advances to get to the stage where everything now is just one slab of glass, which evokes another device from “Star Trek”—its tablet computers. And if you look at the universal translator from the same TV show, the technology is almost there, combining speech recognition, machine translation and speech synthesis. And when—not if—that entire stack is working flawlessly, it will be just a small part of some kind of universal device that will evolve from the mobile devices that we have now. So in a sense, life doesn’t imitate art, but is rather inspired by it, and takes it to places that we didn’t even dream of before.
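As a side note, the shape of that stack is easy to sketch. Here is a toy illustration in Python, where all three stages are trivial stand-ins rather than real ASR, translation or TTS engines; the only point is how speech recognition, machine translation and speech synthesis compose into a single pipeline.

```python
# A toy sketch of the three-stage "universal translator" stack described
# above: speech recognition -> machine translation -> speech synthesis.
# All three stages are trivial stand-ins for real engines; only the way
# they chain together is the point here.

def recognize_speech(audio: bytes) -> str:
    # Stand-in ASR: pretend any incoming audio decodes to this phrase.
    return "hello world"

def translate_text(text: str, target_lang: str) -> str:
    # Stand-in MT: a lookup table instead of a real translation model.
    table = {("hello world", "fr"): "bonjour le monde"}
    return table.get((text, target_lang), text)

def synthesize_speech(text: str) -> bytes:
    # Stand-in TTS: return UTF-8 bytes instead of synthesized audio.
    return text.encode("utf-8")

def universal_translate(audio: bytes, target_lang: str) -> bytes:
    # The "universal translator": each stage feeds the next.
    return synthesize_speech(translate_text(recognize_speech(audio), target_lang))

if __name__ == "__main__":
    print(universal_translate(b"<raw audio>", "fr"))  # b'bonjour le monde'
```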
Khoi Vinh: So is “life imitating art” the best way to judge fantasy UIs, then? As you’ve said, the first priority for this work is supporting the narrative, but over time, the more believable ones are the ones that seem to retain their respectability—or to put it another way, the least believable ones just start to seem ridiculous and even laughable. How should we be evaluating this work?
Kirill Grouchnikov: “Evaluate” is a bit loaded. I don’t know if you’d really want to reduce fantasy UIs to a single number like MPG for cars or a FICO score for credit risk. There are just too many aspects to it. I think that first and foremost it needs to support the narrative. And that’s true for any other aspect of storytelling in movies and TV shows. If anything, and I mean anything, takes you out of the story, that’s bad craftsmanship. Everything needs to fit together, and this is what amazes me so much in movies and shows that are done well: to see dozens or even hundreds of people come together to work on this one thing.
And then, as you’re saying, there are things that stay with you afterwards. Things that are particularly compelling, be it “The Imperial March” from the original “Star Wars” trilogy, the cinematography of “Citizen Kane,” or the fantasy UIs of “Minority Report.” You might also say that a lot of times these things are timeless, at least on the scale of a few decades. It’s a rare thing, really, given the pace at which the world of technology is evolving. They don’t have to accurately predict the technology around us, but rather present, like you say, something that is believable not only in the particular movie universe they were born in, but also in the world around us. I certainly wouldn’t mind having J.A.R.V.I.S. in my life. Or at least to experience what it would be like to have such an intelligent entity at my disposal, to be able to judge its usefulness by myself.
Khoi Vinh: Looking ahead, a lot of people predict that VR will become a viable form of cinematic storytelling. If one buys that, how will it impact the work of crafting fantasy UIs? Crafting something believable but still fake seems more difficult when it can be experienced from any angle or even at any distance.
Kirill Grouchnikov: It’s like you’re reading my mind, because I’ve been thinking about this a lot recently. There’s obviously been a lot of interest in augmented reality, virtual reality and mixed reality in the last few years. Most of that exploration, for now, seems to be concentrated in the gaming industry. That particular area isn’t of much interest to me, as I moved away from gaming when my kids were born. There’s just not enough free time left in the day to split between games and movies, so I chose movies!
I keep thinking about the early years of making movies and telling stories in that medium, about how people first started taking what they knew from the world of stage theater and applying it in film. You had long uninterrupted takes where the camera wouldn’t move, wouldn’t pan, wouldn’t zoom. It was like watching a theatrical show as a viewer sitting still. It took a few decades and a couple of generations of movie makers that were “born,” so to speak, into the medium of film to find a completely new set of tools for telling stories in it. Some things didn’t work for the viewers, and some took a bit of experimentation and time to refine, so that the viewers could understand certain shortcuts and certain approaches to conveying specific intentions.
I honestly can’t tell if VR will be a big part of the future of “cinematic” storytelling. Over the last decade, 3D was hailed as the future of the medium. But it’s been abused so much that we as the viewers are very wary of seeing the 3D sticker slapped onto a production as almost an afterthought, as a way to add a couple of bucks to the price of the ticket without adding anything substantial to our experience.
My hope is that VR will be given a bit more time and a bit more patience by the big studios. It needs a lot of both to develop the vocabulary for telling stories in a format that feels native to that experience. Your question is about the fantasy UIs and screen graphics, but that’s a very small part of the overall craft of telling a story. There will undoubtedly be a lot of false starts where people, eager to get on the VR bandwagon, just take the existing ways of telling stories and apply them as-is to this new world. If there’s too much of that, the whole thing will be dead before it has a chance to develop into a completely new way of storytelling.
But if it evolves at a slower pace, perhaps starting with shorter productions, then I hope that in my lifetime I will get to experience stories that are presented in a completely different way from what we are used to nowadays. It doesn’t mean that it has to completely supplant what we have now—stage theater is quite alive even after a century of movies. It might not be something like a “VR movie”; it will be a completely different experience under a completely separate name. If that happens, the way user interfaces are presented will evolve along with it. And you know, who knows what will happen to how we interact with machines and information in the next few decades? Maybe we will have screens—rectangular, arbitrarily shaped, translucent, transparent. Maybe it will all be holograms, maybe every surface around us will be a screen, and maybe there will be some breakthrough and there will be no screens at all. Maybe a screen is just an unnecessary intermediary whose days are numbered.
So by the time we get to true VR experiences in cinema, fantasy UIs might be hinted at along the lines of “Her,” and that will feel completely natural to the world of that time, because screens will be long gone. It’s just too far ahead in the future to tell, to be honest.