Animations – the big picture

January 11th, 2010

Over the course of this week I’ve talked about movements of physical objects in the real world, and how they can be applied to animating pixels on the screen. The last two entries have just skimmed the surface of animating UI objects, and even such straightforward areas as color animations and scroll animations can be much deeper and more complicated than they originally seem.

Every movement in the real world is governed by the laws of physics. Sometimes these laws are simple, and sometimes they are not. Understanding and emulating these laws in the virtual world of pixels takes time. It takes time to analyze how objects in the physical world move. It takes time to find the right physical model for the specific dynamic change on the screen. It takes time to implement this physical model in code. It takes time to optimize the performance of that implementation so that it is fluid and does not drain too much device power. Is it worth it?
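As a minimal sketch of what “implementing a physical model” can look like, consider a damped spring driving a single animated property. The class name, constants and structure below are my own illustration, not any specific library’s API:

```java
// A minimal sketch of one possible physical model: a damped spring
// that drives an animated value toward its target. All names and
// constants here are illustrative assumptions, not a specific API.
public final class SpringValue {
    private final double stiffness;   // spring constant k
    private final double damping;     // damping coefficient c
    private double value;             // current animated value
    private double velocity;          // current velocity

    public SpringValue(double initial, double stiffness, double damping) {
        this.value = initial;
        this.stiffness = stiffness;
        this.damping = damping;
    }

    /** Advances the simulation by dt seconds towards the given target. */
    public void update(double target, double dt) {
        // Hooke's law with damping, assuming unit mass:
        // acceleration = -k * displacement - c * velocity
        double displacement = value - target;
        double acceleration = -stiffness * displacement - damping * velocity;
        velocity += acceleration * dt;
        value += velocity * dt;
    }

    public double value() {
        return value;
    }
}
```

On every frame the caller advances the spring by the elapsed time; with sensible stiffness and damping the value eases into its target the way a physical object would, instead of snapping to it.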

Is it worth spending your time as a designer? Is it worth spending your time as a programmer? Is it worth spending your time as a tester? If you care about your users, the answer is a resounding yes.

People don’t read documentation. Nobody has time to do it, and it gets worse with every passing year. We are confronted with too much information, and the average attention span keeps on shrinking. A recent trend towards separation between data providers and application providers makes it incredibly simple for people to switch between different views of the data they care about (think Twitter clients, for instance). People will start using your application, play with it for a few moments (minutes, if you’re lucky) and move on to the next shiny thing on a whim. How can you capture such a volatile audience?

Intuitive design is a popular term in the user experience community. Alan Cooper writes the following about intuition in his About Face 3:

Intuition works by inference, where we see connections between disparate objects and learn from these similarities, while not being distracted by their differences. We grasp the meaning of the metaphoric controls in an interface because we mentally connect them with other things that we have already learned. This is an efficient way to take advantage of the awesome power of the human mind to make inferences. However, this method also depends on the idiosyncratic human minds of users, which may not have the requisite language, knowledge, or inferential power necessary to make those connections.

What does it mean for a given interface to be intuitive? You click on a button and it does what you expected it to. You want to do something, and you find out how to do it in the very first place you look. The only surprises you encounter are good ones. The application makes you feel good about yourself.

This is definitely not easy, and you must use every tool you can find. How about exploiting the users themselves to make your job easier? How about building on the existing knowledge of your users and their experiences with related tools in either the real or the virtual domain? As quoted above, not all knowledge and not all experiences are universal – but the physics behind animations is.

We all live on the same planet, and we are all governed by the same physical laws. Applying these laws to the changes in your application (in color, shape, position, etc.) builds on the prior knowledge of how things work in the real world. Things don’t move linearly in the real world, and making them do so on the screen will trigger a subconscious response that something is wrong. Things don’t instantly change color in the real world, and that’s another trigger. Moving objects cannot abruptly change direction in the real world, and that’s one more trigger. A few of these, and your user has moved on.
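To make the first of those triggers concrete, here is a minimal sketch of non-linear easing. The smoothstep curve is one common choice among many, and the class and method names are mine, not any particular toolkit’s:

```java
// A minimal sketch of non-linear easing over a normalized time t in [0, 1].
// Names and the choice of curve are illustrative assumptions.
public final class Easing {
    /** Linear interpolation: constant velocity reads as mechanical and "wrong". */
    public static double linear(double t) {
        return t;
    }

    /** Smoothstep: starts and ends at zero velocity, like a real object. */
    public static double easeInOut(double t) {
        return t * t * (3.0 - 2.0 * t);
    }

    /** Interpolates a property (position, a color channel, ...) between two values. */
    public static double interpolate(double from, double to, double t) {
        return from + (to - from) * easeInOut(t);
    }
}
```

The same eased fraction can drive a position, an opacity, or each channel of a color – the point is that the change accelerates and decelerates the way the eye expects, rather than marching at constant speed.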

Drawing on the existing user experience is an incredibly powerful tool – if used properly. Some things are universal, and some things change across cultures. Distilling the universal triggers and transplanting them to your application is not an easy task. It requires a great deal of time and expertise from both designers and programmers. And if you do it right, you will create a friendly and empowering experience for your users.

In a roundabout way, this brings me to the visual cues that are pervasive throughout the Avatar movie. If you haven’t seen it yet, you may want to stop reading – but I’m not going to reveal too much. Our first exposure to Neytiri – the native Na’vi – is around half an hour into the movie. Apart from saving Jake’s life, she is quite hostile, and she does not hide it from him. What I find interesting is how James Cameron has decided to highlight her hostility. It comes not only through her words and acts, but also through her body language, hand movements and facial expressions. They are purposefully inhuman – in the “not regularly seen done by humans” sense of the word. The way she breaks her sentences and moves her upper body with the words, the movement of her facial muscles when she tells him that he doesn’t belong there, and the hand gestures used throughout their first encounter certainly do not make the audience relate to her. On the contrary, the first impression reinforced by her physical attitude is one of hostility, savagery and animosity.

The story, however, requires you to associate with the plight of the locals, and root against the invasion of the humans who do not understand the spiritual connectivity of the Na’vi world. The love interest between Jake and Neytiri is a potent catalyst, and it is fascinating to see how Cameron exploits human emotions and makes you – in the words of Colonel Quaritch – betray your own race. If you have seen the movie, imagine what it would feel like if the Na’vi looked like real aliens – from the Alien saga of Ridley Scott, David Fincher and the very same James Cameron. It would certainly cost much less money to produce, but would you feel the same seeing two ugly aliens falling in love and riding equally ugly dragons?

The Na’vi look remarkably humanoid – just a little taller. The only outward differences are the tail and the number of fingers. Other than that – a funky (but not too funky) skin color, the same proportions of head, limbs and torso, the same facial features, no oozing slime and the same places where the hair doesn’t grow. This has certain technical advantages – the movements of real actors, including their facial expressions, can be mapped onto the Na’vi bodies. In Avatar, however, there is a much deeper story behind the facial expressions. Cameron starts building on your prior experience with the outward expression of human emotion in order to build your empathy towards the Na’vi cause, and make you root for them in the final battle scene. How likely is it that the Na’vi would have developed not only the same body structure, but also the same way of reflecting positive emotions in facial expressions?

Our ability to relate to other human beings is largely based on our own experiences of pain, sorrow, joy, love and other emotions. Neytiri displays remarkably human-like emotions – especially throughout the Ikran taming and flying scene halfway into the movie. Cameron uses our own human emotions to guide us into believing in the Na’vi cause, and this is achieved by building on the universally human vocabulary of facial expressions. To believe the story, we must believe in the characters, and what better way to do so than to make us identify with both sides of the relationship between Jake and Neytiri.

Make your users productive. Make them happy that they have spent time in your application. Make them want to come back and use your other products. Make an emotional connection. Build on what they know. Make them believe that every choice they make is their own. Or better yet, guide them towards where you want them to go while making them believe that they are in charge.