After spending the best part of the last ten years doing desktop development, I shifted gears last December and joined the Android team at Google as a client user interface engineer. It’s been an interesting ride so far, and today I wanted to mention a few things worth keeping in mind if you are a mobile UI developer.
If you glance at the sidebar, you’ll see that my particular interest lies in creating polished, responsive and well-behaved user-facing applications that help end users achieve their goals quickly and painlessly. There are many similarities between designing and implementing desktop and mobile UIs:
- Work with a visual designer to create a polished and aesthetically appealing appearance.
- Make sure that your application remains responsive by running all long operations (such as database access, network access, I/O and so on) on background threads that update the UI when necessary.
- Handle errors in such a way that their exposure to the user is minimal. If the network is not available, switch to offline mode or use exponential backoff retries. If the backend returns an abnormal condition, do not bombard the user with a stack trace; offer automatic error reporting and a recovery route.
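On Android, the standard way to address the second point above – long operations on background threads that update the UI when done – is AsyncTask. Here is a minimal sketch; FeedActivity, fetchItems() and showItems() are hypothetical names for your own activity and helpers:

```java
import android.app.Activity;
import android.os.AsyncTask;
import java.util.ArrayList;
import java.util.List;

// Hypothetical activity that refreshes a list without blocking the UI thread.
public class FeedActivity extends Activity {
    private void refresh() {
        new AsyncTask<Void, Void, List<String>>() {
            @Override
            protected List<String> doInBackground(Void... unused) {
                // Long-running work (network / database / I/O) runs off the UI thread.
                return fetchItems();
            }

            @Override
            protected void onPostExecute(List<String> items) {
                // Runs back on the UI thread - safe to touch views here.
                showItems(items);
            }
        }.execute();
    }

    // Placeholders for your own data fetching and list binding.
    private List<String> fetchItems() { return new ArrayList<String>(); }
    private void showItems(List<String> items) { }
}
```

The same split can be done by hand with a Thread plus Handler.post(), but AsyncTask keeps the background / UI boundary explicit.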
The main difference between desktop and mobile lies in the form factor, and in how it affects the user’s interaction with the hardware and, by extension, with your application.
Desktop monitors and laptop / netbook screens are much bigger than phone screens. While some desktop setups allow rotating the monitor, applications are not expected to change their internal layout, no matter whether they are running fullscreen or in a smaller window. Users expect to be able to dynamically resize application windows. Finally, only now are we starting to see hardware manufacturers push consumer-grade (read: cheaper) touch screen hardware; the vast majority of user interaction is done with a mouse, track pad or some other non-immediate form of input.
Phones are small. They fit in your pocket. The user expects to rotate the phone and have the application make the best use of the available screen real estate. Also, gone are the days when flagship models were operated with high-precision pointing devices such as a stylus. You interact with applications using your fingers and, depending on how modern your phone and applications are, some of them support multi-touch.
This brings me to the first major difference between desktop and mobile – smaller screens and bigger controls. Since the interaction is done with the fingers, you cannot cram in as many tiny UI controls as you might want, no matter what the screen density (DPI / PPI) is. To the right you can see a screen that creates a new calendar event. While you certainly can make the controls smaller so the user does not have to scroll the form, your users will hate you for making them miss the intended target and waste their time.
When you have less screen real estate, think about every control. If it is optional, consider exposing it as a preference after the flow is done, or hide it in an expandable “more” section. If your form spans more than a couple of screen heights, consider splitting it into two separate screens, or even rethink how much information really needs to be collected from the user. Also, think about portrait vs. landscape orientation, which brings me to the second major difference – rotation and ratio change.
Due to the limited physical size of the screen, you are always in fullscreen mode. When you click on a URL in your Twitter client, the browser window is launched in fullscreen mode and you don’t see multiple windows at the same time. The only exception is overlay dialogs, usually used to display temporary and auxiliary information (such as terms of service, for example). Another consequence of the form factor is that the phone is not restricted to an always-landscape mode (unlike, say, laptops, which would be quite awkward to operate in portrait).
When the user rotates the phone from portrait to landscape, the frontmost window should reflow itself to make the best use of the screen real estate. In the example above, the layout of the Twitter client is clearly optimized for portrait mode. The title bar has the global action buttons (refresh, post, search), and the action bar below it shows your current “location” in the application along with context-sensitive operations. Then, you have the tab bar to switch between the different lists. Finally, the actual content is displayed in a one-column list where each cell wastes around 60% of its space. On top of this, all three bars remain anchored when the list is scrolled – which leaves only around 20%-25% of the screen space to display the information the user is actually interested in. In this case, the design should be reworked to, perhaps, move the global and context-sensitive actions to a separate vertical column and let the list span the entire screen height.
The next difference has already been mentioned – user interaction with the application. There is no such thing as a cursor, since there is no such thing as a mouse. Hence, there is no such thing as rollover effects to provide visual feedback on which elements are actionable (as noted by Derek Powazek a few days ago). Instead, you have a quite different, and very rich, interaction mode – touch. Even single-touch applications can expose a wide variety of interactions, including tap, long touch, move and fling. Android’s VelocityTracker is an indispensable utility for keeping track of motion events without having to process the details and history of each one. And once the fling motion is over, feed the velocity directly to ScrollView.fling() or translate it into your custom motion path.
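A sketch of how VelocityTracker typically slots into a custom view’s touch handling; FlingView and its onFling() hook are hypothetical names for your own view and motion code:

```java
import android.content.Context;
import android.view.MotionEvent;
import android.view.VelocityTracker;
import android.view.View;

// Hypothetical custom view that measures fling velocity from touch events.
public class FlingView extends View {
    private VelocityTracker tracker;

    public FlingView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (tracker == null) {
            tracker = VelocityTracker.obtain();
        }
        // Feed every motion event in; the tracker keeps the history for you.
        tracker.addMovement(event);
        if (event.getAction() == MotionEvent.ACTION_UP) {
            // The argument 1000 asks for velocity in pixels per second.
            tracker.computeCurrentVelocity(1000);
            onFling(tracker.getXVelocity(), tracker.getYVelocity());
            tracker.recycle();
            tracker = null;
        }
        return true;
    }

    // Feed the velocity into ScrollView.fling() or your own motion path.
    private void onFling(float vx, float vy) { }
}
```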
Without the mouse, and without targetable scroll bars (which, if permanently displayed, take up precious screen real estate), you cannot have predictably behaving nested scrollable areas. A popular UI paradigm in screens displaying item lists – such as emails, tweets or search results – relies on the list spanning all available vertical space, where scrolling to the very bottom initiates a request for the next page of items. Instead of trying to decide what should happen when such a list is placed in the middle of another scrollable view with a few controls below it, revisit your design and adapt it to the mobile platform.
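The “request the next page at the bottom” pattern can be sketched with AbsListView.OnScrollListener; the loading flag and loadNextPage() are hypothetical pieces of your own paging logic:

```java
import android.widget.AbsListView;

// Hypothetical scroll listener that requests the next page of items
// when the list is scrolled to the very bottom.
public class EndlessScrollListener implements AbsListView.OnScrollListener {
    private boolean loading;

    @Override
    public void onScroll(AbsListView view, int firstVisibleItem,
            int visibleItemCount, int totalItemCount) {
        boolean atBottom =
                (firstVisibleItem + visibleItemCount) >= totalItemCount;
        if (atBottom && !loading && totalItemCount > 0) {
            loading = true;
            loadNextPage(); // kick off a background request
        }
    }

    @Override
    public void onScrollStateChanged(AbsListView view, int scrollState) {
    }

    // Placeholder: fetch the next page off the UI thread,
    // append the items to the adapter and reset 'loading'.
    private void loadNextPage() { }
}
```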
The fourth difference is screen density. This has been the subject of many entries on this blog over the last few years (as well as the JavaOne presentation I did with Mike Swingler from Apple). Despite the early promises, desktop / laptop display hardware has not progressed as much as I expected as far as resolution / density goes. Consumer-grade monitors are mostly in the 96-120 DPI (dots per inch) range, and there is little incentive for developers to spend time and money to properly support higher-end monitors (such as 150 DPI QSXGA monitors priced at around $14,000, or the discontinued 204 DPI IDTech MD22292 series that was priced around $9,000).
That’s not quite the case with phones. Existing devices from different manufacturers range between 120 dpi for the lower-end HTC Tattoo / Wildfire and 240 dpi for the higher-end Droid series – a difference of 100% in screen density. This means that using hardcoded pixel values and a single set of images will lead to one of two things on a higher-end phone: your UI will either be upscaled and fuzzy, or the controls will be too small (in physical units such as inches) to allow comfortable targeting with a finger. This subject is discussed at length in the “Supporting multiple screens” documentation (for Android), with two things to remember – bundle images at multiple resolutions and use DisplayMetrics to scale your custom drawing code.
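The DisplayMetrics part boils down to one conversion. At runtime you read the scale factor from the framework – getResources().getDisplayMetrics().density – and multiply your density-independent sizes by it; DipConverter below is a hypothetical helper showing just the arithmetic:

```java
// Hypothetical helper: converts density-independent pixels (dips)
// to physical pixels. On Android, obtain the density at runtime via
// getResources().getDisplayMetrics().density and pass it in here.
public final class DipConverter {
    private DipConverter() {
    }

    public static int dipToPx(float dip, float density) {
        // density is 0.75 on a 120 dpi screen, 1.0 at 160 dpi, 1.5 at 240 dpi.
        // The +0.5f rounds to the nearest whole pixel.
        return (int) (dip * density + 0.5f);
    }
}
```

So a 48-dip touch target becomes 36 physical pixels on a 120 dpi Tattoo and 72 on a 240 dpi Droid – the same physical size under the finger on both.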
And this brings me to the last major difference – limited CPU / memory resources. Medium-range hardware in consumer-grade desktop and laptop machines comes with CPU / memory resources that even a modern phone can only dream of. Multiple cores, graphics cards with large dedicated video memory, standard 4GB RAM with cheap expansion options, fast CPUs cooled by large fans – all of these have made the user interface implementer in me quite lazy. I don’t really have to care how many objects I create during each painting pass (as long as I don’t do something stupid like a database operation in the middle of painting). I fully rely on large memory and fast garbage collection to take care of all these small allocations and to keep the framerate acceptable.
While things may change in the next few years, due to the inevitable push from hardware manufacturers that find themselves in a constant state of competition, today you must be prudent. If you have any custom drawing code in your View.onDraw(), do not allocate any objects in it. Romain Guy tweeted about this a few days ago, and it has been a subject of the internal code reviews that he did with me over the last week. Any graphic object, such as a path, gradient, paint, point or rectangle, must be created outside the drawing flow.

Objects such as Rect, Point or Path should be created in the constructor of your view and cleared / reset in the onDraw() implementation. LinearGradients and RadialGradients that depend on the width and height of the view should be recreated only in View.onSizeChanged(). Another option is to create the gradient object once in the constructor and then change its matrix dynamically during the painting. Paint objects should be created in the constructor and have their global settings (such as antialias, color filter, dither or filter bitmap) set once; then, onDraw() only sets the dynamic values, such as color, alpha and stroke width. To make sure that you don’t allocate any objects in your drawing cycle, run the DDMS (Dalvik debug monitor) tool and switch to the “Allocation Tracker” tab.
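The allocation pattern described above can be sketched in a single view; GradientBar is a hypothetical example that follows the constructor / onSizeChanged() / onDraw() split:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.LinearGradient;
import android.graphics.Paint;
import android.graphics.Rect;
import android.graphics.Shader;
import android.view.View;

// Hypothetical view demonstrating allocation-free onDraw():
// reusable objects live in fields, size-dependent ones are
// recreated only in onSizeChanged().
public class GradientBar extends View {
    private final Paint paint = new Paint();
    private final Rect bounds = new Rect();

    public GradientBar(Context context) {
        super(context);
        // Global Paint settings are applied once, outside the drawing flow.
        paint.setAntiAlias(true);
    }

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        super.onSizeChanged(w, h, oldw, oldh);
        // The gradient depends on the view height - recreate it only here.
        paint.setShader(new LinearGradient(0, 0, 0, h,
                Color.WHITE, Color.BLACK, Shader.TileMode.CLAMP));
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Reuse the preallocated Rect; no 'new' anywhere in the drawing cycle.
        bounds.set(0, 0, getWidth(), getHeight());
        canvas.drawRect(bounds, paint);
    }
}
```

Running the Allocation Tracker in DDMS while such a view repaints should show no allocations attributed to onDraw().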
To summarize, here are the major differences between desktop and mobile UI development, all related to the different form factor:
- Smaller screens and bigger controls
- Rotation and ratio change
- User interaction with application
- Screen density
- Limited CPU / memory resources
Thanks to Romain Guy for reviewing the early draft and completing the missing bits.