Pushing pixels, ten years ago
One thing leads to another. It might not happen right away, but things part, drift and fall back together in quite unpredictable ways. I spent the first four years after high school studying geodesy, and finding out that I really wanted to be a programmer. Another four years and a degree in computer science later, the two crossed paths in my first job.
My resume was, well, padded. During my last year of CS studies I took what was called a Software Lab, or rather two of them. In both I worked with PhD students, writing software to put their theories to the test. Both were about formal verification, and both involved computing and plotting proof graphs. I was also doing a part-time job on an internal research project that explored automatic translation between various academic and commercial verification toolkits. The general advice is to put the last two or three jobs on your resume and skip the rest, and I didn’t have much to skip. But it didn’t really matter.
As I sat down for my first interview, the guy across from me took one look at the first page of my resume and said “So you studied geodesy.” And that was pretty much it. I guess there were not a lot of computer science graduates coming across his desk who could also maintain an informed conversation about the UTM coordinate system and the difference between the ED50 and WGS72 datums. The girl I was supposed to replace would not be leaving for another three months, but it didn’t really matter. They found me a space in between desks, and she was almost on her way out in any case.
Those were the days of AIX and Windows NT. I didn’t do much in the way of “traditional” user interfaces. Sure, I threw together a few simple wizard dialogs, a couple of buttons, a couple of checkboxes, a combobox here and there. But most of the time I was working with map data, reading it from various sources and drawing colored polygons on a map canvas. After my first project was done in early 2002, I transferred to a different project that was just starting up. A project that would change the way I’ve looked at pixels ever since.
It was a throw-it-all-away and rewrite-everything-from-scratch kind of project. There were endless meetings about what technologies to choose and how to combine them. XML was on the rise. Java was on the rise. Servlets were on the rise. UML was on the rise. And you know, you put a few senior architects together in the same room for a few months, and things are bound to get complicated. By the time I joined the project, the decision had been made to build a hybrid client, Java below the surface and Delphi above it. Java code would do the heavy lifting of data transfer, persistence, querying and caching. But the UI requirements called for presenting complex ways to interact with data, and Swing was lacking in the department of commercial UI component libraries. Delphi, on the other hand, had a surprising (at least for me) selection of very powerful component libraries from a wide variety of commercial vendors. Some of them even came with auto-generated binder layers that allowed Delphi components to do two-way data communication with the Java code.
I was living in the Delphi world, still mostly working on the pixel level of map canvas. The guy next to me was doing the UI chrome around the canvas. One day I glanced at his screen and saw a bunch of calls to draw lines, rectangles and arcs. And the answer to my lazy “what’s that code mess doing?” was “drawing toolbars.”
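To give a feel for what that “code mess” amounted to, here is a rough sketch of hand-drawn toolbar chrome. The actual code was Delphi; this illustration uses Java (the other half of that project) and Swing, and every class name and color in it is invented for the example rather than taken from that library:

```java
import java.awt.*;
import javax.swing.*;

// Illustrative only: painting toolbar chrome with nothing but primitive
// line / rect / gradient calls, the way hand-drawn component libraries did it.
public class HandDrawnToolbar extends JComponent {
    @Override
    protected void paintComponent(Graphics g) {
        Graphics2D g2 = (Graphics2D) g.create();
        int w = getWidth();
        int h = getHeight();

        // Toolbar background: a subtle vertical gradient instead of flat gray.
        g2.setPaint(new GradientPaint(0, 0, new Color(0xF6F6F2),
                0, h, new Color(0xDCDCD4)));
        g2.fillRect(0, 0, w, h);

        // Rounded outline around the whole toolbar strip.
        g2.setColor(new Color(0x9A9A90));
        g2.drawRoundRect(0, 0, w - 1, h - 1, 4, 4);

        // Drag grip on the left edge: a hand-placed column of dots.
        g2.setColor(new Color(0xB0B0A8));
        for (int y = 4; y < h - 4; y += 4) {
            g2.fillRect(3, y, 2, 2);
        }

        // Vertical separator between two button groups.
        g2.setColor(new Color(0xC0C0B8));
        g2.drawLine(60, 3, 60, h - 4);

        // Overflow indicator on the right: a tiny chevron, two lines.
        g2.setColor(new Color(0x707068));
        g2.drawLine(w - 9, h / 2 - 2, w - 6, h / 2 + 1);
        g2.drawLine(w - 6, h / 2 + 1, w - 3, h / 2 - 2);

        g2.dispose();
    }
}
```

Every visual detail in such a toolbar – the grip, the separator, the chevron – traces back to a handful of calls like these, with hard-coded coordinates and colors.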
That was late 2002. Microsoft Office was as popular as it would ever be, and the way it handled its ever increasing feature set and exposed it to the user was, for better or worse, the leading industry example. Office 97 converged menus and toolbars into a single unified concept, and Office 2000 took the fight to feature bloat by introducing adaptive menus, adaptive toolbars and rafted toolbars (where a single toolbar row would fit two or more toolbars, with buttons going in and out of the overflow menu based on frequency of use). Office 2003 introduced task panes, and Windows XP, which had come out earlier, was an enormous success for Microsoft. These two also marked a step away from flat, rectangular, steel gray control surfaces, adding softer corners, gradients and drop shadows. The success of both Office and Windows was undeniable, and vendors of UI component libraries were expected to match the sophistication of the Office user interface in both feature set and appearance.
I had never thought about the underlying implementation of core UI components. They were there to use, and that was it. Sure, Motif or Windows NT buttons look ugly now, but back then? I didn’t care much about the aesthetic appearance of the interface. That might also explain my lack of interest in how those components were actually drawn on the screen. Not that there was anything fancy to draw – just flat rectangles and a couple of darker outlines. Windows XP changed that. Office 2003 changed that. But if you had asked me how those were drawn, I would have shrugged and said “Who cares? It’s just a service provided to you by the operating system.”
While the operating system provided a certain set of UI components, those were not nearly enough to create the kind of fully featured UI seen in Office 2000 or Office 2003. And our architects were pretty adamant about getting all of that and more. Adaptive toolbars that could be stacked vertically or horizontally and docked on any side of the screen? Yes. Collapsible task panes stacked in an accordion, interleaving toolbars and a mini-map canvas? Yes. Pivot grids with multiple nested child grids, frozen columns and auto-filtering? Yes. And many more. And the best thing? There was so much competition between Delphi component vendors that you could have all that and more for quite reasonable prices, and most came with an option to purchase a source license as well. And that is how I got to see the underlying implementation of those components.
When I looked at the full implementation of the specific component library that we ended up purchasing, it was a shocking eye-opener. I had to go back and ask the guy if what I was seeing was indeed true. That they had to listen to every single type of mouse event for proper handling of mouse actions. That they had to listen to every single combination of keystrokes and modifiers to perform the matching operations. That they had to compute the bounds of every single element within each specific toolbar, within each toolbar row and within each edge of the window chrome.
That they had to draw every single pixel of the component representation on the screen. Every single pixel of a toolbar outline, drop shadow, separator and overflow indicator. To precisely match the color of every single pixel. To precisely emulate the amount and texture of drop shadow so that it would match the appearance of the XP and Office counterparts. To track every single mouse event to emulate the rollover, press and select states, drawing the inner orange highlights to match the appearance of XP. One pixel, one arc, one line, one rectangle at a time.
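A minimal sketch of that kind of work, again in Java rather than Delphi, and again with names and colors invented for the example rather than lifted from the actual library: the component itself listens to raw mouse events, keeps track of its rollover and pressed state, and paints every pixel of the result by hand.

```java
import java.awt.*;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.*;

// Illustrative only: no native button underneath, no look-and-feel delegate.
// The component tracks its own interaction state and paints the Office-style
// rollover / pressed highlights itself, with approximate (made-up) colors.
public class HandDrawnToolbarButton extends JComponent {
    private boolean rollover;
    private boolean pressed;

    public HandDrawnToolbarButton() {
        addMouseListener(new MouseAdapter() {
            @Override public void mouseEntered(MouseEvent e)  { rollover = true;  repaint(); }
            @Override public void mouseExited(MouseEvent e)   { rollover = false; repaint(); }
            @Override public void mousePressed(MouseEvent e)  { pressed = true;   repaint(); }
            @Override public void mouseReleased(MouseEvent e) { pressed = false;  repaint(); }
        });
    }

    @Override
    protected void paintComponent(Graphics g) {
        Graphics2D g2 = (Graphics2D) g.create();
        int w = getWidth();
        int h = getHeight();

        if (pressed) {
            // Pressed: stronger orange fill with a dark border.
            g2.setColor(new Color(0xFE9A4D));
            g2.fillRect(1, 1, w - 2, h - 2);
            g2.setColor(new Color(0x4B4B6F));
            g2.drawRect(0, 0, w - 1, h - 1);
        } else if (rollover) {
            // Rollover: light orange fill with the same dark border.
            g2.setColor(new Color(0xFFE1AC));
            g2.fillRect(1, 1, w - 2, h - 2);
            g2.setColor(new Color(0x4B4B6F));
            g2.drawRect(0, 0, w - 1, h - 1);
        }
        // In the normal state nothing is painted, so the toolbar background
        // shows through. Icon and text painting is omitted here.

        g2.dispose();
    }
}
```

Multiply that by every state, every element bound, every keyboard shortcut and every pixel of drop shadow, and you get a sense of what those libraries were doing under the hood.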
Pixels are magic. When you can control the color of every single pixel on the screen, you can do anything. But it’s also a lot of mundane, dirty and, at times, downright grunt work. I did my own TrueType rendering engine once, abandoning it when I saw how much work was involved in implementing support for the hinting tables. I did my own compositing engine with support for anti-aliasing and various Porter-Duff rules. I did my own component set and my own look-and-feel library. I did all of these because ten years ago I saw the pure power of pixels. And if you ever wondered why my blog is called “Pushing Pixels”, now you know.