August 23rd, 2012
Indie devs keep on bitching about how they’ve helped bring Twitter to the masses and invented some of the features later adopted for official use. Stop bitching. It was a two-way street. You amassed reputation, carved out a name for yourself, collected tens of thousands of followers on Twitter, your blogs and your podcasts. Some clients were so popular that offers were made and acquisition deals were signed.
It was a community for a few, and then the VC money came in, and the investors want to see a whole different kind of profit. Is it a different community than it was a few years ago? Of course it is. You grew in popularity and influence, and Twitter did the same. It’s no more yours today than it was back then.
Seth Godin talks about tribes. There are two types of tribes that are relevant here. For each such indie dev, there’s his consumer tribe that is interested in following his stream. And then there is also the creator tribe encompassing like-minded devs, sparking dialogues and conversations with threads that involve some people from their consumer tribes as well. In a sense, there is also a more amorphous meta-consumer tribe associated with that creator tribe, where people interested in following one creator are more likely to follow similarly oriented creators.
The creator tribe that I follow on Twitter is all ready to move to app.net, and yet there are only two people seriously talking about leaving Twitter. Because that creator tribe is ready to move, but not ready to lose its meta-consumer tribe. And so it stays, and keeps on bitching about staying.
August 23rd, 2012
One thing leads to another. It might not happen right away, but things part, drift and fall back together in quite unpredictable ways. I spent the first four years after high school studying geodesy and finding out that I really wanted to be a programmer. Another four years and a degree in computer science later, the two crossed paths in my first job.
My resume was, well, padded. During my last year of CS studies I took what was called a Software Lab, or rather two of them. In both I worked with PhD students, writing software to put their theories to the test. Both were about formal verification, and both involved computing and plotting proof graphs. I was also doing a part-time job on an internal research project that explored automatic translation between various academic and commercial verification toolkits. The general advice is to put the last two or three jobs on your resume and skip the rest, and I didn’t have much to skip. But it didn’t really matter.
As I sat down for my first interview, the guy across from me took one look at the first page of my resume and said “So you studied geodesy.” And that was pretty much it. I guess there were not a lot of computer science graduates coming across his desk who could also maintain an informed conversation about the UTM coordinate system and the difference between the ED50 and WGS72 geoid models. The girl I was supposed to replace would not be leaving for another three months, but it didn’t really matter. They found me a space in between desks, and she was almost on her way out in any case.
Those were the days of AIX and Windows NT. I didn’t do much in the way of “traditional” user interfaces. Sure, I threw together a few simple wizard dialogs, a couple of buttons, a couple of checkboxes, a combobox here and there. But most of the time I was working with map data, reading it from various sources and drawing colored polygons on a map canvas. After my first project was done in early 2002, I transferred to a different project that was just starting up. A project that would change the way I’ve looked at pixels ever since.
It was a throw-it-all-away and rewrite-everything-from-scratch kind of project. There were endless meetings about what technologies to choose and how to combine them. XML was on the rise. Java was on the rise. Servlets were on the rise. UML was on the rise. And you know, you put a few senior architects together in the same room for a few months, and things are bound to get complicated. As I joined the project, the decision had been made to build a hybrid client, Java below the surface and Delphi above it. Java code would do the heavy lifting of data transfer, persistence, querying and caching. But the UI requirements called for presenting complex ways to interact with data, and Swing was lacking in the department of commercial UI component libraries. On the other hand, Delphi had a surprising (at least for me) variety of very powerful component libraries from a wide variety of commercial vendors. Some of them even came with auto-generated binder layers that would allow Delphi components to do two-way data communication with Java code.
I was living in the Delphi world, still mostly working on the pixel level of map canvas. The guy next to me was doing the UI chrome around the canvas. One day I glanced at his screen and saw a bunch of calls to draw lines, rectangles and arcs. And the answer to my lazy “what’s that code mess doing?” was “drawing toolbars.”
That was late 2002. Microsoft Office was as popular as it would ever be, and the way it addressed ever-increasing feature growth and how it was exposed to the user was, for better or worse, the leading industry example. Office 97 converged menus and toolbars into a single unified concept, and Office 2000 took the fight to feature bloat by introducing adaptive menus, adaptive toolbars and rafted toolbars (where one single toolbar row would fit two or more toolbars, and buttons would go in and out of the overflow menu based on frequency of use). Office 2003 introduced task panes, and Windows XP, which had come out earlier, was an enormous success for Microsoft. These two also marked a step away from flat, rectangular, steel gray control surfaces, adding softer corners, gradients and drop shadows. The success of both Office and Windows was undeniable, and vendors of UI component libraries were expected to match the sophistication of the Office user interface in both feature set and appearance.
I’ve never thought about the underlying implementation of core UI components. They were there to use, and that was it. Sure, Motif or Windows NT buttons look ugly now, but back then? I didn’t care much about the aesthetic appearance of the interface. That might also explain my lack of interest in how those components were actually drawn on the screen. Not that there was anything fancy to draw – just flat rectangles and a couple of darker outlines. Windows XP changed that. Office 2003 changed that. But if you were to ask me about how those are drawn, I would shrug and say “Who cares? It’s just a service provided to you by the operating system.”
While the operating system provided a certain set of UI components, those were not enough to create the fully-featured UI seen in Office 2000 or Office 2003. And our architects were pretty adamant about seeing all those and more. Adaptive toolbars that can be stacked vertically, horizontally and on all sides of the screen? Yes. Collapsible task panes stacked in an accordion, interleaving toolbars and mini-map canvas? Yes. Pivot grids with multiple nested child grids, frozen columns and auto-filtering? Yes. And many more. And the best thing? There was so much competition between Delphi component vendors that you could have all that and more for quite reasonable prices. And most came with an option to purchase the source license as well. And this is how I saw the underlying implementation of those components.
When I looked at the full implementation of the specific component library that we ended up purchasing, it was a shocking eye-opener. I had to go back and ask the guy if what I was seeing was indeed true. That they had to listen to every single type of mouse event for proper handling of mouse actions. That they had to listen to every single combination of keyboard strokes and modifiers to perform matching operations. That they had to compute the bounds of every single element within the specific toolbar, within each toolbar row and within each window chrome edge.
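That bookkeeping is easy to underestimate. Purely as an illustration (this is my own toy sketch, not the vendor’s code, and it ignores the frequency-of-use adaptivity), here is the kind of layout-and-hit-test pass a single toolbar row needs before a single pixel can be drawn:

```python
def layout_toolbar(button_widths, toolbar_width, gap=2):
    """Assign left-to-right (x0, x1) bounds to each button that fits.

    Once a button no longer fits in the row, it and every button after it
    is demoted to the overflow menu, as rafted toolbars did.
    """
    bounds, x = [], 0
    for i, w in enumerate(button_widths):
        if x + w > toolbar_width:
            return bounds, list(range(i, len(button_widths)))
        bounds.append((x, x + w))
        x += w + gap
    return bounds, []

def hit_test(bounds, x):
    """Map a mouse x coordinate to the index of the button under it, if any."""
    for i, (x0, x1) in enumerate(bounds):
        if x0 <= x < x1:
            return i
    return None  # in a gap, or past the last button
```

Multiply this by every event type, every modifier combination and every chrome edge, and the scale of the work comes into focus.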
That they had to draw every single pixel of the component representation on the screen. Every single pixel of a toolbar outline, drop shadow, separator and overflow indicator. To precisely match the color of every single pixel. To precisely emulate the amount and texture of drop shadow so that it would match the appearance of XP and Office counterparts. To track every single mouse event to emulate the rollover, press and select events, drawing the inner orange highlights to match the appearance of XP. One pixel, one arc, one line, one rectangle at a time.
Pixels are magic. When you can control the color of every single pixel on the screen, you can do anything. But it’s also a lot of mundane, dirty and, at times, grunt work. I did my own TrueType rendering engine once, abandoning it when I saw how much work was involved in implementing the hinting tables. I did my own compositing engine with support for anti-aliasing and various Porter-Duff rules. I did my own component set and my own look-and-feel library. I did all of these because ten years ago I saw the pure power of pixels. And if you ever wondered why my blog is called “Pushing Pixels”, now you know why.
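For a taste of the per-pixel arithmetic such a compositing engine does, here is a minimal sketch of the Porter-Duff “source over” rule (my own toy formulation, assuming premultiplied alpha with channels in [0, 1], not the engine I actually wrote):

```python
def source_over(src, dst):
    """Composite one premultiplied RGBA pixel over another.

    Porter-Duff "source over": C_out = C_src + C_dst * (1 - A_src),
    applied to every channel, alpha included.
    """
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    inv = 1.0 - sa
    return (sr + dr * inv, sg + dg * inv, sb + db * inv, sa + da * inv)

# 50% black (premultiplied) over opaque white gives 50% gray.
print(source_over((0.0, 0.0, 0.0, 0.5), (1.0, 1.0, 1.0, 1.0)))
# → (0.5, 0.5, 0.5, 1.0)
```

Run that for every pixel of every anti-aliased edge and drop shadow, and “pushing pixels” stops being a figure of speech.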
August 21st, 2012
I don’t remember much from my first few years of programming. My father would bring me the latest issue of “Scientific American”, and I’d avidly read A. K. Dewdney’s column. Sometimes it would be just reading, as in the case of the caricature algorithm. Sometimes it would be actually putting the pseudo-code walkthroughs into real Basic code. I might have done a couple of Game of Life and Mandelbrot attempts, but I don’t remember having fun just translating those algorithms into code. Somehow, they were not mine, and I would spend most of the time playing games. Once I got my hands on the Commodore, playing would involve putting an audio tape in my small tape player and praying that it would not chew the tape in the middle of the load. “Boulder Dash” was my favorite.
When I was in junior high school, the schedule would have our entire class spend one day a week in an external computer facility. There was no formal training, but rather a random exercise given every week. I remember an exercise to print out the multiplication table, where most of the time would be spent on right-aligning numbers in each column. We spent the rest of the day goofing around and playing Xonix. And then came the final exercise in senior high school. The teacher had a list of “graduation” projects. Each project would be done by a pair of students, and each pair was free to select any project they wanted. My partner, Maxim, was also my academic “nemesis”.
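The right-alignment that ate most of that multiplication-table day is a one-liner now. A minimal sketch of the exercise as I remember it (the column width of 4 is my own guess):

```python
def multiplication_table(n, width=4):
    """Build an n-by-n multiplication table with right-aligned columns."""
    rows = []
    for i in range(1, n + 1):
        # rjust pads each product on the left so the columns line up
        rows.append("".join(str(i * j).rjust(width) for j in range(1, n + 1)))
    return "\n".join(rows)

print(multiplication_table(5))
```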
Our math teacher introduced the concept of the differential in the last few weeks of junior high. The final exam had a few of those, and a bonus question. After solving the regular questions, I spent what seemed like an eternity trying to crack that bonus question. I failed. After handing back the final scores, our math teacher showed the solution, which involved the opposite of the differential: the integral. He wanted to see who would be able to invent the notion of the integral based on our knowledge of the differential. I failed. The other guy, my nemesis? He solved it. Oh well, he was better at math than me.
Back to the final project in the computer lab. Our computer teacher told Maxim and me that she would be really happy if we chose a particular project. I remember that project as if it were yesterday. It was about scheduling classes, teachers and classrooms. The program would get a list of constraints – the capacity of each classroom, when each class (from a different school) comes to this installation, teacher preferences for days of the week and classes, teacher vacations and so on. Then it would print out a few possible allocations of teachers to classes to classrooms to days of the week. Maxim and I spent some time talking about it, and we had no idea how to do it. She might as well have asked us to write a chess program to beat Karpov. We chose a different project. She was disappointed.
My path after finishing high school took me to study geodesy and cartography. I spent the next four years studying spherical trigonometry, photogrammetry, general relativity and how it affects GPS calculations, and more map projections than I ever thought existed. But there was one course that was my absolute favorite. The teacher was a quiet, old, balding gentleman who taught us programming in Fortran 77. It was not general programming, much like the language itself. It was all about implementing efficient solutions to various math problems, like inverting sparse matrices or normalizing spherical harmonics. I didn’t care much about writing clear and clean code. I cared much more about writing clever code. The type of code where I would have four goto statements in three nested loops. The type of code where you would look at it a week after you wrote it and have no idea how it was supposed to work. You know, fun code. Because the assignments themselves were not fun. They were about implementing the solution, not finding it. The solutions had already been found long ago.
I didn’t complete that degree. I dropped out after four years, with one and a half more to go. I knew what I wanted to study, and I knew what I wanted to be. And most important, I knew what I didn’t want to be. Geodesy is an old discipline. The basic principles were studied and defined in the 19th century. There is not much innovation in the field, or at least not from what I could see. In one of our courses, the very first thing the teacher told us was that we had to buy a pocket calculator with 12 digits of precision. He said that if we had one of those, we’d pass his exam. That calculator is still with me, serving as a reminder. My work future was going to be dull and unchallenging. And so I dropped out.
I started afresh at the computer science department. I was only able to transfer three course grades, but it didn’t matter. I knew that I wanted to study programming full time, and I knew that I wanted to be a full-time programmer after that.
My first computer course was “Introduction to C”. Unlike my previous ventures into learning programming, this was actual formal education. And in the middle of the semester we were hit with recursion. I confess. I didn’t get it at the beginning. Sure, I wrote down everything the teacher said during that lecture. And then I stared at the homework. The exercise was to read in a definition of a two-dimensional maze and find a path through it. And at the beginning it was the same wall that I had hit back in senior high. And then I read the lecture notes again, and again, and started actually doing it. Because I wanted to get a good grade. And then I finally got it. “The Matrix” wouldn’t come out for another few years, but I had the Neo moment. I knew recursion.
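The maze exercise maps naturally onto a short recursive search. This is a sketch of the idea rather than the original C assignment; the maze encoding (‘#’ for walls, ‘.’ for open cells, ‘E’ for the exit) is my own:

```python
def find_path(maze, r, c, visited=None):
    """Recursively search a 2D maze for a path from (r, c) to the exit 'E'.

    Returns the path as a list of (row, col) cells, or None if unreachable.
    """
    if visited is None:
        visited = set()
    if (r < 0 or r >= len(maze) or c < 0 or c >= len(maze[0])
            or maze[r][c] == '#' or (r, c) in visited):
        return None          # base cases: off the board, a wall, or a repeat
    if maze[r][c] == 'E':
        return [(r, c)]      # base case: found the exit
    visited.add((r, c))
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rest = find_path(maze, r + dr, c + dc, visited)
        if rest is not None:
            return [(r, c)] + rest
    return None              # dead end; backtrack

maze = ["....#",
        ".##.#",
        ".#..E",
        ".#.##"]
print(find_path(maze, 0, 0))
# → [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 3), (2, 4)]
```

The recursion does the remembering: each call handles one cell and trusts its children with the rest, which is exactly the leap the lecture kept asking for.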
In the years since, I have interviewed a few dozen people. My main technical question has always been about recursion. I might be biased. I think that you can’t be a programmer if you don’t get recursion. I might be biased because of this mark of shame that I’ve carried with me since senior high. I might be biased because I was not able to “invent” recursion back then, and had to be shown what recursion is.
I took a lot of different courses that showed me the limits of what I get. Past a certain point, my brain just seemingly stops functioning. Elliptic curve cryptography, packet switching, cross-mapping of NP problems, finite automata. I know enough to truthfully say that I know nothing. Sure, I can maintain a dilettante conversation about any of these, but my eyes just glaze over when I try to read about the recent advances in any of them. But finally getting recursion was the point where I saw myself as a programmer.
Some people will say that I wasted four years of my life studying geodesy. Some people will say that those are years I’m never going to get back. But I don’t see it that way. Those are the years that showed me what I wanted to do with the rest of my life. I might have lost them not doing what I loved to do, but that was how I learned what I did love. As Frederick Phillips Brooks says in the epilogue of “The Mythical Man Month”, there are only a few people who have the privilege to provide for their families doing what they would gladly pursue for free, out of passion. I have traded a professional life full of dull moments and unfulfilled potential for a shorter, but much happier one. A life where I look forward to doing what I do every single morning. For as long as I am wanted, and beyond that.
August 20th, 2012
It must have been around 1985. I was in the sixth grade at the time. We’d heard about computers, of course. Those were the beasts that would guide spaceships into orbit, calculate flow fields around ballistic missiles or control high-precision machinery. Nobody had a personal computer or a game console at home, at least among the kids I knew in my school. The first three “Star Wars” movies were banned by the Soviet censorship, presumably because they would incite a grassroots rebellion against the evil empire. Somehow “The Terminator” got through the censorship blockade, and even on a small TV screen it was a magnificent experience, especially compared to the state of visual effects in contemporary Soviet-bloc sci-fi movies.
One day, a teacher fell ill. They had to find something to do with a class of 11-year-old boys and girls, and it so happened that a computer room was free. I never knew that our school had a computer room, and we never set foot in it again. But that day changed my life.
Three rows of computers, with monochrome dot-matrix monitors. There were more of us than computers, but not by much. And it so happened that I had a computer all to myself. The substitute teacher was a young lady, and she showed us something magical. A computer game. It sounds trivial today, but back then it was truly magical. A computer, all to myself, and I can play on it? But there was a catch. There always is, at least that’s how I remember my school days.
The teacher said that she would load a game on our computers, but only after we were done writing a computer program. A simple one, or so it seemed to all of us. A program that would take eight numbers as input, each pair specifying a point on a two-dimensional plane, and would print out the coordinates of the intersection of the two lines defined by those four points. And you know, after reading about computers guiding cosmonauts to space, running complex fluid simulations or powering a freaking robot from the future, it seemed so easy. She explained how to type a program and make it run, and how to handle user input. And then we were on our own.
I remember sitting there in front of the computer, thinking to myself that there’s supposed to be a button somewhere on the keyboard. A button that, when pressed, knows what to do with those eight numbers. How to treat them as coordinates on the two-dimensional plane, and how to output the coordinates of the intersection. Because that’s trivial. I mean, computers guide our space ships and ballistic missiles. I might not have the most powerful computer in front of me right here, but it surely knows how to compute that intersection point. Right? Right?
I eventually gave up and raised my hand. When the teacher came over, I asked her how to make the computer do this calculation. And she told me “well, you tell the computer what you want to do.” She must have seen my completely stunned face, and then she said something that changed my life.
Imagine that you’re a computer and you have these eight numbers. You have a piece of paper and pen. How do you compute that intersection point?
As these words tried to find their way through my make-believe model of how computers work, I struggled not to be disappointed. Here I was, so sure that computers could do any kind of computation, and here was the teacher telling me that I was supposed to imagine myself in its place. Isn’t that why we have computers, to do those sorts of things for us, with no errors and much more quickly?
And then I had to take a piece of paper and do the math. Myself. Taking four numbers, treating them as X and Y coordinates, and computing the numbers that define a line passing through those two points. Then solving the system of two equations that defines the intersection of the two lines. Making sure that I got all my indexes and signs right. And then typing the long formula into the computer, making sure that I got all my indexes and signs right. All the time refusing to believe that I actually had to do it. Me. Not the computer.
When I was done, I raised my hand again and the teacher came over. She had a piece of paper in her hand, and she typed the first eight numbers. The result was correct. Then she typed the next eight numbers. And my program crashed. She copied those eight numbers to my piece of paper and stepped away. And that was my first debugging session. On a piece of paper. Because after typing the same numbers into my program it crashed again. And again. And that’s not how it was supposed to happen. Because, well, in my make-believe world of how computers work, they didn’t crash.
As I plugged those eight numbers into my scribbled formula, it came out that the denominator was zero. How could that happen, if my program printed the right result for the first input? Frustrated, I drew a two-dimensional plane with X and Y axes, plotted the four points and connected them into lines. Of course. The lines were parallel. A corner case, if you will. My first corner case. Back to the program. I extracted the computation of the denominator into a separate line and tested for zero. If it was zero, the program would print a special message and stop. I was ready to call the teacher back. But then a thought crept up. She had caught my program with one special case. Was there another one? It turned out there was, and her next group of input numbers would test just that: the case where one of the lines was parallel to one of the axes. In that case, my denominator was still zero, but there was another way to solve the system.
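Reconstructed from memory rather than from that Basic listing: in the general line form a·x + b·y = c, the axis-parallel case needs no special handling at all, and only truly parallel (or coincident) lines zero out the denominator. A sketch:

```python
def line_through(p, q):
    """Coefficients (a, b, c) of the line a*x + b*y = c through points p and q."""
    (x1, y1), (x2, y2) = p, q
    a = y2 - y1
    b = x1 - x2
    return a, b, a * x1 + b * y1

def intersect(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4, or None if parallel."""
    a1, b1, c1 = line_through(p1, p2)
    a2, b2, c2 = line_through(p3, p4)
    det = a1 * b2 - a2 * b1        # the denominator; zero means parallel
    if det == 0:
        return None
    # Cramer's rule for the two-equation system
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

print(intersect((0, 0), (2, 2), (0, 2), (2, 0)))   # → (1.0, 1.0)
print(intersect((1, 0), (1, 5), (0, 2), (4, 2)))   # vertical line → (1.0, 2.0)
print(intersect((0, 0), (1, 1), (0, 1), (2, 3)))   # parallel → None
```

With floating-point inputs a real version would compare det against a small epsilon instead of exact zero.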
And then I was done. And I played that computer game for the rest of the session. I don’t remember how the other kids did. Some of them sat there until the end and didn’t know what to do, and some were playing the same game by the end of the session. I was so completely engrossed by the game that I hardly noticed anything around me. But that was the day that changed my life. The day that showed me that computers have to be told what to do. They can do any kind of computation, much faster than any human being. But there is no magic button that makes the computer program itself.
Later, I would devour A. K. Dewdney’s monthly columns in “Scientific American”, from the Mandelbrot set to the Game of Life and much more. Later, my parents would buy me one of the first Soviet micro-computers. Later, my brother-in-law would lend me his Commodore, and I would program both computers in Basic, even if there was no way to save what I wrote. And later, my school would have a special program that had us spend one day every week in an external computer facility, tackling problems of various sizes, with only one problem defeating each and every one of us. But that is a story for next time.
I wish I remembered the name of that teacher. The one who opened my eyes and guided me without force-feeding me the answer. You have my undying gratitude. Thank you from the bottom of my programmer’s heart.