But I never got around to actually doing it

June 14th, 2016

Around 2005 I was really into the whole field of non-photorealistic rendering (NPR). I pored over dozens of research papers and spent months implementing some of the basic building blocks and combining them to recreate the results from a couple of those papers. It even went as far as submitting a talk proposal to that year’s SIGGRAPH. I thought I had an interesting approach. All of the reviewers strongly disagreed.

Edge detection was one of the building blocks that I kept thinking about long after that rejection. A bit later I started working on a completely different project – exploring genetic algorithms. The idea there is that instead of coming up with an algorithm that correctly solves a problem, you randomly mutate parts of the current generation of candidate algorithms and evaluate the performance of each mutation. The hope is that eventually these random modifications “find” a path towards a solution that works for your input space. You might not understand exactly what’s going on inside the evolved algorithm, but as long as it gives you the right answers, that might not matter much.

My homegrown implementation was to come up with a set of computation primitives – basic arithmetic operations, a conditional and a loop – and set the engine to work on “solving” rather simple equations. It was quite slow, since I was essentially running my own completely unoptimized VM on top of Java’s own VM. As I started spending less time working on it and more time just lazily thinking about it, I kept bouncing two ideas around in my head. One was to switch my genetic engine to work at the level of JVM bytecode operations. Instead of having a double-decker of VMs, the genetic mutations and recombinations would be done at the bytecode level, and the results fed directly to the JVM. The second idea was to put the engine to work on something a bit more interesting – image edge detection.
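To make that concrete, here is a minimal sketch of the shape such an engine might take – a toy stack VM with a handful of arithmetic primitives, and a single-parent mutate-evaluate-select loop rather than a full population. Everything in it (the primitive set, the target equation, the hill-climbing selection) is illustrative rather than the original code, and the conditional and loop primitives are omitted for brevity:

```java
import java.util.Arrays;
import java.util.Random;

// A sketch of a genetic engine over a toy stack VM. The primitive set,
// target equation and selection scheme are all illustrative.
public class ToyGeneticEngine {
    enum Op { PUSH_X, PUSH_1, ADD, SUB, MUL }

    static final Random RND = new Random();

    // Run a candidate program on input x using a tiny operand stack.
    static double run(Op[] program, double x) {
        double[] stack = new double[program.length + 1];
        int top = 0;
        for (Op op : program) {
            switch (op) {
                case PUSH_X: stack[top++] = x; break;
                case PUSH_1: stack[top++] = 1.0; break;
                default:
                    if (top < 2) return Double.NaN; // malformed candidate
                    double b = stack[--top];
                    double a = stack[--top];
                    stack[top++] = (op == Op.ADD) ? a + b
                                 : (op == Op.SUB) ? a - b : a * b;
            }
        }
        return top == 1 ? stack[0] : Double.NaN;
    }

    // Fitness = squared error against the target equation f(x) = x^2 + 1.
    static double error(Op[] program) {
        double err = 0;
        for (double x = -2; x <= 2; x += 0.5) {
            double y = run(program, x);
            if (Double.isNaN(y)) return Double.MAX_VALUE;
            double target = x * x + 1;
            err += (y - target) * (y - target);
        }
        return err;
    }

    // Mutation: replace one random instruction with a random primitive.
    static Op[] mutate(Op[] program) {
        Op[] copy = program.clone();
        copy[RND.nextInt(copy.length)] = Op.values()[RND.nextInt(Op.values().length)];
        return copy;
    }

    public static void main(String[] args) {
        Op[] best = new Op[5];
        for (int i = 0; i < best.length; i++) {
            best[i] = Op.values()[RND.nextInt(Op.values().length)];
        }
        for (int gen = 0; gen < 100_000 && error(best) > 1e-9; gen++) {
            Op[] candidate = mutate(best);
            if (error(candidate) <= error(best)) best = candidate; // keep the better mutant
        }
        System.out.println(Arrays.toString(best) + " -> error " + error(best));
    }
}
```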

I spent months thinking about various aspects of what could be done with such an engine, and how novel the entire thing would be when it was all done. But I never got around to actually doing it.

Around 2007, as I was in the middle of working on a bunch of libraries for Swing (a few extra components, an animation module and a look-and-feel library), I fell in love with the idea outlined in some of the presentations around Windows Vista. I wrote about it in more detail a few years ago, but the core of it is rather simple – instead of drawing each UI widget as its own thing, you create a 3D mesh model of the entire UI, throw in a few lights and then hand it all over to the rendering engine to draw the whole window.

If you have two buttons side by side, with enough mesh detail and reflective textures you can have the buttons reflecting each other. You can mirror and distort the mouse cursor as it moves over the widget plane. When a button is clicked, you distort the mesh at the exact spot of the click and then bounce it back. Lollipop ripples, anyone?
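Here’s a rough sketch of what that click distortion could boil down to – a damped sine wave radiating out from the click point, sampled per vertex on every frame. Every name and constant here is made up for illustration; this is the flavor of the idea, not a real engine:

```java
// A sketch of per-vertex displacement for a click ripple: a damped sine
// wave radiating out from the click point. All constants are illustrative.
public class RippleSketch {
    static double rippleOffset(double vx, double vy,  // vertex position
                               double cx, double cy,  // click position
                               double t) {            // seconds since click
        double dist = Math.hypot(vx - cx, vy - cy);
        double speed = 300.0;      // wave front speed, pixels / second
        double wavelength = 40.0;  // pixels between crests
        double maxDepth = 6.0;     // displacement along the mesh normal
        double decay = 4.0;        // how quickly the ripple dies out

        double phase = (dist - speed * t) * 2.0 * Math.PI / wavelength;
        double damping = Math.exp(-decay * t) / (1.0 + dist / wavelength);
        return maxDepth * Math.sin(phase) * damping;
    }

    public static void main(String[] args) {
        // Sample the displacement of a vertex 50px from the click over time.
        for (double t = 0; t <= 0.5; t += 0.1) {
            System.out.printf("t=%.1fs offset=%.3f%n",
                    t, rippleOffset(100, 50, 50, 50, t));
        }
    }
}
```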

I spent months thinking about various aspects of what could be done with such an engine, and how novel the entire thing would be when it was all done. But I never got around to actually doing it.

Around 2010, as I wound down all my Swing projects, I decided it would be a good experience for me to dip my toes into the world of JavaScript. So I took the Trident animation library that was written in Java (with hooks into Swing and SWT) and ported it to JavaScript. It actually became my most-starred project on GitHub after it hit a couple of minor blogs.

I don’t know how things look in JS land now, but back in 2010 I wanted a bit more from the language, especially around partitioning functionality into classes. Prototype-based inheritance was there, but it was quite inadequate for what I wanted. That was probably my own fault, as I kept going against the grain of the language. As the initial excitement started wearing off, I considered where I wanted to take those efforts. In my head I kept going back to the demos I did for Trident JS, and particularly the Canvas object that was at the center of all of them.

Back in the Swing days my two main projects were a look-and-feel library (Substance) and a suite of UI components built around the Office ribbon and all of its supporting infrastructure (Flamingo). So as I started spending less time working on the code and more time just lazily thinking about it, I thought about writing a UI toolkit that would combine everything I had worked on in Swing and bring it to JavaScript. It would have all the basic UI widgets – buttons, checkboxes, sliders, etc. It would all be skinnable, porting over the code I already had in place in Substance. Everything would be animated by the Trident port. I was already familiar with the complexity of custom event handling (keyboard / mouse) from the ribbon component in Flamingo. And it would all be implemented on top of a single global Canvas object hosting the entire UI.
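The skeleton of such a toolkit is easy to sketch: one drawing surface, a list of widgets, and the toolkit itself doing the hit-testing and event routing that the browser normally does for you. The sketch below is in Java terms (where the Substance / Flamingo code actually lived), and every name in it is hypothetical – none of this is real code from those projects:

```java
import java.util.ArrayList;
import java.util.List;

// A sketch of the core of a canvas-hosted toolkit: no per-widget native
// peers, just one shared surface, a widget list, hit-testing and event
// routing done by the toolkit itself. All names here are hypothetical.
public class CanvasToolkitSketch {
    interface Widget {
        boolean contains(int x, int y);
        void paint();                // would draw onto the single shared canvas
        void onMouseDown(int x, int y);
    }

    static class Button implements Widget {
        final int x, y, w, h;
        final String label;
        Button(int x, int y, int w, int h, String label) {
            this.x = x; this.y = y; this.w = w; this.h = h; this.label = label;
        }
        public boolean contains(int px, int py) {
            return px >= x && px < x + w && py >= y && py < y + h;
        }
        public void paint() {
            // A Substance-style skin painter would render the background,
            // border and text of this button onto the shared canvas here.
        }
        public void onMouseDown(int px, int py) {
            System.out.println("'" + label + "' pressed at " + px + "," + py);
            // A Trident-style press animation would kick off here.
        }
    }

    final List<Widget> widgets = new ArrayList<>();

    // The toolkit, not the platform, decides which widget owns the event.
    void dispatchMouseDown(int x, int y) {
        for (int i = widgets.size() - 1; i >= 0; i--) { // topmost widget first
            Widget w = widgets.get(i);
            if (w.contains(x, y)) { w.onMouseDown(x, y); return; }
        }
    }

    void paintAll() {
        for (Widget w : widgets) w.paint();
    }

    public static void main(String[] args) {
        CanvasToolkitSketch toolkit = new CanvasToolkitSketch();
        toolkit.widgets.add(new Button(10, 10, 80, 24, "OK"));
        toolkit.widgets.add(new Button(100, 10, 80, 24, "Cancel"));
        toolkit.dispatchMouseDown(110, 20); // lands on "Cancel"
    }
}
```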

I spent months thinking about various aspects of what could be done with such an engine, and how novel the entire thing would be when it was all done. But I never got around to actually doing it. Flipboard did exactly that five years later. The web community wasn’t very pleased about it.

Some say that ideas are cheap. I wouldn’t go quite as far as that, even though I might have said it a few times in the past. I’d say that there certainly are brilliant ideas, and the people behind them deserve full credit – but only when those ideas are put into reality. Only when you put in the time and the effort to make those ideas actually happen. Don’t tell me what you’re thinking about. Show me what you did with it. For now I’m zero for three on my grand ones.