DragonCon 2016 parade highlights

September 19th, 2016

As usual, DragonCon comes to Atlanta over the Labor Day weekend. These are my personal highlights from the opening parade this year.

Around 2005 I was really into the whole field of non-photorealistic rendering (NPR). I pored over dozens of research papers and spent months implementing some of the basic building blocks and combining them to recreate the results from a couple of those papers. It even got as far as submitting a talk proposal to that year’s SIGGRAPH. I thought I had an interesting approach. All of the reviewers strongly disagreed.

Edge detection was one of the building blocks that I kept on thinking about long after that rejection. A bit later I started working on a completely different project – exploring genetic algorithms. The idea there is that instead of coming up with an algorithm that correctly solves a problem, you randomly mutate the current generation of candidate algorithms and evaluate the performance of each mutation. The hope is that eventually these random modifications “find” a path towards the optimal solution that works for your input space. You might not understand exactly what’s going on, but as long as it gives you the right answers, that part might not be that important.
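To make the shape of it concrete, here is a minimal sketch of that kind of evolutionary loop in Java – the candidate type, the mutation operator and the fitness function are all placeholders, not code from the actual project:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;
import java.util.function.UnaryOperator;

// A bare-bones evolutionary loop: mutate the current generation, score every
// candidate, keep the best performers, repeat. The candidate representation,
// mutation and fitness are all supplied by the caller.
public final class Evolution<C> {
    public C run(List<C> seed, UnaryOperator<C> mutate,
                 Function<C, Double> fitness, int generations, int survivors) {
        List<C> current = new ArrayList<>(seed);
        for (int g = 0; g < generations; g++) {
            // Keep the parents around and add one mutated child per parent.
            List<C> next = new ArrayList<>(current);
            for (C candidate : current) {
                next.add(mutate.apply(candidate));
            }
            // Sort by fitness, best first, then cull down to the survivors.
            next.sort(Comparator.comparingDouble((C c) -> -fitness.apply(c)));
            current = new ArrayList<>(next.subList(0, Math.min(survivors, next.size())));
        }
        // The fittest candidate of the final generation.
        return current.get(0);
    }
}
```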

My homegrown implementation was to come up with a set of computation primitives – basic arithmetic operations, a conditional and a loop – and let the engine work on “solving” rather simple equations. It was rather slow, since I was basically running my own completely unoptimized VM on top of Java’s own VM. As I started spending less time working on it and more time just lazily thinking about it, I kept bouncing two ideas around in my head. One was to switch my genetic engine to work at the level of JVM bytecode operations. Instead of having a double-decker of VMs, the genetic modifications and recombinations would be done at the bytecode level, and then fed directly to the JVM. The second idea was to switch to doing something a bit more interesting – putting that genetic engine to work on image edge detection.
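A hypothetical reconstruction of what those primitives might have looked like – candidate “programs” as small expression trees interpreted on top of the JVM (hence the double-decker of VMs; the loop primitive is left out for brevity):

```java
// A hypothetical reconstruction of the computation primitives described above -
// not the original code. Candidate "programs" are small expression trees that
// are interpreted on top of the JVM.
interface Expr {
    double eval(double x);
}

final class Const implements Expr {
    private final double value;
    Const(double value) { this.value = value; }
    public double eval(double x) { return value; }
}

final class Var implements Expr {
    public double eval(double x) { return x; }
}

final class BinOp implements Expr {
    enum Op { ADD, SUB, MUL, DIV }
    private final Op op;
    private final Expr left, right;
    BinOp(Op op, Expr left, Expr right) { this.op = op; this.left = left; this.right = right; }
    public double eval(double x) {
        double a = left.eval(x), b = right.eval(x);
        switch (op) {
            case ADD: return a + b;
            case SUB: return a - b;
            case MUL: return a * b;
            case DIV: return b == 0.0 ? 0.0 : a / b; // protected division, a common convention
            default:  throw new AssertionError(op);
        }
    }
}

final class IfPositive implements Expr {
    private final Expr cond, then, otherwise;
    IfPositive(Expr cond, Expr then, Expr otherwise) {
        this.cond = cond; this.then = then; this.otherwise = otherwise;
    }
    public double eval(double x) {
        return cond.eval(x) > 0 ? then.eval(x) : otherwise.eval(x);
    }
}
```

Fitness for “solving” an equation would then be something like the mean squared error between eval(x) and the target values over a set of sample inputs, with mutations swapping out or regrowing random subtrees.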

I’ve spent months thinking about various aspects of what could be done with such an engine, and how novel the entire thing would be when it’s all done. But I never got around to actually doing it.

Around 2007 as I was in the middle of working on a bunch of libraries for Swing (a few extra components, an animation module and a look-and-feel library), I fell in love with the idea outlined by some of the presentations around Windows Vista. I wrote about that in more detail a few years ago, but the core of it is rather simple – instead of drawing each UI widget as its own thing, you create a 3D mesh model of the entire UI, throw in a few lights and then hand it over to the rendering engine to draw the whole window.

If you have two buttons side by side, with just enough mesh detail and reflective textures you can have the buttons reflecting each other. You can mirror and distort the mouse cursor as it moves over the widget plane. As the button is clicked, you distort the mesh at the exact spot of the click and then bounce it back. Lollipop ripples, anyone?
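For illustration only – not from any real engine – the click distortion could be as simple as pushing the mesh in around the click point with a radial falloff and letting it spring back over time. All the constants below are made up for the example:

```java
// Illustrative sketch of the click distortion: a dent around the click point
// with a radial falloff that springs back over time. Constants are arbitrary.
final class ClickRipple {
    private static final double RADIUS = 40.0; // how far the dent spreads, in pixels
    private static final double DEPTH = 8.0;   // initial dent depth
    private static final double DECAY = 4.0;   // how quickly it springs back, per second

    private final double clickX, clickY;
    private final long startMillis;

    ClickRipple(double clickX, double clickY, long startMillis) {
        this.clickX = clickX;
        this.clickY = clickY;
        this.startMillis = startMillis;
    }

    /** Z offset to apply to the mesh vertex at (x, y) at the given time. */
    double displacementAt(double x, double y, long nowMillis) {
        double dx = x - clickX;
        double dy = y - clickY;
        // Radial falloff: vertices right under the click move the most.
        double falloff = Math.exp(-(dx * dx + dy * dy) / (RADIUS * RADIUS));
        // Spring back: the dent fades out exponentially after the click.
        double elapsedSeconds = (nowMillis - startMillis) / 1000.0;
        double bounce = Math.exp(-DECAY * elapsedSeconds);
        return -DEPTH * falloff * bounce;
    }
}
```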

I’ve spent months thinking about various aspects of what could be done with such an engine, and how novel the entire thing would be when it’s all done. But I never got around to actually doing it.

Around 2010 as I wound up all my Swing projects, I decided that it would be a good experience for me to dip my toes into the world of Javascript. So I took the Trident animation library that was written in Java (with hooks into Swing and SWT) and ported it to Javascript. It was actually my most-starred project on Github after it hit a couple of minor blogs.

I don’t know how things look in JS land these days, but back in 2010 I wanted a bit more from the language, especially around partitioning functionality into classes. Prototype-based inheritance was there, but it was quite inadequate for what I wanted. It probably was my fault, as I kept on going against the grain of the language. As the initial excitement started wearing off, I considered where I wanted to take those efforts. In my head I kept on going back to the demos I did for Trident JS, and particularly the Canvas object that was at the center of all of them.

Back in the Swing days my two main projects were the look-and-feel library (Substance) and a suite of UI components that had the Office ribbon and all the supporting infrastructure around it (Flamingo). So as I started spending less time working on the code and more time just lazily thinking about it, I thought about writing a UI toolkit that would combine everything I had worked on in Swing and bring it to Javascript. It would have all the basic UI widgets – buttons, checkboxes, sliders, etc. It would all be skinnable, porting over the code that I already had in place in Substance. Everything would have animations from the port of Trident. I was already familiar with the complexity of custom event handling (keyboard / mouse) from the ribbon component in Flamingo. And it would all be implemented at the level of the global Canvas object that would host the entire UI.
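The idea was a Javascript toolkit drawn onto a single HTML Canvas, but the architecture is easier to show in Swing terms, since that’s the vocabulary of everything above: one host surface owns all the widgets, paints them itself and does its own hit-testing for events. A rough sketch, nothing more:

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Rectangle;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.util.ArrayList;
import java.util.List;
import javax.swing.JComponent;

// One host surface that owns every widget: the toolkit paints all of them onto
// a single canvas and routes mouse events with its own hit-testing, instead of
// each widget being a native component of its own.
final class CanvasHost extends JComponent {
    interface Widget {
        Rectangle bounds();
        void paint(Graphics g);
        void onClick();
    }

    private final List<Widget> widgets = new ArrayList<>();

    CanvasHost() {
        addMouseListener(new MouseAdapter() {
            @Override
            public void mouseClicked(MouseEvent e) {
                // The toolkit, not the platform, decides which widget was hit.
                for (Widget w : widgets) {
                    if (w.bounds().contains(e.getPoint())) {
                        w.onClick();
                        repaint(w.bounds());
                        return;
                    }
                }
            }
        });
    }

    void addWidget(Widget widget) {
        widgets.add(widget);
        repaint(widget.bounds());
    }

    @Override
    protected void paintComponent(Graphics g) {
        // Every widget is drawn by the toolkit onto this one surface.
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, getWidth(), getHeight());
        for (Widget w : widgets) {
            w.paint(g);
        }
    }
}
```

Everything else – layout, skinning, animations, keyboard focus – becomes the toolkit’s job, which is exactly where the Substance, Trident and Flamingo pieces would have slotted in.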

I’ve spent months thinking about various aspects of what could be done with such an engine, and how novel the entire thing would be when it’s all done. But I never got around to actually doing it. Flipboard did that five years later. The web community wasn’t very pleased about it.

Some say that ideas are cheap. I wouldn’t go as far as that, even though I might have said it a few times in the past. I’d say that there are certainly brilliant ideas, and the people behind them deserve the full credit. But only when those ideas are put into reality. Only when you put the time and the effort into making those ideas actually happen. Don’t tell me what you’re thinking about. Show me what you did with it. For now I’m zero for three on my grand ones.

Surveillance

February 17th, 2016

Gotta hand it to the FBI. Take their proposal of a completely custom system build that would circumvent various protections that are designed to keep people away from your information, and consider it on purely technical merits. It’s simple to explain to technical people, and it’s simple to explain to people who are not that well-versed in technology.

But consider this. There is no such thing as a safeguarded backdoor. Do you really believe that once one government agency gets its hands on such a system build, it will only be used to help the “good guys”? If you do, I’d love to live in the fantasy world you’re inhabiting.

In the real world that the rest of us live in, this will be nothing short of a disaster. This custom build will get shared between various government agencies and branches, with an absolute guarantee of two things happening. First, it will get to an agency that is overseen solely by secret courts that rubber-stamp pretty much every request. Second, it will get into the hands of the general public, sooner rather than later – through social-engineered hacking or another Snowden-like act of political activism.

And then there’s another absolute guarantee. Let’s say for a minute that if you’re a law-abiding US citizen, then the US government is the good guys. Then there are other governments that are our allies, which makes them good guys by proxy, and other governments that are our enemies, which makes them the bad guys.

What is going to stop those other governments from demanding access to the same special system build? How many countries can a multi-national corporation withdraw its business from before it has no more places left to do business in? How do you, as a supporter of lawful information “extraction”, decide which laws you agree with and which ones step over “the line” that separates the good guys from the bad guys?

There’s not a single line in Tim Cook’s letter that is a gratuitous exaggeration of the dangers that lie ahead. I’ve spent the first twenty years of my life living in the communist USSR, where it was pretty safe to assume that the state had the capabilities and the means to do mass surveillance of anybody and everybody.

How does the self-aggrandizing beacon of democracy turn into the omnipresent surveillance state? Two ways. Gradually, then suddenly (in the mighty words of Ernest Hemingway). Just don’t tell your kids that you didn’t see it coming.

Tech reporting

September 28th, 2015

Two types of tech reporting / analysis out there.

The intellectually honest, hard-working kind, where you look at publicly announced products (and maybe vaguely hinted future plans) and invest the effort to make intelligent, thoughtful, researched predictions of how the different players in different interrelated markets will evolve over the next couple of years.

And then there’s the kind where you “build your reputation” on top of some anonymous source inside the company who feeds you the actual plans and blueprints. The lazy kind, where you’re just a conduit for plans and upcoming product announcements, always at the mercy of that source staying employed in that position of “close knowledge”. The kind that is disguised as “original content”, and is anything but.

Guess which one is more respectable even if you might make a mistake or two down the road. Play the long game.