December 26th, 2013

In the world of unlikely NFL playoff scenarios

Random musings as the NFL gets into the last week of the regular season, with both the Saints and the Cardinals at 10-5 and only one of them going to the playoffs. Since they’re not playing each other, we may see an 11-5 team miss the playoffs this year. And only three years ago the Seahawks won their division and went to the playoffs at 7-9.

There’s some math laid out in here [PDF] that looks at the schedule balance of an NFL season, and while on average we do see the best teams from each conference going to the playoffs, I’ve always wondered what the worst-case scenario is.

There are two extremes here. The first is how bad you can be and still get to the playoffs. The second is how good you can be and still watch the postseason on TV.

The first one is simple. Each conference has four divisions, and every division is guaranteed a spot in the playoffs (aka Seahawks ’10). If you’re not familiar with how the regular season schedule is determined, for the purposes of this first extreme it’s enough to know that each team plays the three teams in its division twice (home / road), and then 10 games elsewhere in the league. What’s the absolute worst? Well, you can get into the playoffs with zero wins. That’s right, zero. How? Imagine a division with four really bad teams, every game in that division ending 0-0 (or in any other tie), and every team in that division losing all 10 of its non-division games. In that case you’d eventually get to a very awkward coin toss to determine which of these four teams gets “first place” in the division. Unlikely? Extremely. Possible? Of course.
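The arithmetic of that scenario is easy to verify with a few lines of code. A minimal sketch (team names are placeholders): four teams, each pair of division rivals meeting twice with every game a tie, and ten non-division losses apiece.

```python
# Zero-win playoff scenario: all intra-division games end in ties,
# all non-division games are losses.
from itertools import combinations

teams = ["A", "B", "C", "D"]
records = {t: {"wins": 0, "losses": 10, "ties": 0} for t in teams}

# Each pair of division rivals plays twice; every game is a tie.
for t1, t2 in combinations(teams, 2):
    for _ in range(2):
        records[t1]["ties"] += 1
        records[t2]["ties"] += 1

for t in teams:
    r = records[t]
    print(t, f'{r["wins"]}-{r["losses"]}-{r["ties"]}')  # each team ends 0-10-6

# All four records are identical, so the usual tiebreakers
# (head-to-head, division record, etc.) are identical too --
# the division winner comes down to a coin toss.
assert len({tuple(r.values()) for r in records.values()}) == 1
```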

Now to the other extreme. How many games can you win and still miss the playoffs? The answer is 14 (out of the 16 games you’re playing) – if my math is correct, of course. Let’s look at the scheduling rules more closely.

When the league expanded to 32 teams, it brought a very nice balance to the divisions themselves and to the schedule. Two conferences, four divisions each, four teams each. All hail the powers of 2! By the way, there’s additional symmetry in that you get to play / host / visit every other team every 3/4/6/8 years (depending on the division association).

Back to who gets to the playoffs. Every division sends its first-place team, with two more spots (wildcards) left for the two best teams in the conference after the division leaders are “removed” from the equation. This means you can be a really good team and still not make it into those six. The conditions are quite simple, really (as one of the Saints / Cardinals will see this Sunday). The first is that an even better team in your division takes first place. The second is that there are two better teams elsewhere in the conference (such as the 49ers, who have already secured the first wildcard spot in the NFC).

Let’s look at the numbers now. How can we get to the 14-2 record and still miss the playoffs?

In the following scenario we have NFC East as NE, NFC West as NW, NFC North as NN, NFC South as NS, and their counterparts in the AFC as AE, AW, AN and AS. Let’s choose three divisions in the NFC, say NE, NN and NS.

A team in NE is playing 6 games in NE, 4 in NN, 4 in AW and 2 in NS/NW. A team in NN is playing 6 games in NN, 4 in NE, 4 in AN and 2 in NS/NW. A team in NS is playing 6 games in NS, 4 games in NW, 4 games in AE and 2 games in NN/NE.

In general you meet all teams in your division twice, all teams in one other division in your conference once, all teams in one division in the other conference once, and then two teams from the other two divisions in your conference that finished in the same place as you last year.
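As a sanity check of that breakdown, here is the 16-game season assembled from its parts, with the numbers taken straight from the rules above:

```python
# The schedule formula for any one team, piece by piece.
division_games = 2 * 3   # home and road vs. the 3 division rivals
intra_conf_paired = 4    # one game vs. each team of a paired division in your conference
inter_conf_paired = 4    # one game vs. each team of a paired division in the other conference
same_place_games = 2     # one game each vs. the other two divisions in your conference,
                         # against the teams that finished in the same place as you

total = division_games + intra_conf_paired + inter_conf_paired + same_place_games
print(total)  # 16 -- a full regular season
```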

What we’re trying to do is to get as many wins as possible for the #2 team in each one of our divisions (NE, NN and NS). There are only two wildcards available in each conference, and we don’t care what happens in NW or the entire AFC.

For each pair of teams in NE, NN and NS we want to maximize the number of wins while still keeping in mind that they play each other. This year each team from NE plays each team in NN once. And each team in NS plays one team in NN and one team in NE – based on its position in the division last year.

Let’s look at NS first. Teams #1 and #2 get four wins each playing #3 and #4 in their division. Then they split the wins in their two games against each other, getting to a 5-1 record each. They then win all 4 games against NW, getting to 9-1, and all 4 games against AE, getting to 13-1. Finally, assuming that our two teams finished last year in positions that get them scheduled against NE / NN teams that will not finish at #1 / #2 this year, both teams get to 15-1 – all without taking a single win away from the four teams in NE / NN that we’ll be looking at shortly.

Now to NE / NN. We’ll look at NE; the same logic applies to NN. Once again, teams #1 / #2 win all four games against #3 / #4 and split their own two matches, getting to 5-1 each. Now they play four games against NN. They win their games against the #3 / #4 teams, getting to 7-1 each, and split the wins against #1 / #2. We need to split those wins in order not to take away “too many” wins from the top two teams in NN. So we end up at 8-2. They then win all four games against AW, getting to 12-2, and add one win against NW, getting to 13-2. Finally, they have one game to play against NS. Applying the same selection logic, the best scenario for us is to get them scheduled against a team that is not at #1 / #2 this year (but was at the same position they were last year), which gets both teams to 14-2.

And the same goes for the first two teams in NN, getting both of them to 14-2. Which is why we need to split the NE/NN games between the #1/#2 teams.

Now we have the #2 team in NS at 15-1 and the #2 teams in both NE and NN at 14-2 each. One of them will have to stay out of the playoffs. Unlikely? Extremely. Possible? Of course.
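The win counts above are easy to double-check mechanically. A small sketch (NS2 / NE2 are shorthand for the #2 teams in those divisions; each step mirrors one sentence of the scenario):

```python
# Tallying wins for the #2 team in NFC South (NS2)...
ns2_wins = 4      # sweeps #3 / #4 in its own division
ns2_wins += 1     # splits the two games vs. the division leader
ns2_wins += 4     # sweeps NW
ns2_wins += 4     # sweeps its paired AFC division (AE)
ns2_wins += 2     # wins its two "same-place" games vs. NE / NN also-rans
print(ns2_wins)   # 15 -- a 15-1 record

# ...and for the #2 team in NFC East (NE2); NN's #2 mirrors this exactly.
ne2_wins = 4      # sweeps #3 / #4 in its own division
ne2_wins += 1     # splits with the division leader
ne2_wins += 2     # beats NN's #3 / #4
ne2_wins += 1     # splits with NN's #1 / #2
ne2_wins += 4     # sweeps its paired AFC division (AW)
ne2_wins += 1     # beats its "same-place" opponent from NW
ne2_wins += 1     # beats its "same-place" opponent from NS (not NS's #1 / #2)
print(ne2_wins)   # 14 -- a 14-2 record
```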

Waving hands in the air, it feels like the first scenario is much less likely to happen, given how few ties we usually see in the league. Even though it can happen in any one of the eight divisions, while the second scenario involves three divisions in the same conference, it’s still much less likely. So what if we remove the ties from the equation?

A 4-team division has every team playing every other team twice, for a total of 12 intra-division games. If no game ends in a tie, the most extreme case is that every team wins 3 and loses 3 of its division games, and loses all 10 of its other games, for a 3-13 record each. One of them will go to the playoffs. That would also answer the question of how many games you can lose and still go to the playoffs. In the previous scenario (no wins), every team in the division had a 0-10-6 record, so it’s “only” 10 losses. In this scenario a 13-loss team goes to the playoffs.
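This one is also easy to verify in code. A sketch of the balanced double round-robin, with every pair of division rivals splitting its two games and everyone losing outside the division (team names are placeholders):

```python
# The no-ties version: each pair of division rivals splits its two games,
# and all ten non-division games are losses.
from itertools import combinations

teams = ["A", "B", "C", "D"]
wins = {t: 0 for t in teams}
losses = {t: 10 for t in teams}  # all ten non-division games lost

for t1, t2 in combinations(teams, 2):
    # each pair meets twice; they split the two games
    wins[t1] += 1; losses[t2] += 1
    wins[t2] += 1; losses[t1] += 1

for t in teams:
    print(t, f"{wins[t]}-{losses[t]}")  # every team ends 3-13
```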

It would appear that this new extreme is more likely to happen, as it only involves teams in a single division, while the other one (14-2 and missing the playoffs) involves teams in three divisions.

Now two questions remain. Can we get to a 15-1 record and stay out of the playoffs? And, more importantly, is there a fatal flaw in the logic outlined above?

December 26th, 2013

Metric-driven design

One of my recent resolutions (not for 2014, but for mobile software in general in the last few months) was to evaluate designs of new apps and redesigns of existing apps from a position of trust. Trust that the designers and developers of those apps have good reasons to do what they do, even if it’s only one or two steps on the longer road towards their intended interaction model. But Twitter’s recent redesign of their main stream keeps on baffling me. Setting aside the (somewhat business-driven, from what I gather) decision of “hiding” the mentions and DMs behind the action bar icons and adding the rather useless discover / activity tabs, I’m looking at the interaction model in the main (home) stream.

Twitter never got to the point of syncing the stream position across multiple devices signed into the same account. There is at least one third-party solution to do that, which requires third-party apps to use its SDK and users to use those apps. The developer of that third-party solution has repeatedly stated that in his opinion Twitter is not interested in discouraging casual users who check their streams every so often. If you start following random (mostly PR-maintained) celebrity streams, it’s easy to get lost in the Twitter sea, and when you check in every once in a while and see hundreds – if not thousands – of unread tweets, you might start feeling that you’re not keeping up.

When I reached the aforementioned decision a few months ago, I uninstalled all the third-party Twitter apps I had on my phone and switched to the official app. It does a rather decent job of remembering the stream position, as long as – from what I can see – I check the stream at least once every 24 hours. When I skip a day, the stream jumps to the top. It also seems to do that if the app refreshes the stream after I rotate the device, so some of this skipping can be attributed to bugs. But in general, if I’m checking in twice a day and am careful not to rotate the device, the app remembers the last position as it loads the tweets above it.

In the last release Twitter repositioned the chrome around the main stream, adding discover / activity tabs above it and the “what’s happening” box below. While they encourage you to explore things beyond your main stream, it also looks like they’re aware that these two elements take valuable vertical space during the scrolling. And the current solution is to hide these two bars when you scroll down the stream.

And here’s what baffles me the most. On one hand, the app remembers the stream position, which means that I need to move the content down to get to the newer tweets (as an aside, with “natural” scrolling I’m not even sure if this is scrolling up or scrolling down). On the other hand, the app hides the top tabs / bottom row when I move the content up.

Is the main interaction mode with this stream getting to the last unread tweet and then going up the stream to skim the unread tweets, as hinted by remembering the stream position? Or is it getting bumped to the top of the stream and scrolling a few tweets down just to sample the latest entries, as hinted by hiding the two chrome rows and providing more space during scrolling?

I don’t want to say what the app should or shouldn’t do in the stream (as pointed out by M.Saleh Esmaeili). It’s just that I can’t get what the designers intend the experience to be.

A few days ago The Verge posted an article about metric-driven design in Twitter’s mobile apps, and for me personally the saddest part of that article is how much they focus on engagement metrics and how little the guy talks about informed design. Trying to squeeze every possible iota of “interaction” out of every single element on the screen – on its own – without talking about the bigger picture of what Twitter wants to be as a designed system. Experiments are fine, of course. But jacking up random numbers on your “engagement” spreadsheets and having those dictate your roadmap (if one can even exist in such a world) is kind of sad.

When every interaction is judged by how much it maximizes whatever particular internal metric your team is tracking, you may find yourself dead-set on locating the local maximum of an increasingly fractured experience, with no coherent high-level design, and no clear path that you’re taking to arrive at the next level. Or, as Billie Kent in Boardwalk Empire says, “always on the move, going nowhere fast.”

October 4th, 2013

An unfair question

As part of the “In Motion” series, I did a few interviews about screen graphics and the way they are portrayed in futuristic sci-fi movies, and one of the “usual” questions I ask is where the person sees human-computer interaction going in the next few decades.

And then, as I was talking with Scott Chambliss, the production designer of “Star Trek”, about how he approached designing the computer environment of the Enterprise Bridge, especially given that it’s happening in a rather distant future (250 years from now, give or take), I realized that I’m not really being fair.

Asking such a question immediately puts the other person on the defensive. Look at where we were 25 years ago, and look at where we are now. The pace of technological evolution is incredible, and there’s an amazing amount of research going in all these different directions, some proving to be niche experimentation, and some reaching and changing the lives of hundreds of millions of people. Asking somebody (who is not an extroverted futurist) to predict what will happen in the next 25 years is unfair. There’s just no way to do that, and there’s the extra layer of being indexed forever and having people point fingers at your old self and how completely wrong you were at the time you made that prediction.

So here’s my resolution. I’m not going to ask this question any more. No more “Where do you see human-computer interaction going in the next 25 years?” Instead, I’m going to ask about where they would like it to go. What is bothering them now, and how can that be eliminated? How can this make our lives better? How can it enrich us without isolating us even more from our fellow human beings?

My own personal take on this is that interacting with computers is too damn hard. Even given that I write software for a living. Computers are just too unforgiving. Too unforgiving when they are given “improperly” formatted input. And way too unforgiving when they are given properly formatted input that leads to an unintentionally destructive output. The way I’d like to see that change is for computers to be more “human” on both sides: to understand both the imperfections of the way human beings form their thoughts and intent, and the potential consequences of that intent.

What about you? Where would you take the field of HCI in the next 25 years?

June 6th, 2013

Skeuomorphic. One louder. To eleven.

In my previous lifetime I was a Swing developer. And I liked shiny things. As proof, here’s the pinnacle (or so I thought, at least) of my explorations in making shiny, glossy, glitzy buttons. That was around April 2006.

Different UI toolkits provide different capabilities for controlling visual and behavioral aspects. Putting the technical details aside, though, UI control styling usually works at the level of an individual control.

And so, as I was working on my own look-and-feel library, I heard more and more tidbits about Vista. It was released in January 2007, but it had a long [really, really long] history. People kept talking about the three “pillars”, and I was mainly interested in the Presentation one. I don’t have a link, and I can’t even tell whether it was a feature that was eventually shelved or just a rumor. But when I heard it, it made a long-lasting impression on me.

The gist of it was that the entire UI is a 3D model. You know how they say that buttons should look like something that can be pressed. So you have some kind of z-axis separation: drop shadows, bevels, some kind of gradient that hints at a convex surface. And don’t forget to throw in a global lighting model. That bit of pixel-feature rumor said that the entire UI – from the window level down to an individual control – would be an actual 3D model, with each object living in its own z plane.

So instead of styling each control to create an illusion of z separation (with whatever 2D images are backing each individual control), you would have a spatial model. Each control has its own 3D geometry. Now all you need to do is place the controls in the 3D space, create a few global lights, create a bunch of textures to use on the controls and voila – ship it over to the GPU to compute the final pixels. Want to restyle the UI? Supply a different texture pack and a different lighting model. All the rest is taken care of by the system. Have your own custom control library? Define the 3D meshes for them. All the rest is taken care of by the system.
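To make the rumored model a bit more concrete, here is a toy sketch of the idea. Every name in it is invented for illustration, and a real scene graph would be vastly more involved; the point is only that restyling becomes a data swap (textures and lights) while the geometry stays put:

```python
# A toy "UI as a 3D scene" sketch: controls are meshes in z planes,
# and a restyle swaps textures and lights without touching geometry.
from dataclasses import dataclass, field

@dataclass
class Mesh:
    name: str        # e.g. "ok-button", "window"
    z_plane: float   # each control lives in its own z plane

@dataclass
class Scene:
    meshes: list = field(default_factory=list)
    lights: list = field(default_factory=list)    # global lighting model
    textures: dict = field(default_factory=dict)  # the swappable texture pack

    def restyle(self, textures, lights):
        # a new skin is just data -- no per-control repainting code
        self.textures, self.lights = textures, lights

scene = Scene()
scene.meshes.append(Mesh("window", z_plane=0.0))
scene.meshes.append(Mesh("ok-button", z_plane=1.0))
scene.restyle({"ok-button": "brushed-metal.png"}, ["key-light", "fill-light"])
```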

Now imagine what you can do. If you place two buttons side by side, with just the right tweaking of the meshes and just the right amount of reflection on the textures you can have a button reflecting parts of other buttons around it. And the other way around. You know, all those shiny reflection balls from the early ray tracing demos.

Or, if you model the mouse cursor as an object moving above the window, you can have the back of it reflecting in those controls that it’s passing over. If your control mesh has some kind of a curved contour, the cursor shape would get distorted accordingly as it glides off of the edges.

Or, as you press the button, the press distorts the button mesh at the exact spot of the press, and the entire geometry of the scene reflects that.

I had serious thoughts of doing that. In Swing. That never happened though. Here’s why.

In my mind, there were three big parts to actually doing something like that.

The first one was relatively simple. It would involve transitioning from looking at a single control at any point in time to creating a global scene with the entire view hierarchy. There were enough hooks in the API surface to track all the relevant changes to the UI, and even without that you could always say that applications must opt into this mode and call some kind of API that you provide when they are “ready” for you to build that graph.

The second one was also relatively simple. I would need to generate the meshes for all controls. Some are simple (buttons, progress bars), some might be trickier (check marks, sliders). But nothing too challenging. Mostly busy work.

But the last one was the effective non-starter. How to actually create the final render of the entire window with acceptable performance? Writing my own 3D engine was kind of out of the question. I knew just enough of what was involved to not even begin down that path. So that left me with OpenGL.

JOGL was around at the time, and had nice momentum behind it. They were gearing up to provide bindings for OpenGL 2.0. There was a lot of activity on the mailing lists. Java3D was another alternative under similarly active development. There was even talk of merging the two. And so I started looking into a simple proof of concept – a simple JOGL demo on my trusty Windows box.

Around that time (early 2007) Ben Galbraith announced the first (and, in retrospect, the only) Desktop Matters conference in downtown San Jose. I left a comment on that announcement. He asked me whether I wanted to make a short presentation on one of my projects. I was quite happy to do so. That was my first public presentation [thanks for the encouragement, by the way!]

It was a nice gathering. Around 100 people, I’d say. And they had quite a few people from the desktop client team at Sun available for informal Q&A. Chris Campbell was my hero at the time (no offense, Chet). The dude was slinging code left and right, showing a lot of great things that could be done with Java2D. He was also working on hardware acceleration for a lot of those APIs. If I remember correctly, he was talking a lot about doing various acceleration on top of OpenGL and Direct3D. Who better to validate the overall approach of doing this thing I wanted to do than him?

I managed to grab him for a few moments. I outlined my thinking. He was polite. He said that it sounded about right. That was just enough encouragement for me.

So after the conference was over I got to actual work. My first private demo was to render a colored sphere. And it looked horrible. It had jagged edges all around it. And it also had visible seams running all over the sphere. I could see the tessellation model before my eyes. It was quite bad.

So I fired off an email to the mailing list. Not about my grand vision, but rather about this specific thing: how to make a sphere look like a sphere, with no jaggies and no tessellation. And they told me to get a “real” graphics card, because whatever integrated graphics card I had on the motherboard was no good for any kind of OpenGL work. And that’s where I stopped.

What’s the point of even thinking of going down that road if you must have an expensive graphics card? It might be OK for a demo. It might be OK if I’m scratching my own itch and showing off my skills with some kind of a thing that runs well on my machine [TM]. But if it can’t be used on “everyday” computers that don’t have those fancy hardware components, it’s a no-go for me.

You might say that I chickened out. I had this grand vision, and folded at the first sign of trouble. But that was – and still remains – my main issue with anything that ends with “GL”: the never-quite-there promise of commodity hardware availability that is “just around the corner” – while in the meantime you need this very particular combination of hardware components, drivers and other related software. And oh, even if you do have a beefy graphics card, unfortunately it has this driver bug that crashes the entire thing, so you might want to either bug the vendor to fix it, or just disable the whole thing altogether.

Things might have been different. I had a lot of spare time back then. I might have gone down the road of biting the bullet and buying that graphics card (although, as mentioned above, it was not about my own cost, but rather about the reach of the final library). I might have had this thing done in some form or another. Can you imagine buttons reflecting other buttons reflecting the mouse cursor passing above them, and rippling as you press them? With the ripple reflected all around that button, and being reflected back in it?

So that never happened. And now it’s all about flat. Flat this. Flat that. Flat *ALL* the things!