A new article titled Debugging Swing has been published today on java.net. It builds on previous work by Scott Delap and Alex Potochkin to provide even more tools for tracing the EDT violations that can lead to visual artifacts, unresponsive or frozen UIs, and infinite painting cycles.
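To give a flavor of what such tools are chasing (the snippet below is my own illustration, not code from the article), the classic violation is a worker thread that touches a realized Swing component directly instead of handing the update back to the event dispatch thread:

import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class EdtViolationDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("EDT demo");
            JLabel label = new JLabel("Waiting...");
            frame.add(label);
            frame.setSize(200, 100);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);

            new Thread(() -> {
                String result = fetchData(); // long-running work off the EDT - fine
                // WRONG: mutating a realized component from a worker thread
                // label.setText(result);
                // RIGHT: hand the update back to the event dispatch thread
                SwingUtilities.invokeLater(() -> label.setText(result));
            }).start();
        });
    }

    private static String fetchData() {
        try {
            Thread.sleep(2000); // simulate slow I/O
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
        return "Done";
    }
}

With the commented-out line re-enabled, the UI may still appear to work most of the time, which is exactly why automated EDT checks are worth having.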
If you’re interested in more articles on Swing, please post your suggestions in the comments. Specifically, I’m looking for the “everyday” problems that you face in regular business Swing applications. Note that the scope of an article is too small to talk about architecting a Swing application, so try to keep it to a specific topic, such as auto-completion support on comboboxes or writing a custom component.
Thanks to Eamonn McManus for his help on JMX-related code and to Chris Adamson for the editing.
It’s been about three months since Sun announced the JavaFX “family of products” at JavaOne. Based on the original work by Chris Oliver, it has been picked up by the powers that be, fed into the relentless PR machine and touted ever since as the next big thing on the desktop. It certainly has the technical potential, with all the engineers working on it (more on that later). And the aspirations are most surely lofty, positioning JavaFX against Flash, Flex, Apollo and SilverLight. My main concern is with the name.
Why does it have to have “Java” in it? The end customer doesn’t care how the application is written. He cares that it’s easy to install. He cares that it starts fast. He cares that it runs fast. He cares that it doesn’t crash on him. He cares that it doesn’t lose his data. He cares that it does what it is supposed to do.
Let’s look at the competition. Does anybody outside the development teams of the respective products know what languages Flex, Apollo and SilverLight are written in? I guess some mix of C and other languages. Do I care when I see a nice Flex / Flash site? Of course not. Do you ever hear Adobe talking about “bringing the full power of language X to the desktop”? What do users know about the full power of this language? Or, to be more precise, why would they care? As long as there is a one-click user-friendly installer, immediate startup and no crashing, they don’t care at all.
Another rule that JavaFX is blatantly violating is “underpromise and overdeliver”. No designer-friendly content authoring tools, buggy IDE plugins, an excruciatingly slow runtime and a constantly changing language definition. Of course, these are all coming (or at least promised to come), but promises are just words. While the competition is smart enough to talk about features only after they are implemented, Sun’s marketing machine is effectively ruining any chance JavaFX had to compete by selling promises.
The developers are, of course, eager to download the bits and play with them. Quick frustration and “I hope it gets better” ensue. Does that remind you of anything? Swing still carries the burden of being perceived as slow, buggy and odd-looking, even after all the effort that went into it in Tiger and Mustang. NetBeans long ago lost the “war” to Eclipse, and the attempts to catch up in the latest 6.0 version are not going to change that. If Sun wanted JavaFX to follow the same perception patterns, it has most surely succeeded.
What can be done? First of all, a major shakeup in the PR / marketing department. Talk about things that are done, not about things that you’re going to do. The latter might work when you’re working on a product that doesn’t have any competition (a brand-new marketplace), but doing it in a saturated market with very experienced players will quickly backfire, and you will carry the burden of a bad reputation for a very long time. Second, don’t focus on the technology behind the product. Rebrand it and lose the word “Java”. And while you’re at it, lose the “FX” as well. Third, stabilize the language, squeeze every bit of performance out of it and create a comprehensive suite of tools. Do all of this before you make any public announcement (look at Apple if you need an example). When creating the tools, have graphic designers on your team. Learn from Microsoft, which had a professional designer as part of the Blend team. If you want to compete against Adobe and Microsoft, do not let developers design content authoring tools.
And by the way, while we’re at it, don’t rename your stock ticker either.
A few days ago I was adding some extra logic to one of the modules that I maintain. Unlike the UI-related modules, this one has quite a few test cases that test various rules (about 110-120). And as I was adding some code, I caught myself thinking this: instead of proving the conditions in my head (or on paper) before writing the code, let me just go ahead and write something that looks correct and wait for the test cases to tell me whether it is. This is wrong on so many levels, and thankfully the test cases that went through this path failed. But it underscores the imaginary safety of test-driven development.
Going back about 20 years, programming practices were much more robust (which doesn’t necessarily mean that there were fewer errors in the resulting code). In most development environments you had one big mainframe computer, and you had your “slot” every few hours to run the latest version of your code. So you had to be pretty sure that you had ironed out all the obvious bugs before your slot came up, because the next chance to check the fixes would only come in three or four hours. The end result is that even with the “harmful” goto statements, spaghetti code and lack of formally defined design patterns, developers just sat in front of their hand-written (or printed) code and traced all possible execution paths before sending the code to the computer.
Since then, hardware has become so cheap and powerful that the present generation doesn’t even think about the “save-compile-run” cycle anymore. With incremental compilation in Eclipse, you don’t even notice that the code is being compiled (unless you touch code that affects a lot of other places). And so you might find yourself rushing to code before properly thinking about the design and all the flows. This is especially painful with test-driven development and agile practices that encourage this style, which I call lazy programming.
I previously referred to this as the imaginary safety of test-driven development. As long as all the tests pass, the software is working as it should. Don’t worry about dynamic typing and the problems that can only be found at runtime; if you have good test coverage, you’ll never hit these problems. Which brings me to the question: what is good test coverage?
Of course, we have nice tools such as Clover and Emma that produce visually appealing coverage reports. Once you get 100% of the lines covered by your unit / integration / … tests, you’re done, right? Not so fast, and this brings me back to a topic that I studied for quite some time during my last two years at university: formal verification.
This is quite an interesting and challenging field: given the definition of a problem and a solution, decide whether the solution really solves the problem. It works really well on hardware (especially VLSI), and is in fact an indispensable tool for verifying hardware chips (the FDIV bug is pretty much the only significant commodity hardware bug I have heard of in the last ten years). While some of the techniques work on finite-state automata, others have been extended to handle parametrized and infinite domains. However, this still doesn’t scale well from hardware to software.
Unless we’re talking about primitive programs, the problem domain is infinite. And this is especially magnified nowadays with the shift to multi-core systems and faulty distributed environments. Just having 100% line coverage doesn’t mean anything. Not only that, but for more complicated systems you might have a hard time coming up with the correct test cases (expected results); while this is true for traditional upfront design, it is even more so for the agile “refactor as you go while we can live without explicit business behavior” techniques. “All my tests pass” means exactly that: “all your tests pass”. Nothing less and nothing more. You can cover only so much of an infinite domain with a finite set of test cases.
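To make the coverage point concrete, here is a deliberately trivial, hypothetical Java example of my own: a single test executes every line of average() and passes, so the coverage report shows a reassuring 100%, yet the method is broken for a huge part of its input domain.

public class CoverageIllusion {

    // Intended to return the arithmetic mean of two non-negative ints.
    static int average(int a, int b) {
        return (a + b) / 2; // bug: a + b can overflow int
    }

    public static void main(String[] args) {
        // The single "test": executes every line of average() and passes.
        System.out.println(average(2, 4) == 3);                    // true

        // An untested corner of the (effectively infinite) input domain:
        // the sum overflows and the "average" comes out negative.
        System.out.println(average(2_000_000_000, 2_000_000_000)); // -147483648
    }
}

The green bar and the 100% report are both telling the truth; they just aren’t telling you anything about the inputs you never exercised.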
Not all is lost, of course. Don’t blindly rely on unit tests and code coverage. Think about the code before you start pounding on the keyboard (and hopefully before you start pounding out those test case skeletons). An interesting approach explored over the last few years tries to address the “state explosion” of real-world programs with randomly generated tests (applied successfully to the Firefox JavaScript engine to find 280 bugs). This, of course, places even more burden on the test layer; since the test cases are randomly generated, it needs to provide a way to save failed test cases and rerun them later for a reproducible scenario.
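As a rough sketch of what such a test layer might look like (my own hypothetical harness, not the tooling used on Firefox), the important part is recording the random seed of every failing run so the exact inputs can be regenerated and replayed later:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class RandomizedHarness {

    // The property under test: average() should never be negative for
    // non-negative inputs. Same buggy method as in the previous sketch.
    static int average(int a, int b) {
        return (a + b) / 2;
    }

    // Runs one randomized check; the seed fully determines the inputs.
    static boolean holdsFor(long seed) {
        Random rnd = new Random(seed);
        int a = rnd.nextInt(Integer.MAX_VALUE);
        int b = rnd.nextInt(Integer.MAX_VALUE);
        return average(a, b) >= 0;
    }

    public static void main(String[] args) {
        List<Long> failingSeeds = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) {
            long seed = System.nanoTime() + i;
            if (!holdsFor(seed)) {
                failingSeeds.add(seed); // persist these for later reruns
            }
        }
        System.out.println("Failing seeds: " + failingSeeds);
    }
}

Rerunning holdsFor(seed) with any recorded seed regenerates exactly the same inputs, which is what turns a one-off random failure into a reproducible scenario.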