After talking with Alex Imrie about usability, it’s time to ask her a few questions about one of the tools her company is working on – GUIdancer. Following a similar interview with Alex Ruiz, the creator of FEST, this interview delves deeper into the subject of testing desktop and web UIs.
Tell us a little bit about yourself

I’m Alex Imrie, and I work at BREDEX GmbH in Braunschweig, Germany. I have various roles at the company including Marketing, customer demonstrations, training and support as well as documentation and conceptual design for some of our software. I also do some testing with our automated test tool, GUIdancer.

What is GUIdancer?

GUIdancer is a testing tool for automating functional tests through the GUI. In essence, tests that are usually performed manually can be automated with GUIdancer. We currently support applications with Java (Swing, SWT/RCP) and HTML user interfaces. GUIdancer is a black-box tool and differs from other similar tools in that it uses the keyword-driven approach to testing. Keyword-driven testing is a method which is very close to the principles of software development without actually requiring that code be written. Because GUI tests consist of the same repeated actions, there is a focus on reusability. Tests can be created from a running application, or in parallel with software development, independently of an application under test, from a library of actions by drag & drop. Each module is named according to the actions it executes, and can be reused (referenced) throughout the test. This reusability means that tests grow quickly and are easy to maintain, because central changes update all the instances where a module was reused.

Why did you decide to focus mainly on testing the Java UIs?

Since BREDEX was founded in 1987, most of our projects have involved user interfaces, so there has been a focus on GUI testing since the very early days. From 1995 we specialised in Java, so when we decided to write our own test tool, the choice of which technology to start with was obvious. The irony is that the first toolkit we supported was Swing, while GUIdancer itself is written in RCP. Since the release of version 2.0 we’ve been able to test GUIdancer with GUIdancer, and we have also added support for HTML testing. The architecture of GUIdancer means that any interface can be tested; it’s just a question of deciding which direction we plan to go in next.

UI testing doesn’t seem to receive its share of the “limelight”, even in the currently popular test-driven development paradigm. Is it too difficult or is it just seen as less important?

I think that there is certainly the perception that it is too difficult. A lot of people have been burned by failed attempts or have started with the wrong expectations of functional test automation. UI testing, even in the test-driven development paradigm, is by no means impossible. It’s just important to keep realistic aims in mind – automation of repetitive tasks first, for example, or simply having stable regression tests for core features that run as soon as a new piece of the software becomes available. Sure, there are difficulties with functional testing, but these can be avoided by taking the time to identify clear goals and design and plan the tests. With GUIdancer, we see tester-to-developer ratios on a project as low as 1:10, so the tool’s support for structure and planning pays off well.

Having said that, I think that the importance of UI testing also tends to be forgotten. It is certainly possible to test a great deal with JUnit, for example, and such tests are incredibly important. However, there is also the need to test the application from the user’s perspective. Can the simplest use case be easily completed via the GUI? Can the application be brought into an irregular state by user actions? On another important level – does the application even do what the customer ordered? These are areas where UI testing really shines, and where JUnit alone doesn’t suffice.

Should testing infrastructure be part of a UI toolkit or is this better left to interested third parties?

As an interested third party, I fear my answer may be somewhat biased! There is a certain charm to a centralised test framework, but I think that third parties are better positioned to know how the toolkit is used in practice and what deviations from the standard are common. There is also the argument that different teams and organisations use different approaches to testing and even different skillsets in the testing team. In all aspects, I believe, there is simply too much variation for a central infrastructure.

Have you looked into the scenegraph approach to building UIs in JavaFX? Does it present significant challenges to existing Java UI testing toolkits?

We haven’t looked into it in detail, but I think the scenegraph approach could pose certain challenges, yes. The animation aspects would mean that timing and movement have to be considered in the tests – there would have to be some pretty good synchronization to ensure robustness, I think.

Do you see desktop applications as a dying breed, given all the significant advantages of browser-hosted solutions?

Web applications are certainly very popular and I doubt that this popularity is going to end soon. I also doubt, though, that they will completely replace desktop applications. There is still a strong demand for local applications, which do not need the various capabilities (and complications) that come with browser solutions. One reason I quite like desktop applications is that they are generally more ergonomic and usable. I think web applications have a lot of catching up to do in this respect. Local applications have better dialog support and don’t need to be manually refreshed quite as often. I also see many web applications that pack way too many things onto one page, so that scrolling (in all directions) is unavoidable.

Would you like to share the future plans for GUIdancer?

We are working on version 3.1 at the moment, which will be released in the second week of July. This release will see the introduction of automated testing for GEF applications, which I’m very excited about! GUIdancer 3.1 will also be compatible with the Eclipse Galileo release. There will be a few more goodies too, like more supported actions on tables and better support for native dialogs. Versions 3.2 and 4.0 are already being planned, with more toolkits and browsers on the list, as well as a test execution manager and more possibilities for managing test data.

Today I’m going to talk about setting up the development environment for running the augmented reality demo shown in this video from my previous post:

Here are the steps:

  • Download and install the Java Media Framework (JMF)
  • Download Java3D and place the native / Java libraries in the JDK installation folder
  • Download NYArToolkit for Java and unzip it somewhere on your machine
  • Download Trident
  • Print out one of the PDF markers in the NYArToolkit/Data folder. For my demo, I’m using pattHiro.pdf
  • Import all the projects under NYArToolkit into your Eclipse workspace. You can ignore the ones for JOGL and QuickTime for this specific demo.
  • In these projects, tweak the build path to point to the JMF / Java3D jars on your system.
  • Plug in your web camera.
  • Run the jp.nyatla.nyartoolkit.jmf.sample.NyarToolkitLinkTest class to test that JMF has been installed correctly and can display the captured feed (see the snippet after this list for a quick standalone check).
  • Run the jp.nyatla.nyartoolkit.java3d.sample.NyARJava3D class to test the tracking capabilities of the NYArToolkit. Once the camera feed is showing, point the camera at the printed marker so that it is fully visible. Once you have it on your screen, a colored cube should appear. Try rotating, moving and tilting the marker printout – all the while keeping it in the frame – to verify that the cube is properly oriented.
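
If the first sample does not come up, it can be hard to tell whether the problem is in the JMF installation or in the toolkit projects. Here is a small standalone sanity check – my own sketch, not part of NYArToolkit – that asks JMF for the capture devices it can see; if it prints zero devices, JMF does not detect your webcam:

import java.util.Vector;

import javax.media.CaptureDeviceManager;
import javax.media.format.VideoFormat;

public class WebcamCheck {
   public static void main(String[] args) {
      // ask the JMF registry for all capture devices that deliver RGB video
      Vector devices = CaptureDeviceManager.getDeviceList(new VideoFormat(
         VideoFormat.RGB));
      System.out.println("RGB capture devices found: " + devices.size());
      for (Object device : devices) {
         System.out.println("   " + device);
      }
   }
}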

The demo from the video above is available as a part of the new Marble project. Here is how to set it up:

  • Sync the latest SVN tip of Marble.
  • Create an Eclipse project for Marble. It should have dependencies on NYArToolkit, NYArToolkit.utils.jmf, NYArToolkit.utils.java3d and Trident. It should also have jmf.jar, vecmath.jar, j3dcore.jar and j3dutils.jar in the build path.
  • Run the org.pushingpixels.marble.MarbleFireworks3D class. Follow the same instructions as above for pointing the webcam at the marker.

There’s not much to the code in this Marble demo. It follows the NyARJava3D class, but instead of static Java3D content (the color cube) it has a dynamic scene that is animated by Trident. Note that I’m definitely not an expert in Java3D and NYArToolkit, so there may well be a simpler way to do these animations. However, they are enough to get you started exploring animations in Java-powered augmented reality.

Each explosion volley is implemented by a collection of Explosion3D objects. Each such object models a single explosion “particle”. Here is the constructor of the Explosion3D class:

public Explosion3D(float x, float y, float z, Color color) {
   this.x = x;
   this.y = y;
   this.z = z;
   this.color = color;
   this.alpha = 1.0f;

   // transform group that positions this particle; it must explicitly
   // allow transform writes so that the timeline can move it later
   this.sphere3DTransformGroup = new TransformGroup();
   this.sphere3DTransformGroup
      .setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
   Transform3D mt = new Transform3D();
   mt.setTranslation(new Vector3d(this.x, this.y, this.z));
   this.sphere3DTransformGroup.setTransform(mt);

   // appearance that can be recolored and faded out at runtime
   this.appearance3D = new Appearance();
   this.appearance3D
      .setCapability(Appearance.ALLOW_TRANSPARENCY_ATTRIBUTES_WRITE);
   this.appearance3D
      .setCapability(Appearance.ALLOW_COLORING_ATTRIBUTES_WRITE);
   this.appearance3D.setColoringAttributes(new ColoringAttributes(color
      .getRed() / 255.0f, color.getGreen() / 255.0f,
      color.getBlue() / 255.0f, ColoringAttributes.SHADE_FLAT));
   this.appearance3D.setTransparencyAttributes(new TransparencyAttributes(
      TransparencyAttributes.BLENDED, 0.0f));

   // the single leaf - a small sphere
   this.sphere3D = new Sphere(0.002f, appearance3D);
   this.sphere3DTransformGroup.addChild(this.sphere3D);

   // branch group that can be attached to / detached from the live scene
   this.sphere3DBranchGroup = new BranchGroup();
   this.sphere3DBranchGroup.setCapability(BranchGroup.ALLOW_DETACH);
   this.sphere3DBranchGroup.addChild(this.sphere3DTransformGroup);
}

Here, we have a bunch of Java3D code to create a group that can be dynamically changed at runtime. This group has only one leaf – the Sphere object.

As we’ll see later, the timeline behind this object changes its coordinates and the alpha (fading it out). Here is the relevant public setter for the alpha property:

public void setAlpha(float alpha) {
   this.alpha = alpha;

   // Java3D transparency is the inverse of alpha: 0.0 is fully opaque,
   // 1.0 is fully transparent
   this.appearance3D.setTransparencyAttributes(new TransparencyAttributes(
      TransparencyAttributes.BLENDED, 1.0f - alpha));
}

Not much here – just updating the transparency of the underlying Java3D Sphere object. The setters for the coordinates are quite similar:

public void setX(float x) {
   this.x = x;

   Transform3D mt = new Transform3D();
   mt.setTranslation(new Vector3d(this.x, this.y, this.z));
   this.sphere3DTransformGroup.setTransform(mt);
}

public void setY(float y) {
   this.y = y;
   Transform3D mt = new Transform3D();
   mt.setTranslation(new Vector3d(this.x, this.y, this.z));
   this.sphere3DTransformGroup.setTransform(mt);
}

public void setZ(float z) {
   this.z = z;
   Transform3D mt = new Transform3D();
   mt.setTranslation(new Vector3d(this.x, this.y, this.z));
   this.sphere3DTransformGroup.setTransform(mt);
}
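
All three setters rebuild exactly the same translation transform. One possible cleanup – purely a sketch, not how the Marble source is written – is to route them through a shared private helper:

// possible consolidation (a sketch, not the actual Marble code): rebuild
// the translation once, from whichever coordinate has just changed
private void updateTranslation() {
   Transform3D mt = new Transform3D();
   mt.setTranslation(new Vector3d(this.x, this.y, this.z));
   this.sphere3DTransformGroup.setTransform(mt);
}

public void setX(float x) {
   this.x = x;
   updateTranslation();
}

public void setY(float y) {
   this.y = y;
   updateTranslation();
}

public void setZ(float z) {
   this.z = z;
   updateTranslation();
}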

The main class is MarbleFireworks3D. Its constructor is rather lengthy and has a few major parts. The first part initializes the camera and marker data for the NYArToolkit core:

// marker pattern (16x16 resolution) and camera calibration parameters
NyARCode ar_code = new NyARCode(16, 16);
ar_code.loadARPattFromFile(CARCODE_FILE);
ar_param = new J3dNyARParam();
ar_param.loadARParamFromFile(PARAM_FILE);
ar_param.changeScreenSize(WIDTH, HEIGHT);

Following that, there’s a bunch of Java3D code that initializes the universe, locale, platform, body and environment, and creates the main transformation group. The interesting part is the code that creates the main scene group which will hold the dynamic collection of Explosion3D groups:

mainSceneGroup = new TransformGroup();
mainSceneGroup.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
// allow attaching / detaching explosion branches while the scene is live
mainSceneGroup.setCapability(Group.ALLOW_CHILDREN_EXTEND);
mainSceneGroup.setCapability(Group.ALLOW_CHILDREN_WRITE);
root.addChild(mainSceneGroup);

// behavior that tracks a single marker at 30 frames per second
nya_behavior = new NyARSingleMarkerBehaviorHolder(ar_param, 30f,
   ar_code, 0.08);
nya_behavior.setTransformGroup(mainSceneGroup);
nya_behavior.setBackGround(background);

root.addChild(nya_behavior.getBehavior());
nya_behavior.setUpdateListener(this);

locale.addBranchGraph(root);

The NyARSingleMarkerBehaviorHolder is a helper class from the NYArToolkit.utils.java3d project. It tracks the transformation matrix computed by NYArToolkit based on the current location of the marker and updates the transformation set on the main scene group. As you will see later, there is no explicit handling of the marker tracking in the demo code – only creation, update and deletion of the Explosion3D objects.
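
Conceptually, the per-frame work of the behavior boils down to something like the following sketch (a hypothetical method with made-up names, not the actual NYArToolkit source):

// hypothetical sketch of the behavior's per-frame work: convert the 4x4
// pose matrix computed for the marker (javax.vecmath.Matrix4d) into a
// Java3D transform and push it onto the group holding the explosions
private void onMarkerPose(Matrix4d markerPose) {
   Transform3D t = new Transform3D(markerPose);
   mainSceneGroup.setTransform(t);
}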

Finally, we create a looping thread that adds random firework explosions:

// start adding random explosions at random intervals
new Thread() {
   @Override
   public void run() {
      while (true) {
         try {
            // pause between 0.5 and 1.5 seconds between volleys
            Thread.sleep(500 + (int) (Math.random() * 1000));
            // pick a random explosion center
            float x = -1.0f + 2.0f * (float) Math.random();
            float y = -1.0f + 2.0f * (float) Math.random();
            float z = (float) Math.random();
            addFireworkNew(x * 5.0f, y * 5.0f, 5.0f + z * 12.0f);
         } catch (Exception exc) {
            // ignore and keep firing
         }
      }
   }
}.start();

The code in this method distributes small spheres roughly uniformly around the explosion center and moves them outwards. Each Explosion3D object is animated with a matching timeline. The timeline interpolates the alpha property, as well as the coordinates. While x and y are interpolated linearly, the interpolation of z takes gravity into account, making the explosion particles fall downwards: at timeline position t the z coordinate follows the parabola z + dz*t - 10*t*t. All the timelines are added to a parallel timeline scenario. Once a timeline starts playing, the matching branch group is added to the main scene graph. Once the timeline scenario is done, all the branch groups are removed from the main scene graph:

private void addFireworkNew(float x, final float y, final float z) {
   final TimelineScenario scenario = new TimelineScenario.Parallel();
   Set<Explosion3D> scenarioExplosions = new HashSet<Explosion3D>();

   float R = 6;
   int NUMBER = 20;
   // random color biased towards brighter green / blue components
   int r = (int) (255 * Math.random());
   int g = (int) (100 + 155 * Math.random());
   int b = (int) (50 + 205 * Math.random());
   Color color = new Color(r, g, b);
   // walk the latitudes of a sphere of radius R around the explosion center
   for (double latitude = -Math.PI / 2; latitude <= Math.PI / 2; latitude += 2
      * Math.PI / NUMBER) {
      final float dy = (float) (R * Math.sin(latitude));
      final float yFinal = y + dy;
      // radius of the current latitude ring; fewer particles near the poles
      float rSection = (float) Math.abs(R * Math.cos(latitude));
      int expCount = Math.max(0, (int) (NUMBER * rSection / R));
      for (int i = 0; i < expCount; i++) {
         float xFinal = (float) (x + rSection
            * Math.cos(i * 2.0 * Math.PI / expCount));
         final float dz = (float) (rSection
            * Math.sin(i * 2.0 * Math.PI / expCount));

         final Explosion3D explosion = new Explosion3D(x * SCALE, y * SCALE,
            z * SCALE, color);
         scenarioExplosions.add(explosion);

         // fade the particle out and move it linearly in x and y
         final Timeline expTimeline = new Timeline(explosion);
         expTimeline.addPropertyToInterpolate("alpha", 1.0f, 0.0f);
         expTimeline.addPropertyToInterpolate("x", x * SCALE, xFinal
            * SCALE);
         expTimeline.addPropertyToInterpolate("y", y * SCALE, yFinal
            * SCALE);
         expTimeline.addCallback(new TimelineCallbackAdapter() {
            @Override
            public void onTimelinePulse(float durationFraction,
               float timelinePosition) {
               // parabolic z trajectory - gravity pulls the particle down
               float t = timelinePosition;
               float zCurr = (z + dz * t - 10 * t * t) * SCALE;
               explosion.setZ(zCurr);
            }
         });
         expTimeline.setDuration(3000);

         expTimeline.addCallback(new TimelineCallbackAdapter() {
            @Override
            public void onTimelineStateChanged(TimelineState oldState,
               TimelineState newState, float durationFraction,
               float timelinePosition) {
               // attach the particle to the live scene once it starts playing
               if (newState == TimelineState.PLAYING_FORWARD) {
                  mainSceneGroup.addChild(explosion
                     .getSphere3DBranchGroup());
               }
            }
         });

         scenario.addScenarioActor(expTimeline);
      }
   }

   // remember the particles of this volley so that they can be removed
   // once the scenario is done
   synchronized (explosions) {
      explosions.put(scenario, scenarioExplosions);
   }
   scenario.addCallback(new TimelineScenarioCallback() {
      @Override
      public void onTimelineScenarioDone() {
         // detach all the particles of this volley from the live scene
         synchronized (explosions) {
            Set<Explosion3D> ended = explosions.remove(scenario);
            for (Explosion3D end : ended) {
               mainSceneGroup
                  .removeChild(end.getSphere3DBranchGroup());
            }
         }
      }
   });
   scenario.play();
}

The dependent libraries that are used here do the following:

  • JMF captures the webcam stream and provides the pixels to put on the screen
  • NYArToolkit processes the webcam stream, locates the marker and computes the matching transformation matrix
  • Java3D tracks the scene graph objects, handles the depth sorting and the painting of the scene
  • Trident animates the location and alpha of the explosion particles

Over the past few days I’ve been playing with adding animations to augmented reality applications. The code is in rough shape, since I’m juggling the relevant libraries for the first time, and I’m going to publish it over the next week. Here is the first video showing my current progress:

What is it using?

I’m not going to take credit for this demo. The vast majority of the work is done by JMF, NYArToolkit and Java3D (and they’re quite painful to set up, by the way). The actual visuals are amateur at best and can look much better. But at least it shows where you can take your Java today :)

Trident is an animation library for Java applications, and this week I’ve written about the concepts behind it and the APIs available to interested applications:
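
As a quick taste of the core API, here is a minimal sketch in the spirit of the documented basics – interpolating the foreground color of a Swing button over one second (the button and the property choice are just an illustration):

import java.awt.Color;

import javax.swing.JButton;

import org.pushingpixels.trident.Timeline;

public class TridentTaste {
   public static void main(String[] args) {
      // interpolate the "foreground" property from blue to red over one
      // second; Trident locates and calls the public setForeground()
      // setter on every timeline pulse
      JButton button = new JButton("Hello");
      Timeline rollover = new Timeline(button);
      rollover.addPropertyToInterpolate("foreground", Color.blue, Color.red);
      rollover.setDuration(1000);
      rollover.play();
   }
}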

What’s next? Version 1.0 (code-named Acumen) is right around the corner. The release candidate is scheduled for June 29, and the final release for July 13.

Trident is a new library, and its APIs need to be tested in real-world scenarios. While I have a few small test applications that illustrate specific API methods, as well as medium-sized demos (Onyx and Amber), there is always room for improvement.

Going forward, I intend to evolve Trident, and I already have a couple of post-1.0 features in the pipeline. Trident grew out of the internal animation layer of the Substance look-and-feel, and the next major release of Substance will be rewritten to use Trident – further testing the published APIs in real-world scenarios. In addition, the next major release of the Flamingo ribbon will add Trident-based animations where applicable.

Your input and feedback are always highly appreciated. Download the latest daily bits, and read the documentation. Subscribe to the mailing lists and let me know what is missing, and how the existing APIs can be improved. If you find a bug, report it in the issue tracker. If you want to take a look at the code, check out the SVN repository and subscribe to the “commits” mailing list.

Swing / SWT applications do not have to be boring. Trident aims to simplify the development of rich animation effects in Java-based UI applications, addressing both simple and complex scenarios. But it can only be as good as the applications that use it. So read the documentation, download the sources / binaries, integrate it in your applications and let me know what you think.