Testing Scout Applications

In our last post we announced our EclipseCon Boston talks about building mobile apps with Scout and about testing Scout applications. In this post we will elaborate a bit on the testing talk to make sure it gets the attention it deserves :-)

For the discussion about testing Scout applications, it is helpful to have a rough understanding of the architecture of a Scout client server application.

[Diagram: architecture of a Scout client/server application]

The Scout client application (on the left side in the above picture) contains a UI plugin, a client plugin (orange), and a shared plugin (green).

The UI plugin (the white cube) renders the UI of the modeled client using a specific technology (Swing, SWT, or RAP). The client plugin (the orange cube) contains the modeled client UI and is independent of the UI library used. A shared plugin (the green cube) contains common resources such as service interfaces, data transfer objects, and text translations, and is shipped with both the client and the server application. The server plugin (the blue cube) contains server logic such as persistence.

A possible data flow (the red dotted line) is, for example:

  1. Fetching data from the server
  2. Putting the data into transfer objects
  3. Sending the transfer objects to the client
  4. Importing the data from the transfer objects into the UI

Then, user input may flow back to the server using the same path in the other direction.
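The four steps above can be sketched in plain Java. Note that `PersonFormData`, `IPersonService`, and `PersonService` are hypothetical names used only for illustration; a real Scout application would use its generated form-data classes and service proxies instead of a direct method call.

```java
// Hypothetical transfer object, shipped in the shared plugin.
class PersonFormData {
    String name;
}

// Hypothetical service interface, also in the shared plugin.
interface IPersonService {
    PersonFormData load(long personId);
}

// Server-side implementation (server plugin): fetches data and
// puts it into the transfer object (steps 1 and 2).
class PersonService implements IPersonService {
    public PersonFormData load(long personId) {
        PersonFormData formData = new PersonFormData();
        formData.name = "Alice"; // would come from persistence
        return formData;
    }
}

public class DataFlowSketch {
    public static void main(String[] args) {
        // Client side: in a real application a service proxy sends the
        // transfer object over the wire (step 3); here we call directly.
        IPersonService service = new PersonService();
        PersonFormData formData = service.load(42L);
        // The data is then imported into the UI fields (step 4).
        System.out.println("Name field shows: " + formData.name);
    }
}
```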

To follow the discussion below, we need to add some more detail. As shown below, your application plugins depend on plugins from the underlying Scout framework. The diagram also gives an idea of which application parts communicate through the framework layer.

[Diagram: application plugins and their dependencies on the underlying Scout framework plugins]

What Should I Test?

Usually, everybody agrees on the importance of automated tests. However, agreeing on the correct granularity is not always easy (unit tests, functional tests, integration tests, automated UI tests…).

In an environment with limited resources and time, I suggest focusing your automated tests on your code, your business logic, and your use cases. This implies that testing the framework itself should not be your top priority.

Start by defining a clear scope for your tests, and start with the simple things first. Use JUnit tests for your (manually written) code that is independent of the framework and has no external dependencies. This works great for utility classes. You can even use this approach to testing as a design guideline (or as a hint to refactor existing code).

If we assume that we have implemented some input validation logic in the application's shared plugin, we can use JUnit to test it automatically without any external dependencies. This scenario is illustrated in the following diagram.

[Diagram: testing shared-plugin validation logic with JUnit, without external dependencies]

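As a minimal sketch of this scenario: `EmailValidator` is a hypothetical utility class that could live in the shared plugin, and the accompanying test is a plain JUnit 4 test with no server, database, or UI involved.

```java
import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical validation utility in the shared plugin; it has no
// dependency on the Scout framework or any external system.
class EmailValidator {
    static boolean isValid(String email) {
        if (email == null) {
            return false;
        }
        int at = email.indexOf('@');
        int dot = email.lastIndexOf('.');
        // exactly one '@', with a dot somewhere after it
        return at > 0
            && email.indexOf('@', at + 1) < 0
            && dot > at + 1
            && dot < email.length() - 1;
    }
}

// Plain JUnit test: fast, isolated, and trivially automated.
public class EmailValidatorTest {

    @Test
    public void acceptsWellFormedAddress() {
        assertTrue(EmailValidator.isValid("alice@example.org"));
    }

    @Test
    public void rejectsMalformedAddresses() {
        assertFalse(EmailValidator.isValid(null));
        assertFalse(EmailValidator.isValid("alice"));
        assertFalse(EmailValidator.isValid("alice@example"));
    }
}
```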
But sometimes you might want to test a larger part of the use case, one that also depends on the Scout framework. In this situation, the difficulty is identifying the system boundaries for your test: it might be easier to mock the server away (the yellow cube) and fake its responses than to load test data into a database to produce the desired test setup.
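One way to sketch this idea is a hand-written fake that implements the shared service interface. All names here (`IPersonService`, `FakePersonService`, `ContactLoader`) are hypothetical; a mocking library or Scout's own testing support could serve the same purpose.

```java
// Shared plugin: transfer object and service interface (hypothetical names).
class PersonFormData {
    String name;
}

interface IPersonService {
    PersonFormData load(long personId);
}

// Test fixture: a fake that replaces the real server proxy and
// returns a canned response instead of hitting a database.
class FakePersonService implements IPersonService {
    public PersonFormData load(long personId) {
        PersonFormData formData = new PersonFormData();
        formData.name = "Test Person " + personId;
        return formData;
    }
}

// Client-side logic under test, written against the interface so
// the fake can be injected in a test.
class ContactLoader {
    private final IPersonService service;

    ContactLoader(IPersonService service) {
        this.service = service;
    }

    String displayName(long personId) {
        PersonFormData formData = service.load(personId);
        return formData.name == null ? "(unknown)" : formData.name;
    }
}

public class MockedServerSketch {
    public static void main(String[] args) {
        // The client logic runs against the fake: no server needed.
        ContactLoader loader = new ContactLoader(new FakePersonService());
        System.out.println(loader.displayName(7L)); // prints "Test Person 7"
    }
}
```

Because `ContactLoader` depends only on the interface, the same code runs unchanged against the real server proxy in production and against the fake in tests.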

[Diagram: testing the client with the server mocked away]

Another approach is UI black-box testing. In this case, the whole application is treated as a black box. The goal of these tests is to ensure that the required workflows and use cases can be performed correctly. A tool like Jubula can be used for this purpose.

[Diagram: UI black-box testing of the whole application with Jubula]

Interested? Meet us in our talk: Testing a Scout Application with JUnit and Jubula. We would love to hear about your ideas and experiences.

See you in Boston.


Scout Links: Project Home, Forum, Wiki, Twitter, Instagram

One comment on this topic

  1. Jeremie Bresson
    wrote on 26 March 2013 at 22:05

    More information on the wiki: