Tuesday, March 12, 2013

UI Testing a Sencha App

Source: https://www.sencha.com/blog


A few months ago, I wrote a post titled Automating Unit Tests that covered how developers could write unit tests for their business logic and validate their JavaScript syntax. Understanding these concepts is essential when building an enterprise application: bugs must be caught before changes are pushed into production or catastrophic consequences may follow.
The one area that I did not cover in that post was the idea of “UI Testing” (also known as Integration Testing). After reading many of the comments to that post and hearing the community’s feedback, I wanted to address this topic by adding UI Tests to my demo Ext JS application and discussing strategies for enterprise application testing.

UI Testing: An Overview

As I mentioned in my earlier article, UI Tests are not the same thing as Unit Tests—but more often than not these ideas are confused.
UI Tests attempt to subjectively verify that elements on the screen behave (and often look) as expected, both statically (i.e. the flat render) and also dynamically (i.e. as users perform given actions). Unit Tests attempt to isolate small pieces of code and objectively verify application logic.
The key difference here is the subjective vs. objective nature of the tests.
Taking this idea a step further, we can break UI Tests into their constituent parts: QA Tests and Component Tests.
  • QA Tests simulate real-world interactions with your application as if a user is using the app.
  • Component Tests isolate independent (often reusable) pieces of your application to verify their display and behavior.
In this article, we’ll take a look at both types of UI Tests.

Common Problems with UI Testing

Selecting the Right Tool

Testing the look and the complex interactions of an application is an enormous task—and it’s no surprise that many web developers struggle to implement (and often abandon) UI Tests that adequately solve QA problems.
Perhaps the biggest hurdle that developers must address is selecting the best tool for the job. Some tools rely on XPath or CSS selectors to navigate the application; others require complicated server configurations to allow for test automation. At the end of the day, it is vitally important to select a tool that is flexible; QA Tests may be rewritten frequently as business requirements evolve and the app changes, so it’s important that the tests are easy to build and maintain.
In my experience, I have only found three tools that can address the needs for writing QA Tests against a Sencha application:
Each of these tools has specific pros and cons; having said that, I personally think Siesta is the easiest of them to set up, and Siesta’s API works seamlessly with Sencha’s frameworks.
Disclaimer: Other UI testing tools exist, and I do not claim to have used all of them; nor do I claim that Siesta is necessarily the “best” tool in this space. I simply offer an opinion based on my own experiences.

Mocking Data

One other important (and often overlooked) problem with UI Tests is that tests should typically not be written against live APIs. Regardless of the source, APIs are inherently unreliable: servers go down, networks experience latency, and unexpected bugs occur.
The purpose of UI Tests is to verify display and behavior—not that specific data exist at any given time. UI Tests should mock API data when possible, but this is not always an easy problem to solve. Some implementations use static data in place of AJAX requests; others redirect network calls to a mock API.
More than a handful of JavaScript mocking libraries exist, but I am not going to dive too deeply into the subject in this post. I simply want to raise awareness of the issue because it is a common source of frustration. I chose to use Sinon.js to stub the API calls for my sample application.
Having said all of that, you may have situations where you do want to use the live API—but I would normally only recommend that to verify a release is ready to be pushed live.
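To make this concrete, here is a minimal sketch of how Sinon.js can intercept ordinary AJAX traffic with its fake server; the endpoint URL, the response payload, and the Sinon 1.x API usage are assumptions for illustration, not code from the sample project.

var server = sinon.fakeServer.create();

//any XHR issued to this (hypothetical) endpoint now receives canned JSON instead of hitting the network
server.respondWith('GET', '/api/events', [
    200,
    { 'Content-Type' : 'application/json' },
    '{"results": [{ "id": 1, "name": "Ext JS Meetup" }]}'
]);

//...exercise the code under test, then flush any queued responses...
server.respond();

//restore the real XMLHttpRequest object once the test is finished
server.restore();

Note that JsonP requests never travel through XMLHttpRequest, which is why the sample application stubs Ext.data.JsonP.request() directly instead (see the QA Testing section below).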

Sample Application

In the sample Ext JS application, you’ll find a /ui-tests/ folder containing both QA (/app/) and Component (/ux/) tests written with Siesta.
Siesta Example

QA Testing

Under the /app/ folder, you can view the index.html file in your browser to see the Siesta interface with our QA Tests in place. The goal here is to launch the actual application and test the real-world interactions that users would expect. While the sample application is a relatively simple example, the two QA Tests demonstrate various ways in which we can test the behavior of the app as a whole.
The first test, titled “Test tabs for data in grids” (/app/tests/01_tabs.js), simply loads the application and checks to make sure the required views are correctly in place. Although this particular example is rudimentary, this sort of test could be very useful if your application dynamically builds its interface based on user roles, preferences, or some other logic.
The second test, titled “Test double-click functionality” (/app/tests/02_RsvpWindow.js), again loads the entire application. This time, we begin to simulate interactions with our grids and tabs to ensure the desired behaviors execute as expected.
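As a rough illustration (not the actual test file from the repo), a Siesta QA test for this kind of double-click interaction might look something like the following; the component queries and the assertion are assumptions based on the behavior described above.

StartTest(function (t) {
    t.chain(
        //wait for the events grid to be rendered by the launched application
        { waitForCQ : 'grid' },

        function (next, grids) {
            //simulate a double-click on the first row of the first grid
            t.doubleClick(grids[0].getView().getNode(0), next);
        },

        //the double-click should open a window (our RsvpWindow)
        { waitForCQ : 'window' },

        function (next, windows) {
            t.ok(windows[0].isVisible(), 'A window appeared after double-clicking a grid row');
            next();
        }
    );
});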
Take note of the fact that I use Sinon.js to stub the JsonP requests for these data stores. Doing this allows me to test the functionality of the application without the assumption that the live API is accessible and working properly. There are a number of ways you could accomplish this; I chose to override the behavior of Ext.data.JsonP.request() to automatically return mock data (see /ui-tests/app/api_stub.js).
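The actual /ui-tests/app/api_stub.js may be wired differently, but the basic idea looks roughly like this (Sinon 1.x style stub; the mock payload and its field names are made up for illustration):

//canned response shaped like the JSON the live API would return (fields are hypothetical)
var mockData = {
    results : [
        { id : 1, name : 'January Meetup', rsvpcount : 42 },
        { id : 2, name : 'February Meetup', rsvpcount : 17 }
    ]
};

//replace Ext.data.JsonP.request with a stub that immediately "responds" with the canned data
sinon.stub(Ext.data.JsonP, 'request', function (options) {
    if (options.success) {
        options.success.call(options.scope || window, mockData);
    }
    if (options.callback) {
        options.callback.call(options.scope || window, true, mockData);
    }
});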

Component Tests

Under the /ux/ folder, you can view the index.html file in your browser to see the Siesta interface with our Component Tests in place. Unlike our QA Tests, the goal in this case is to isolate individual components and test their behaviors. By testing the components outside of the greater application, we can isolate known bugs and guarantee future compatibility.
The only test I’ve written for this sample application (/ux/tests/01_RsvpWindow.js) examines the display and behavior of our RsvpWindow view. This view extends the Ext.window.Window class, and contains a grid with a custom column renderer. Using Siesta, we create an isolated instance of this view and verify that the cell renderer behaves as expected.
 
   var defaultWin = Ext.create('ChicagoMeetup.view.RsvpWindow');
 
   //the RsvpWindow contains the grid we want to inspect
   //(assumed lookup - the original snippet referenced the grid variable directly)
   var grid = defaultWin.down('grid');
 
   //I know there are only 2 rows in this grid because I mocked the API
   var firstRow = Ext.get(grid.getView().getNode(0)),
       secondRow = Ext.get(grid.getView().getNode(1));
 
   //the custom column renderer emits an <img> tag for some rows
   //(regex reconstructed - adjust it to match your renderer's actual markup)
   var regExp = /<img/;
   var innerHtml = firstRow.query('.x-grid-cell-inner')[1].innerHTML;
 
   t.is(regExp.test(innerHtml), true, 'First row should contain an image in the second column');
 
   innerHtml = secondRow.query('.x-grid-cell-inner')[1].innerHTML;
   t.is(regExp.test(innerHtml), false, 'Second row should NOT contain an image in the second column');
 
Using Siesta’s robust testing API, we could simulate a variety of other interactions with this component (click, drag, etc). Although the RsvpWindow component isn’t all that exciting, you can imagine the possibilities for your own custom UX classes.

Conclusion

Building tests for a web application can be a difficult task, but when done correctly the payoff for your efforts is invaluable. In closing, I would like to reiterate these important points:
  • Unit Tests and UI Tests are not the same thing. Both are valuable ways to maintain stable code, but they solve different problems.
  • Mind your syntax. Just because your code runs correctly in one browser doesn’t mean it will run correctly in every browser.
  • Test your custom components. It’s alright to assume your framework works as expected—but don’t assume your UX is written correctly.
  • Don’t shoot for 100% code coverage. You may want to test the entire application, but beware the cost of maintaining an elaborate test suite.
This series of posts on unit testing is based on my own personal experience helping Sencha’s customers solve common problems. You can learn more in a webinar I'll be hosting on January 31st alongside Mats Bryntse, which introduces developers to realistic methods for testing Ext JS and/or Sencha Touch applications. In addition, I invite you to share your own thoughts and experiences below—we can help each other make web application testing an easier goal to achieve.
UPDATE (1/2013): I wrote a follow-up post titled UI Testing a Sencha App which expands upon the content in this article. I have also updated the sample project on GitHub - so there are some slight differences in the examples posted here compared to the updated repo, although the concepts are exactly the same. If you have questions or comments, please start a thread on the Sencha forums and I'll be sure to respond!

Automating Unit Tests


Source: http://www.sencha.com/blog/
One of the first questions I always hear when starting with a new client is “How can I build unit tests for my application?”
It’s obvious that many people understand the benefits of unit tests—developers want to minimize the number of bugs in their code and managers want to reduce the amount of time required to test an application before release. Although the concept of unit testing has existed for years, software teams are only now beginning to explore building tests for their Rich Internet Applications (RIAs).
In my experience, it’s obvious that most people don’t fully understand what it means to build unit tests for RIAs. Considering there are few tutorials explaining how to accomplish this, I wanted to share a recent experience with the community in hopes that it will stimulate ideas and discussion.
Keep an open mind as you follow along with this post and think about how these examples might fit into your own processes. This implementation is one of any number of possible solutions, and I don’t attempt to solve every problem. That being said, adding unit tests to your project will produce cleaner, more stable code and will reinforce coding standards throughout your organization.

I. Unit Testing Web Applications: An Overview

Layers Diagram
In the diagram above, the “Presentation Layer” represents the user interface - which in the case of RIAs is frequently a JavaScript-driven DOM. This is the case for most Ext JS and Sencha Touch applications.
When building unit tests for RIAs, it’s important to remember that the user interface (the “presentation layer”) is very different from the supporting layers of the application.
The presentation layer is unique for several reasons:
  • The UI accepts random input from the user and is expected to look AND behave the same across many environments (which is both subjective AND objective).
  • One language (JavaScript) performs multiple tasks (logic and presentation).
  • Because JavaScript code is never compiled, syntax errors are often not discovered until runtime.
Thus developers more familiar with server-side unit tests often fail to realize the complexity involved in writing unit tests for JavaScript-driven applications. As a result, we cannot build unit tests for the presentation layer in the same way we would for the underlying data-driven tiers of an application.
Because the presentation layer has multiple responsibilities, it is important to separate unit tests into the following three areas:
  • Syntax Checks – Syntax checks are not exactly unit tests because there is no logic being tested. Instead we analyze our code for syntactical errors – errors that may or may not be caught before application execution.
  • Unit Tests – True to its contemporary meaning within software development, unit tests are a series of objective logic tests. Unit tests attempt to isolate small pieces of our code: given specific inputs, unit tests expect specific outputs. In this manner unit tests are essentially mathematical proofs (remember those from high school?) that confirm what our business logic is supposed to do.
  • UI Tests (aka Integration Tests) – UI Tests are not the same as unit tests. UI Tests attempt to subjectively verify that elements on the screen behave (and/or look) as expected when a user performs a given action. There is no math or logic involved: the tests render the environment as a whole (containing all runtime dependencies) and wait to verify that the DOM has changed in a desired way. You can think of these tests as a robot manually testing the application.
I do not attempt to solve the problem of UI Tests in this post, but helpful links are included at the end if you're interested.

II. A Sample Application

The key to implementing unit tests is to automate the process - if the tests aren’t run every time the code changes (or at some regular interval), the tests become ineffective and useless.
I have created a sample project that uses shell scripts to run each individual part (syntax checks and unit tests). These scripts return status codes upon success and failure.
The specific process to integrate these automated tests may be quite different at every company. The shell scripts might provide automation by:
  • being integrated into a build process (e.g. via Ant)
  • being integrated into source control hooks (e.g. a Git pre-commit hook on your local machine)

III. Pre-Requisites

The sample project (and the concepts explained below) assumes a certain degree of knowledge of the following tools:
  • Sencha SDK Tools (v2.0.0b3) - a suite of automation tools assisting with application creation, development and deployment
  • PhantomJS (v1.4.1) - a headless WebKit browser, controllable via an easy-to-use JavaScript API
  • JSLint (v2012-05-09) - a JavaScript code quality tool
  • PhantomLint (v1.2.1) - an add-on to PhantomJS that automatically lints all JavaScript files in a project
  • Jasmine (v1.2.0) - a behavior-driven development framework for testing JavaScript code
The shell scripts in the sample project target Mac OS X/Linux users; however, Windows batch (.bat) files or Ant scripts could easily accomplish the same thing.
Also, if you're new to the world of unit testing I would highly recommend the book Test Driven JavaScript Development by Christian Johansen.

IV. Syntax Checks

While there are a variety of syntax-checking tools available for JavaScript, perhaps the most widely recognized is JSLint. Those of you familiar with the tool should remember that “JSLint will hurt your feelings” - it complains about everything! On the other hand, it is also highly configurable and helps keep our code clean, stable and consistent.
Because JSLint is written in JavaScript, it is traditionally run in the browser on Douglas Crockford’s website (http://www.jslint.com). Although this manual process is sometimes convenient, it is difficult to run against multiple files and impossible to automate.
A better solution is to use PhantomJS - a headless WebKit browser that provides the web environment necessary to run JSLint and our own JavaScript code. Additionally, PhantomJS can access the filesystem; this allows us to run multiple files through JSLint and report back a success or failure status.
To further simplify the process of checking all of our JavaScript files, I have incorporated a pet project of mine called PhantomLint which logs errors to an output file. In this way, a failure status can alert developers to take corrective action.
Taking a look in the sample project under /tests/, you should see a shell script named “run_lint.sh”. This script launches the PhantomJS environment and initializes PhantomLint for us (JSLint-Runner.js). Given its configuration options, PhantomLint then dives into the filesystem to test our code against JSLint. Any errors are then output to a text file.
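For reference, here is a heavily simplified sketch of what a PhantomJS-based lint runner can look like; the real JSLint-Runner.js and PhantomLint code in the sample project are more involved, and the file paths below are placeholders.

var fs = require('fs');

//load Douglas Crockford's jslint.js, which defines the global JSLINT function
phantom.injectJs('jslint.js');

var files  = ['../app/MeetupApiUtil.js'],   //PhantomLint walks the filesystem rather than hard-coding paths
    errors = [];

files.forEach(function (file) {
    var source = fs.read(file);

    if (!JSLINT(source, { browser : true })) {
        JSLINT.errors.forEach(function (err) {
            if (err) {
                errors.push(file + ' line ' + err.line + ': ' + err.reason);
            }
        });
    }
});

if (errors.length) {
    fs.write('lint_errors.txt', errors.join('\n'), 'w');   //log the errors for developers
    phantom.exit(1);   //a non-zero exit status can fail the build or pre-commit hook
} else {
    phantom.exit(0);
}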
If you run the shell script, you should notice a single error in our application code:
 
../app/MeetupApiUtil.js
   Line #: 37
   Char #: 21
   Reason: Unexpected 'else' after 'return'.
 
In this case, JSLint tells us that we have a redundant “else” statement.

V. Unit Tests

There are a variety of JavaScript unit test frameworks currently available, and I chose Jasmine for this sample app. Why Jasmine? It is one of the more popular frameworks currently available, it has an active development community, and Sencha uses it internally to test our own code. Jasmine also has a very fluent API - this makes our test suite easy to understand. By nesting test conditions appropriately, we can build very powerful and descriptive test suites.
The tests can be run locally by opening the “spec_runner.html” file under the /tests/ folder in your browser. The individual test files are located under the /tests/app/ directory.
Let’s start by looking at the class ChicagoMeetup.MeetupApiUtil. This class is ideal for unit testing because it has very few dependencies and doesn’t directly interact with the DOM; it is simply a collection of utility methods that perform logical operations.
Taking a look at the tests inside /tests/app/MeetupApiUtil.js you can see how the unit tests objectively analyze the methods for this class. Each test condition provides a specific input value and expects a specific output value.
 
describe('ChicagoMeetup.MeetupApiUtil', function() {
 
    describe('getUsersUrl() method', function() {
 
        it('should be a function', function() {
            expect(typeof ChicagoMeetup.MeetupApiUtil.getUsersUrl).toEqual('function');
        });
 
        it('should return a string', function() {
            expect(typeof ChicagoMeetup.MeetupApiUtil.getUsersUrl()).toEqual('string');
        });
 
    });
 
    //etc...
});
 
The ChicagoMeetup.MeetupApiUtil class and its related unit tests are admittedly simple - in fact, there’s very little logic involved. More often than not, we want to build unit tests for custom components in our Sencha applications. How can we achieve this?
Consider the class ChicagoMeetup.view.Events in our sample application. This is a custom class, extended from Ext.grid.Panel, that contains some specific methods and behavior for our app.
In our test code (/tests/app/view/Events.js) we first create setup and teardown methods that provide fresh instances of our custom component for each test case. We do this to avoid polluting our test environment with abandoned objects and DOM elements.
 
describe('ChicagoMeetup.view.Events', function() {
 
    //reusable scoped variable
    var eventGrid = null;
 
    //setup/teardown
    beforeEach(function() {
        //create a fresh grid for every test to avoid test pollution
        eventGrid = Ext.create('ChicagoMeetup.view.Events', {
            renderTo : 'test' //see spec-runner.html to see where this is defined
        });
    });
 
    afterEach(function() {
        //destroy the grid after every test so we don't pollute the environment
        eventGrid.destroy();
    });
 
    it('should inherit from Ext.grid.Panel', function() {
        expect(eventGrid.isXType('grid')).toEqual(true);
    });
 
    //etc...
});
 
It is important to note that we temporarily render our custom component to the DOM for each test (via the “renderTo” config). We do this in order to test any logic that might depend on the DOM – but the key is to destroy these components in-between tests so the greater test environment is not polluted (negatively affecting subsequent tests).
Although we are rendering the component to the DOM, I have to re-emphasize that we are not using Jasmine to build UI (or integration) tests. Jasmine doesn’t care what the components look like - our unit tests are only here to analyze the business logic.
Now that we understand how to properly unit test our components, the next step is automating this process. Similar to how we automated our syntax checks, we will again use PhantomJS to run our unit tests and output failure messages to a log file.
Taking a look in the sample project under /tests/, you should see a shell script named “run_jasmine.sh”. This script launches PhantomJS and initializes the Jasmine parser for us (Jasmine-Runner.js). After our tests have run, any test failures are output to a text file.
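To give a feel for how those failures are collected, here is a simplified sketch of a custom Jasmine 1.x reporter; the actual Jasmine-Runner.js differs, and window.testFailures is simply an illustrative hand-off point for the PhantomJS script to read.

var failures = [];

jasmine.getEnv().addReporter({
    //called after each spec finishes running its expectations
    reportSpecResults : function (spec) {
        spec.results().getItems().forEach(function (item) {
            if (item.passed && !item.passed()) {
                failures.push(spec.getFullName() + ': ' + item.message);
            }
        });
    },

    //called once after the entire suite has finished
    reportRunnerResults : function () {
        //expose the failures so the PhantomJS script can grab them (e.g. via page.evaluate),
        //write them to a log file and exit with an appropriate status code
        window.testFailures = failures;
    }
});

jasmine.getEnv().execute();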
If you run the shell script, you should notice a single test failure:
 
Spec: linkRenderer() method
Description:
   - should return a string (HTML link snippet)
Failure Message:
   - Expected 'string' to equal 'function'.
 
In this case, Jasmine tells us that a test against linkRenderer() failed. We expected that this method would return a function, but our test case encountered a string. This particular example fails because the spec incorrectly expected “function” - which demonstrates that unit tests can contain errors themselves!
 
describe('linkRenderer() method', function() {
 
    it('should return a string (HTML link snippet)', function() {
        var testUrl = 'http://www.sencha.com';
 
        expect(typeof eventGrid.linkRenderer).toEqual('function');
        expect(typeof eventGrid.linkRenderer(testUrl)).toEqual('function'); //THIS SHOULD FAIL! We should have written toEqual('string')
 
        //TODO: more robust regular expression checks to ensure this is *actually* an HTML link tag, correctly formatted
    });
 
});
 
It is unrealistic and nearly impossible to achieve 100% code coverage. Because unit tests are designed to test small pieces of code in isolation, it becomes very difficult to write good tests for pieces with many dependencies or major DOM interactions. Try to focus on problem areas or base classes, and write tests that check for bugs as they pop up (to prevent regressions).

VI. UI Tests

As I mentioned earlier, I am not attempting to solve the problem of UI Tests in this post. Testing the look and the complex interactions of an application is a humongous task, and is a topic better suited for its own dedicated tutorial.
That being said, here are some of my thoughts on accomplishing UI Tests:
  • Building UI tests for dynamic RIAs is notoriously difficult because the DOM is always changing in unpredictable ways. Tools like Selenium use XPath or CSS selectors to verify “success” – but because the DOM is unpredictable, the tests written for these tools become very brittle. As a result, maintaining your tests often takes more time than creating new features.
  • Many prominent UI developers prefer “specification by example” over traditional UI tests for exactly these reasons. A good resource is the book “Specification by Example”, which recommends tools like Cucumber.
  • Siesta is becoming a more popular tool for automated UI testing and is certainly worth a look.

VII. Conclusion

Unit tests are an important part of the software development process. By clearly defining what we expect from our code, unit tests allow us to develop a high degree of confidence that our applications will function as intended. Automating our unit tests reduces the number of bugs and decreases the amount of time we need to spend manually testing our code.
Take a look at the sample project on GitHub and consider how this solution might integrate with your own processes. This post is meant to foster ideas and discussion within the community - so please chime in with questions, comments and solutions of your own! Feel free to fork the sample project and add or edit the examples!