Tuesday, June 3, 2014

Test Strategy in SharePoint: Part 2 – good layering to aid testability

copyright from: blog.goneopen.com

In Part 1 – testing poor layering is not good TDD – I argued that we need to find better ways to think about testing SharePoint wiring code that do not confuse unit and integration tests. In this post, I outline a layering strategy for solutions that resolves this problem. Rather than only one project for code and one for tests, I use 3 projects for tests and 4 for the code – this strategy is based on DDD layering and the test automation pyramid.
  • DDD layering projects: Domain, Infrastructure, Application and UI
  • Test projects: System, Integration and Unit
Note: this entry keeps code to brief sketches – the next post will give proper samples – but focuses on how the projects are organised within the Visual Studio solution and how they are sequenced when programming. I’ve included a task sheet that we use in Sprint Planning as a boilerplate list to mix and match the scope of features. Finally, I have a general rave on the need for disciplined test-first and test-last development.

Projects

Here’s a quick overview of the layers; take a look further down for a fuller description.
  • Domain: the project which has representations of the application domain and has no references to other libraries (particularly SharePoint)
  • Infrastructure: this project references Domain and has the technology-specific implementations. In this case, it has all the SharePoint API implementations
  • Application: this project is a very light orchestration layer. It is a way to get logic out of the UI layer to make it testable. Currently, we actually put all our javascript jQuery widgets in this project (I will post about that later) because we unit test (BDD-style) all our javascript and thus need to keep it away from the UI
  • UI: this is the wiring code for SharePoint but has little else – this will make more sense once you see that we integration test all SharePoint API code, which goes in Infrastructure, and that we unit test any models, services or validation, which we put in Domain. For example, with Event Receivers the code in the methods is rarely longer than a line or two (see the sketch after this list)
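To make the separation concrete, here is a rough sketch of how a single configuration concern might be split across the layers. The names (SiteDefinition, ISiteConfigurator, SiteConfigurator, TeamSiteFeatureReceiver and the Company.Intranet namespaces) are invented for illustration and are not from our code base: a contract and model in Domain, the SharePoint implementation in Infrastructure, and a one-or-two line call in the UI wiring.

// Domain: plain representations and contracts – no reference to the SharePoint API.
namespace Company.Intranet.Domain.Services
{
    public class SiteDefinition
    {
        public string Url { get; set; }
        public string Title { get; set; }
    }

    public interface ISiteConfigurator
    {
        void Configure(SiteDefinition definition);
    }
}

// Infrastructure: the technology-specific implementation, covered by integration tests.
namespace Company.Intranet.Infrastructure.Configuration
{
    using Company.Intranet.Domain.Services;
    using Microsoft.SharePoint;

    public class SiteConfigurator : ISiteConfigurator
    {
        public void Configure(SiteDefinition definition)
        {
            using (var site = new SPSite(definition.Url))
            using (var web = site.OpenWeb())
            {
                web.Title = definition.Title;
                web.Update();
            }
        }
    }
}

// UI: wiring only – a web-scoped feature receiver stays a line or two long.
namespace Company.Intranet.Ui.Features
{
    using Company.Intranet.Domain.Services;
    using Company.Intranet.Infrastructure.Configuration;
    using Microsoft.SharePoint;

    public class TeamSiteFeatureReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            var web = (SPWeb)properties.Feature.Parent;
            new SiteConfigurator().Configure(new SiteDefinition { Url = web.Url, Title = "Team Site" });
        }
    }
}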

Test Projects

  • System Acceptance Test: Business-focused tests that describe the system – these tests should live long term reasonably unchanged
  • System Smoke Test: Tests that can be run in any environment to confirm that it is up and running
  • Integration Test: Tests that have one dependency and one interaction, usually against a third-party API – in this case mainly the SharePoint API. These may create scenarios on each method
  • Unit Test: Tests that have no dependencies (or have them mocked out) – model tests, validations, service tests, exception handling (see the sketch after this list)
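For a feel of the difference in weight, a unit test at the bottom of the pyramid has no SharePoint dependency at all. A minimal NUnit-style sketch against a hypothetical Domain validator (SiteDefinitionValidator is made up, not real code from our solution):

using NUnit.Framework;
using Company.Intranet.Domain.Services;

[TestFixture]
public class SiteDefinitionValidationTest
{
    [Test]
    public void ADefinitionWithoutAUrlIsNotValid()
    {
        // Pure Domain objects: no SharePoint, no mocking framework, nothing to set up or tear down.
        var definition = new SiteDefinition { Url = "", Title = "Team Site" };

        // SiteDefinitionValidator is an invented Domain service used only for illustration.
        Assert.IsFalse(new SiteDefinitionValidator().IsValid(definition));
    }
}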

Solution Structure

Below is the source folder of code in the source repository (ie not lib/scripts/tools/). The solution file (.sln) lives in the src/ folder.
Taking a look below, we see our 4 layers with 3 test projects. In this sample layout, I have included folders which suggest that we have code around the provisioning and configuration of the site for deployment – see here for a description of our installation strategy. These functional areas exist across multiple projects: they have definitions in the Domain, implementations in the Infrastructure and both unit and integration tests.
I have also included Logging because central to any productivity gains in SharePoint is to use logging and avoid using a debugger. We now rarely attach a debugger for development, and if we do it is no longer our first tactic, as was previously the case.
You may also notice Migrations/ in Infrastructure. These are the migrations that we use with migratordotnet (see the sketch after the folder listing below).
Finally, the UI layer should look familiar – this is a subset of its folders.
src/
  Application/

  Domain/
    Model/
      Provisioning/
      Configuration/
    Services/
      Provisioning/
      Configuration/
    Logging/

  Infrastructure/
    Provisioning/
    Configuration/
    Logging/

  Tests.System/
    Acceptance/
      Provisioning/
      Configuration/
    Smoke/

  Tests.Integration/
    Fixtures/
    Provisioning/
    Configuration/
    Migrations/

  Tests.Unit/
    Fixtures/
    Model/
    Provisioning/
    Configuration/
    Logging/
    Services/
    Site/

  Ui/
    Features/
    Layouts/
    Package/
    PageLayouts/
    Stapler/
    ...
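On the Migrations/ folder: with migratordotnet a migration is a numbered class with Up and Down methods. A sketch only – the table, column and version number are invented, and your migrations may of course be configuration changes rather than schema changes:

using System.Data;
using Migrator.Framework;

// Invented example: a versioned, reversible change run by migratordotnet at deployment time.
[Migration(20140603001)]
public class AddAuditLogTable : Migration
{
    public override void Up()
    {
        Database.AddTable("AuditLog",
            new Column("Id", DbType.Int32, ColumnProperty.PrimaryKeyWithIdentity),
            new Column("Message", DbType.String, 255));
    }

    public override void Down()
    {
        Database.RemoveTable("AuditLog");
    }
}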

Writing code in our layers in practice

The cadence of the developers’ work is also based on this separation. It generally looks like this:
  1. write acceptance tests (e.g. given/when/then – see the sketch after this list)
  2. begin coding with tests
  3. sometimes starting with Unit tests – eg new Features, or jQuery widgets
  4. in practice, because it is SharePoint, move into integration tests to isolate the API task
  5. complete the acceptance tests
  6. write documentation of SharePoint process via screen shots
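For a sense of step 1, an acceptance test reads as given/when/then. A rough sketch in plain NUnit – the types are the invented ones from the layering sketch above, and you may well prefer a proper BDD framework:

using Microsoft.SharePoint;
using NUnit.Framework;
using Company.Intranet.Domain.Services;
using Company.Intranet.Infrastructure.Configuration;

[TestFixture]
public class ConfigureTeamSiteAcceptanceTest
{
    [Test]
    public void TheSiteCarriesTheAgreedTitleOnceConfigured()
    {
        // Given a site definition agreed with the business
        var definition = new SiteDefinition { Url = "http://dev-intranet/sites/team", Title = "Team Site" };

        // When the site is configured
        new SiteConfigurator().Configure(definition);

        // Then the site reports the agreed title
        using (var site = new SPSite(definition.Url))
        using (var web = site.OpenWeb())
        {
            Assert.AreEqual("Team Site", web.Title);
        }
    }
}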
We also have a task sheet for estimation (for sprint planning) that is based around this cadence.
[Image: task estimation sheet for a story in Scrum around a SharePoint feature]

A note on test stability

Before I finish this post and start showing some code, I just want to point out that getting stable deployments and stable tests requires discipline. The key issues to allow for are the usual suspects:
  • start scripted deployment as early as possible
  • deploy with scripts as often as possible, if not all the time
  • try to never deploy or configure through the GUI
  • if you are going to require a migration (GUI-based configuration), script it early: while it is faster to do it through the GUI, that is a developer-level (local) optimisation for efficiency and won’t help with stabilisation in the medium term
  • unit tests are easy to keep stable – if they aren’t then you are seriously in trouble
  • integration tests are likely to be hard to keep stable – ensure that you have the correct setup/teardown lifecycle and that you can fairly assume that the system is clean (see the sketch after this list)
  • as per any test, make sure integration tests are not dependent on other tests (this is standard stuff)
  • system smoke tests should run immediately after an installation and should be able to be run in any environment at any time
  • system smoke tests should not be destructive precisely because they are run in any environment, including production, to check that everything is working
  • system smoke tests shouldn’t manage setup/teardown because they are non-destructive
  • system smoke tests should be fast to run and fail
  • get all these tests running on the build server asap
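To make the setup/teardown point concrete, here is the shape of integration test fixture we aim for – again a sketch with invented names; each test gets its own throw-away web and leaves the environment as it found it:

using System;
using Microsoft.SharePoint;
using NUnit.Framework;
using Company.Intranet.Domain.Services;
using Company.Intranet.Infrastructure.Configuration;

[TestFixture]
public class SiteConfiguratorIntegrationTest
{
    private SPSite site;
    private SPWeb web;

    [SetUp]
    public void CreateThrowAwayWeb()
    {
        // A uniquely named web per test means tests cannot depend on one another.
        site = new SPSite("http://dev-intranet");
        web = site.RootWeb.Webs.Add("it-" + Guid.NewGuid().ToString("N"));
    }

    [TearDown]
    public void RemoveThrowAwayWeb()
    {
        // Clean up so the next test (or test run) can fairly assume the system is clean.
        web.Delete();
        web.Dispose();
        site.Dispose();
    }

    [Test]
    public void ConfigureSetsTheTitle()
    {
        new SiteConfigurator().Configure(new SiteDefinition { Url = web.Url, Title = "Team Site" });

        using (var reopenedSite = new SPSite(web.Url))
        using (var reopenedWeb = reopenedSite.OpenWeb())
        {
            Assert.AreEqual("Team Site", reopenedWeb.Title);
        }
    }
}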

Test-first and test-last development

TDD does not need to be exclusively test-first development. I want to suggest that different layers require different strategies but, most importantly, that there is a consistency to the strategy to help establish cadence. This cadence is going to reduce transaction costs – knowing when we are done, quality assurance for coverage, moving code out of development. Above I outlined writing code in practice: acceptance test writing, then unit, integration and finally acceptance test completion.
To do this I test-last acceptance tests. This means that as developers we write BDD-style user story (given/when/then) acceptance tests. While these are written first, they are rarely test-driven because we might not then actually implement the story directly (although sometimes we do). Rather, we park them. Then we move into the implementation which is encompassed by the user story, but we move into classical unit test assertion mode in unit and integration tests. Where there is a piece of code that is clearly unit testable (models, validation, services) it is completed test-first – and we pair on it, using ReSharper support to code outside-in. We may also need to create data access code (ie SharePoint code) and this is created with integration tests. Interestingly, because it is SharePoint we break many rules. I don’t want devs to write Infrastructure code test-last, but often we need to spike the API. So, we actually spike the code in the integration test and then refactor to the Infrastructure as quickly as possible. I think that this approach is slow and that we would be best to go test-first, but at this stage we are still getting a handle on good Infrastructure code to wrap the SharePoint API. The main point is that we don’t have untested code in Infrastructure (or Infrastructure code lurking in the UI). These integration tests, in my view, are test-last in most cases simply because we aren’t driving design from the tests.
At this stage, we have unfinished system acceptance tests, plus code in the Domain and Infrastructure (all tested). What we then do is hook the acceptance test code up. We do this instead of hooking up the UI because then we don’t kid ourselves about whether or not the correct abstraction has been created. Having hooked up the acceptance tests, we can then simply hook up the UI. However, the reverse has often not been the case. Nonetheless, the most important point is that our Domain/Infrastructure code has been hooked up by two clients (acceptance and UI), and this tends to prove that we have a maintainable level of abstraction for the current functionality/complexity. This approach is akin to when you have a problem and you go to multiple people to talk about it: by the time you have had multiple perspectives, you tend to get clarity about the issues. Similarly, in allowing our code to have multiple conversations, in the form of the client libraries that consume it, we know the sorts of issues our code is going to have – and hopefully, because it is software, we have refactored the big ones out (ie we can live with the level of cohesion and coupling for now).
I suspect that for framework or even line-of-business applications, SharePoint being one of many, we should live with the test-first and test-last tension. Test-first is a deep conversation that, in my view, covers off so many of the issues. However, like life, these conversations are not always the best ones to have every time. But for the important issues they will always need to be had, and I prefer to have them early and often.
None of this means that individual developers get to choose which parts get test-first and which test-last. It requires discipline to use the same sequencing for each feature. This takes time for developers to learn and leadership to encourage (actually: enforce, review and refine). I am finding that team members can learn the rules of a particular code base in between 4 and 8 weeks, if that is any help.
