Many of our customers come to us with a clear target and want us to help them achieve it. Often they have set their sights too low, and we try to convince them to aim higher.

This is the story of one customer with no in-house development capability, whose previous software development experiences had been underwhelming. They had employed a development company to deliver a major revamp of the core customer and payment system. The project had been running for some time, the product was perpetually delayed, and the customer's patience was gone. They took on a new project management organisation, which was carefully managing their expectations and moving them towards a more agile approach. They also had new development leadership, whose analysis of the implementation was damning. The customer accepted that they would need a reboot, but their tolerance for risk was minimal. We were brought in to provide a full test automation solution.

The management company’s priority was to keep the customer’s expectations well within what was possible and eliminate any risk of promising more than could be achieved. Our priority was to show how much further a good build and test regime could safely go.

Looking for Early Successes

Though the project was a complete revamp, the business requirements hadn’t changed. The existing team members were way ahead of us in domain knowledge. In order to deliver value as soon as possible, we isolated the area the project managers and customer considered the greatest risk and planned to deliver tests solely for it. We would expand our knowledge of the domain through continuous collaboration with the project’s subject matter experts and developers. The project managers agreed we would not plan beyond that first area of testing, as too much was likely to change early in the project.

Our Initial Scope

We needed to quickly produce regression tests that would allow other teams to continue their work without having to manually rerun large numbers of existing tests every time a new feature was merged into the project's source. Initially, the customer insisted that we focus on UI-driven regression tests, as they believed this would reduce the amount of regression testing required. As early development was going to be restricted to the most complicated and highest-risk features, our early scope was limited in the same way. The agreed aims were to prove that all workflows had at least one working path through them, to cover each functional area with at least one UI regression test where possible, and to fully cover with UI tests any areas that could not feasibly be tested below the UI layer.

Project management accepted that if the architecture of the new application changed to allow for service testing (bypassing the UI), we would add service tests to the scope. Service tests would supplant UI-driven functional tests where feasible. Obviously, we planned to encourage the new delivery team to design testability into the revamped project, so that slow and brittle UI tests could be eliminated where possible.

At the time we joined the project there was minimal continuous integration. There was a build server that ran all the unit tests after each compile, but those compiles were triggered manually. The project management organisation did not want to budget for the time to adopt continuous integration, so we agreed to deliver only repeatable tests that could be run in batches, would run without intervention, and would produce reports suitable for use by all team members. “Continuous integration ready”, as it were.

The First Few Weeks

When we joined as the new “Test Automation Team” for the project, there was not yet a clear plan. No one was sure whether the old application would be improved, strangled or dropped. We needed to show results as soon as possible, so we wrote our initial tests over the old, partially-functional application. It was not designed for testing and we were forced to drive our tests through the web front-end. We started in the highest-risk area, which used formulas defined in wordy laws and legal appendices. The most valuable outputs of the exercise were a rapid familiarisation with that area of the domain and good test assets in the form of simple CSV files for our data-driven tests. These assets would let us prove vital components of the revamped application, no matter what form it took.
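As a concrete illustration, here is a sketch of how such a data-driven case list can be consumed, assuming a hypothetical payment-cases.csv whose columns (CaseId, Income, Deductions, ExpectedPayment) are invented for this example. PowerShell is used only for consistency with the later examples; at this stage the cases were still driven through the web front-end.

    # payment-cases.csv (columns invented for illustration), one row per case
    # exercising the formulas set out in the legislation:
    #   CaseId,Income,Deductions,ExpectedPayment
    #   LEG-001,25000,1500,235.50
    $cases = Import-Csv "payment-cases.csv"

    foreach ($case in $cases) {
        # A UI test would enter the inputs into the web front-end and read back
        # the calculated payment; the CSV and the loop are the reusable assets.
        Write-Host "Case $($case.CaseId): expecting a payment of $($case.ExpectedPayment)"
    }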

While we continued with this, the decision was made to extract this high-risk component from the old application into a standalone service. This gave us a single binary (in this case, a .NET assembly) and an API that we could create new tests against, and which would also be used to build the new application. We now had our first opportunity to move checks down from the user interface and into integration tests.

As the automation team were not trained developers, the management company were concerned that any attempt to write tests in C# would result in frequent calls for assistance that would delay the development team. We agreed to write our tests using PowerShell, which can easily call into .NET assemblies, and which the development team had no experience with. While we felt this would delay the overall delivery of the product, the concerns of the management company and the customer were too great to ignore. Deliberately working in a language different from that of the core development team incurred the risk of missing or misunderstanding development decisions. The decision was made not to form cross-functional teams, which would have eliminated this risk. Instead, we partially mitigated it by making me a member of both teams, collaborating on all planning and estimation tasks, and sharing as much information as I could.
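As a rough sketch of what those PowerShell integration tests looked like, assuming a hypothetical Payments.Core assembly exposing a PaymentCalculator type (the path, type and method names are placeholders, not the customer's real API), replaying the same CSV cases against the extracted component took only a few lines:

    # Load the extracted component's assembly directly: no UI, no deployed environment.
    # The assembly path and type below are placeholders for the real service's API.
    Add-Type -Path "C:\build\output\Payments.Core.dll"

    $cases = Import-Csv "payment-cases.csv"   # the same data-driven assets as before
    $failures = 0

    foreach ($case in $cases) {
        $calculator = New-Object Payments.Core.PaymentCalculator
        $actual = $calculator.Calculate([decimal]$case.Income, [decimal]$case.Deductions)
        if ($actual -ne [decimal]$case.ExpectedPayment) {
            Write-Warning "Case $($case.CaseId): expected $($case.ExpectedPayment), got $actual"
            $failures++
        }
    }

    if ($failures -gt 0) { exit 1 }   # non-zero exit code so a build server can fail the run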

Within weeks we had duplicated all the UI tests written against the old application as new integration tests written against the new service. While the rest of the team expanded the test coverage and their domain knowledge, I focused on continuous integration. First I installed Jenkins locally, hooked it up to the source repository, and wrote a little XSLT that allowed Jenkins to publish our test results. Then I hooked this up to the recently adopted Atlassian cloud applications we were using: build statuses were echoed to HipChat, and full build results were published to new Confluence pages after every build. Once this was all working, I demonstrated it to the various teams working on the project and suggested that we set up Jenkins to replace our existing build server. Within days, the development team had all their builds running through Jenkins. Within weeks, everyone was comfortable using Jenkins for distributed builds, static analysis, build and test reporting, and triggering deployments to test environments via Octopus Deploy.
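The XSLT step itself was a small piece of glue. Something along these lines, run after the test batch, is enough to turn a runner's raw XML output into a format Jenkins can publish and chart; the stylesheet and file names here are illustrative rather than the project's actual ones:

    # Illustrative post-test step: transform raw results XML into a report format
    # Jenkins understands. Stylesheet and file names are placeholders.
    $xslt = New-Object System.Xml.Xsl.XslCompiledTransform
    $xslt.Load("C:\build\results-to-junit.xslt")
    $xslt.Transform("C:\build\raw-results.xml", "C:\build\test-results.xml")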

Looking Forward

The project was already benefiting in many ways from the improved build and test regime. Build times were down thanks to the relative speed of integration tests. Compile and test failures were noticed and resolved sooner thanks to continuous integration. Collaboration increased, both because of improved reporting and because of a well-used API shared outside the core development team. The time spent on productive specification and development work increased because the time spent on building, deploying and regression testing was all but eliminated. We knew that the teams were only beginning to see the potential improvements. I set a new goal: to grow the test automation team's appetite for change into a culture of continuous improvement across the whole organisation.

The next part of this story shows how we took advantage of these time savings. We were now able to move from a collection of useful compromises to a coherent and continuously evolving automated build and test platform which supported each team’s way of working and encouraged experimentation and improvement.