The reason I was inspired to write this series is that I recently came across the test strategy and test plan documents I wrote for the customer during the first days of the engagement. They had sat unused in an email folder since originally delivered, as such documents so often do. Rereading them was fascinating. I saw where I’d included plans for work I would normally advise against, because the customer had been insistent it was needed. I noted I had toned down good recommendations to fit within what I believed the customer would, at that time, find acceptable. And, most tellingly, there was no mention of ninety percent of what I consider the stand-out achievements of our time on site with the customer. The customer was so averse to taking on new or different approaches that I believe those suggestions wouldn’t have been well received.
Part 1 of the series explained how we worked with the customer to mitigate their initial worries. Part 2 described ways we went beyond what we initially promised in order to speed up the project and introduce a modicum of kaizen. In this final part, I’ll list the ideas in those strategy and plan documents that I find most interesting in retrospect, describe how we initially achieved them, and explain what later became of them.
At the time the documents were written, the future of the project was unclear. The old product was still in development and there was not yet any talk of dropping it, but everyone was sure things weren’t working and that big changes were coming. The documents emphasize that the level of uncertainty ruled out detailed up-front planning. Instead, they make several broad recommendations about how and when to plan.
- We would adopt very short sprints initially, so that we could plan more often for shorter periods. We would increase the duration of sprints as project volatility decreased.
- We would let the team choose their own planning rituals, rather than dictate to them.
- We would maintain a backlog of work that was big enough to fill at least the remainder of the current sprint. If there was less work in the backlog, filling it would be higher priority than taking items from it.
- We would not attempt any low-level designs. If a low-level design was ever needed, it was to be in the form of a prototype (“steel thread” or “tracer bullet”) implementation.
- We would prioritize automating time-consuming “busy work” over automating the testing of new features or stories.
All these decisions came from my preference for lean and agile approaches to work, though I never used either term in the documents. Most of these recommendations were followed, though in different ways at different times. I capitalized on the uncertainty about project direction to push for a shorter planning loop. Initially we had three-day sprints, giving us the benefit of frequent short retrospectives. Soon we moved to one-week sprints, which fitted better with people’s natural work cadence. And within a few months, the development team switched from four-week sprints to three-week ones; at that point, we discarded our sprints entirely and merged with theirs. By then, the backlog of work we had been maintaining could also be discarded, and we moved to using the development team’s scrum backlog. The product owner, who sat on the development team, maintained the backlog very well, and we were happy to leave that work to the expert. Towards the end of the project, with no new development planned and everyone very comfortable with their workload and expected outputs, we dropped all team rituals and relied on informal chats for all planning and communication.
The point about low-level designs was very helpful while we discovered our patterns and extended our development frameworks. It also fits very well with the behaviour-driven development philosophy of preferring examples over rules. As our automation patterns improved, I would hack out a few examples, rework them into a sensible template, and demonstrate a refactored example for the team to adopt or reject.
The last point was about prioritizing the automation of whatever was taking time away from testing. I’m not sure the audience of the documents picked up on it, but the intention was to institute good continuous integration practices, automate deployment and test execution, and eliminate the large amount of tedious reporting that had plagued the project before we joined. We were not brought in with that in mind, but I believe it was the biggest improvement we delivered. You can read more about it in the previous post.
In the original failing project, the organization followed a traditional, siloed development pattern and suffered from the associated poor morale. Everyone wanted to increase communication in order to avoid this in the revamped project. There were three points in the strategy document specifically about this:
- Create a single shared definition of “done” across all delivery teams.
- Share a single sprint plan.
- Create a shared understanding of the application through the collaborative creation of specifications.
The shared definition of “done” was immediately adopted and stood for the duration of the project. It is still posted on their wall, as far as I know. The shared sprint plan was adopted after a few months, as mentioned above. Collaborative creation of specifications never really took off, unfortunately. We did try example mapping and it was generally well received, but the pace of development was too great. Perhaps the stories were too small to work well with it.
Another aspect of collaboration described in the test plan was more technical. I proposed that test coverage be planned with developers in order to reduce duplication and build a shared understanding of who was to automate what. Unit and integration testing would provide depth of coverage, while system and UI-driven testing would offer breadth of coverage without significant depth. In addition, whenever a system or UI test found a bug, I proposed that a developer test be written to detect it before the bug was fixed.
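To make that last proposal concrete, here is a minimal sketch of the idea. The scenario and the `shipping_fee` function are entirely hypothetical, not the customer’s code: imagine a slow UI test revealed that orders at exactly the free-shipping threshold were still charged for shipping. Before the bug is fixed, it gets pinned by a fast developer-level test.

```python
# Hypothetical illustration: a UI test found that orders totalling exactly
# the free-shipping threshold were still charged for shipping.

def shipping_fee(order_total: float, threshold: float = 50.0) -> float:
    """Return the shipping fee; orders at or above the threshold ship free."""
    # The corrected comparison; the original bug used `>` and missed the boundary.
    return 0.0 if order_total >= threshold else 4.95

def test_boundary_order_ships_free():
    # Pins the UI-discovered bug at the cheapest, fastest level.
    assert shipping_fee(50.0) == 0.0

def test_below_threshold_pays_fee():
    assert shipping_fee(49.99) == 4.95

test_boundary_order_ships_free()
test_below_threshold_pays_fee()
print("ok")
```

The payoff is that the regression check now runs in milliseconds on every build, while the broad UI test is free to go looking for the next surprise.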
Unfortunately, the project did not benefit from this sort of collaboration. Occasional code reviews turned up large numbers of system tests that duplicated integration tests, and developer-tester conversations rarely helped us avoid the duplication. This was probably due to the large gap in experience: the application developers were generally very experienced, while most of the test developers had only just learned basic development skills, leading to conversations that were accidentally one-way. The testers simply listened and agreed, without offering sufficient feedback or asking clarifying questions. Later in the project, as the workload lessened, developers found more spare time, so they added more integration tests, which often made existing system tests redundant. There was also a series of long-running automated UI tests created to reproduce some business processes, and these similarly supplanted older UI tests. These were all good tests to add; I believe they should have been added earlier, at development time. This is another point that should have gone into the test plan: writing tests for continuous integration is best done by, or at least shared with, experienced developers, so testers can focus on test design and exploratory testing.
A final aspect of collaboration proposed in the test plan was continuous reporting. At the time we joined the project, the test manager was spending over half a day each week creating and emailing reports. I proposed that all reporting be done automatically, in whatever format was requested. Once Jenkins was driving our continuous integration, I expanded it with a build monitor view, opened a few more browser windows pointing at things like burndowns and deployment histories, and put them all on a screen easily visible to corridor traffic. I was worried that stakeholders would continue to want emailed reports, but the ad hoc dashboard seems to have met everyone’s needs. Between that and the occasional product demonstration, we fully met their reporting requirements without emailing a single report.
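The dashboard idea needs very little machinery, because Jenkins already exposes build status as JSON at endpoints like `<job>/lastBuild/api/json`. Here is a toy sketch of rendering one build as a dashboard line; the sample payload is a trimmed, hypothetical stand-in for a live response (the job name is invented), whereas the real setup polled the running Jenkins instance.

```python
import json

# Trimmed, hypothetical sample of the JSON Jenkins returns from
# <job>/lastBuild/api/json; the real dashboard polled the live endpoint.
sample = '{"fullDisplayName": "regression-suite #214", "result": "SUCCESS"}'

def dashboard_line(payload: str) -> str:
    """Render one build's status as a single line for a wall-mounted dashboard."""
    build = json.loads(payload)
    return f"{build['fullDisplayName']}: {build['result']}"

print(dashboard_line(sample))  # regression-suite #214: SUCCESS
```

In practice we didn’t even need this much code: the Build Monitor view plus a few browser windows did the job. The point is how low the bar is once the information is machine-readable rather than hand-assembled into emails.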
Finally, the test strategy and plan documents offered approaches to improve how testing was carried out. This was the area where the plan and reality diverged the most, due mostly to the decision to drop the old project and start afresh on a green field. My initial recommendations were:
- Make use of technical skills within the existing team.
- Don’t look for new technologies where existing ones are sufficient.
- Don’t look to write new tests where existing tests are sufficient.
- Remove technologies that are not currently adding value.
- Review existing test suites for overly deep coverage (for example, unit testing through the user interface).
- Review broken tests and strongly consider deleting them.
- Don’t review individual working tests.
As we settled into daily project life, we learned that none of the existing automated tests were being run regularly. When they were run, there was usually a significant amount of work required to get them back to working order. Worse, many of the tests were not truly automated: they would wait at various points for testers to manually update application state, perhaps by running SQL scripts. None of this was acceptable for continuous integration: we were unable to use any existing tests or test assets.
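The fix for that kind of half-automation is to move the manual state change into the test itself. A minimal sketch, using an in-memory SQLite database as a stand-in for the real application database (the table and values are invented for illustration):

```python
import sqlite3

def seed_application_state(conn: sqlite3.Connection) -> None:
    """Apply, in code, the state change testers previously made by hand via SQL scripts."""
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, status TEXT);
        INSERT INTO accounts (status) VALUES ('ACTIVE');
    """)

def run_unattended_check() -> str:
    conn = sqlite3.connect(":memory:")  # stand-in for the real database
    seed_application_state(conn)        # no pause for a tester to intervene
    status = conn.execute("SELECT status FROM accounts").fetchone()[0]
    conn.close()
    return status

print(run_unattended_check())
```

Once setup runs unattended like this, a test can be scheduled by a CI server at any hour, which is precisely what the legacy suites could not do.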
Therefore, the technologies and patterns we used were all selected without any reference to existing code. We started from scratch, even before the decision to bin the old product was made. In my experience, starting fresh always produces cleaner, more cohesive results, and is usually faster than updating existing assets created by another team. In hindsight, it was definitely wrong to make those suggestions, no matter how well the customer received them: if we weren’t prepared to review each existing test, we should have deleted them and replaced them with tests we had confidence in. Fortunately, project decisions made in the first couple of months meant we didn’t have to maintain those old tests on those (barely) sufficient technologies; instead, we were able to create new tests on ideal foundations.
This project has finished and the product has been successfully released. Everyone involved is happy with the result, and I hope that most have learned a few things on the journey. I know that I’ll still be asked to write strategies and plans that will be obsolete before the project is half done, but with luck, I’ll also get to work on projects where the strategy is to make it safe to fail and the plan is to improve, improve, improve.