Automation: What, Why, How & the cost if you don’t

What is Automation?

“Automation”, Wikipedia states, “is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes. Test Automation can automate some repetitive but necessary tasks in a formalised testing process already in place, or perform additional testing that would be difficult to do manually. Test Automation is critical for continuous delivery and continuous testing”.

I would add to this that Test Automation is software crafted using the same source-code management tools, build tools, and design principles as the production software being targeted by the tests, often in the same programming languages. Test Automation should, therefore, be approached with the same mindset as production development. I would also strongly disagree that it is always ‘separate from the software being tested’: automation includes unit tests, integration tests, API tests, data configuration, stand-up/tear-down scripts, build and deployment, and many more examples.

Why do we need Automation?

Not having automation (even if we restrict our gaze to ‘Test Automation’ alone) leads to some very bad outcomes:

  • It is harder to increase the quality of an existing product
  • Unintentional changes continue unnoticed
  • The project state is very hard to gauge
  • Feedback opportunities are lost
  • Testing is more likely to be cut
  • Testing outputs (defects) are more frequent and more numerous, disrupting Developers and forcing them to context-switch back to old work
  • Testers will always be at least 1 sprint behind Development

A team that integrates automation into its work will be a very different beast:

  • Good engineering practices take hold: pair-programming, code reviews, automated unit testing, refactoring with safety, CI, CD, etc.
  • Hand-off between Dev and Test is so minimal it’s hardly noticeable, so strong is the collaboration between them
  • Faster feedback when things break or aren’t working yet (features in Dev can be fixed while still being worked on)
  • Automation at different levels – unit tests, integration tests, service-layer tests, UI tests, performance tests, automated builds, automated releases, etc.

Common myths about Test Automation

  • “automated tests are written only once the feature to be tested is stable” – this is a very limited outlook. You will always be struggling, because there is never enough time to automate features this way during a sprint. Yes, existing features should be under automated tests, but the fact that a feature is ‘out there’ without an automated test points to deficiencies in the initial approach, and is not an Agile strategy. It doesn’t work with DevOps either. It is hard to write automated tests up-front, but studies from companies like Microsoft show it is worth the effort.
  • “automation is the process of automating existing test cases” – commonly, an SDET is handed an Excel spreadsheet with hundreds of manual test cases in it and tasked with translating them into coded tests. This leads to the request for “100% automation”, which is a meaningless metric. It has no value, because the test cases themselves are almost certainly obsolete. No-one ever asks whether the test cases were of any value in the first place, or whether they are still relevant. This process also does nothing to help the Developers.
  • “it can be achieved via the use of code-less tools” – managers under the cosh of budget constraints love the sound of this, but in my experience it is a fallacy. You always end up supplementing the tool with your own code, while never having access to theirs, and the tool quickly becomes a hindrance rather than a help. If it sounds too good to be true, it probably is. Better to create your own suites using the most appropriate open-source libraries and frameworks available.
  • “automation is an activity carried out by the Test Team” – in the old days of Waterfall projects and official hand-offs, this was the case. To this day, this misconception shapes teams and places automation firmly in the grasp of the QAs. In reality, automation is the responsibility of the entire team. Developers in Agile projects write unit tests and integration tests where appropriate for new pieces of code, and write them for old code in order to safely refactor it. Developers and SDETs write API tests, UI tests, and non-functional tests such as load tests. Sys Admins write automated environment-creation tools. DevOps engineers write automated tests for environment configuration, and so on.
  • “it’s always a separate codebase to the software being tested” – another throwback to the bad old days of Waterfall projects, when automation, if done at all, was done by a separate team, or even a separate company. The likelihood that a separate codebase containing the tests will stay in sync with the production code is virtually nil. It is always preferable that as many of the tests as possible live inside or beside the production code and are executed every time a build is created – for instance, in a pipeline.

How to leverage Automation

When I think of automation, I immediately think of BDD – Behaviour Driven Development. A lot of people conflate BDD with testing, but this is wrong. BDD is a “whole team” discipline that maps directly to the core Agile principles.

“BDD is not about testing; the core of BDD is the conversations…” – Dan North

Automation plays a key role in BDD, at all stages of the SDLC, starting with the output from team discussions.

  • Example Mapping – features and business rules are mapped to concrete examples and acceptance criteria during ‘3 amigos’ sessions, which are held whenever they are needed
  • Executable Specifications – automation is not about test cases; it should be used to guide development: first with ATDD (Acceptance Test Driven Development) at the high level, using Conditions of Satisfaction / Acceptance Criteria, then with TDD (Test Driven Development) at the lower, fine-grained level (see the sketch after this list). TDD is really a code-design methodology, not a testing methodology.
  • Declarative, not imperative – i.e., use business-speak, not techy-speak
  • F.I.R.S.T – tests should be: Fast. Independent. Repeatable. Self-Validating. Timely.
  • Safety Net – code needs refactoring or it will rot. Without tests in the code, this is not only difficult, but dangerous. With tests in place, you can safely refactor the code and the tests will tell you immediately if you have unintentionally changed the behaviour.
  • Rinse and repeat – talk often about features until they meet the agreed criteria.
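To make the TDD rhythm concrete, here is a minimal sketch in Java with JUnit 5, assuming a hypothetical PriceCalculator and discount rule invented purely for illustration. The test is written first and fails (red), just enough production code is written to make it pass (green), and then both are refactored under the test’s protection.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Step 1 (red): this test was written before PriceCalculator existed, so it failed.
// Step 2 (green): write just enough of PriceCalculator to make it pass.
// Step 3 (refactor): clean up test and production code, re-running the test as a safety net.
class PriceCalculatorTest {

    @Test
    void ordersOfOneHundredOrMoreGetATenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(180.0, calculator.totalFor(200.0), 0.001);
    }
}

// Minimal production code, written only after the test above was red.
class PriceCalculator {
    double totalFor(double orderValue) {
        return orderValue >= 100.0 ? orderValue * 0.9 : orderValue;
    }
}

Note that such a test is also F.I.R.S.T: it runs in milliseconds, depends on nothing external, and validates itself.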
ATDD

Ubiquitous Language – Domain Specific

  • Business Speak – The DSL (Domain Specific Language) of the business should permeate the higher levels of the automation. In other words, all tests should read as pseudo-English, and wherever possible should use business terminology. The Acceptance Criteria agreed upon by the team should form the basis of the executable specifications. Very often, we capture these as:
    • Given / When / Then
    • As a <role> I want <feature> so that I can <goal>
  • Abstraction – All the complexity of the inner workings of the tests should be abstracted away, hidden, so that tests can be written more quickly and read more easily.

E.g., the following three examples are an end-to-end test, an API test scenario, and a unit test, respectively.

A Screenplay Pattern example of putting business requirements into code. The nuts-and-bolts of the test code are abstracted away.
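The original screenshot is not reproduced here, but the idea can be sketched in plain Java. This hand-rolls the Screenplay Pattern’s Actor/Task/Question roles rather than using a real library such as Serenity BDD, and the to-do-list domain (AddAnItem, TheVisibleItems) is entirely hypothetical:

import java.util.ArrayList;
import java.util.List;

// A hand-rolled sketch of the Screenplay Pattern: an Actor performs business-level
// Tasks and asks business-level Questions; the technical mechanics live inside the Tasks.
interface Task { void performAs(Actor actor); }
interface Question<T> { T answeredBy(Actor actor); }

class Actor {
    final String name;
    final List<String> todoList = new ArrayList<>(); // stands in for the real application under test

    Actor(String name) { this.name = name; }

    void attemptsTo(Task... tasks) {
        for (Task task : tasks) task.performAs(this);
    }

    <T> T asksFor(Question<T> question) { return question.answeredBy(this); }
}

// Business-speak tasks: the reader sees *what* is done, not *how*.
class AddAnItem implements Task {
    private final String item;
    private AddAnItem(String item) { this.item = item; }
    static AddAnItem called(String item) { return new AddAnItem(item); }
    public void performAs(Actor actor) { actor.todoList.add(item); } // real UI/API calls would live here
}

class TheVisibleItems implements Question<List<String>> {
    public List<String> answeredBy(Actor actor) { return actor.todoList; }
}

class ScreenplayExample {
    public static void main(String[] args) {
        Actor james = new Actor("James");
        james.attemptsTo(AddAnItem.called("Buy some milk"));
        if (!james.asksFor(new TheVisibleItems()).contains("Buy some milk")) {
            throw new AssertionError("James should see the item he added");
        }
    }
}

The test reads as business requirements (“James attempts to add an item called…”) while the nuts and bolts stay hidden inside the Task implementations.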
This is a coded version of the criteria that came out of a 3 amigos session. The code that drives it is elsewhere.
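Again, the screenshot itself is lost, but criteria from a 3 amigos session are typically captured as a Gherkin scenario like the hypothetical one below; the step-definition code that drives it lives in a separate file:

Feature: Account balance API

  Scenario: Retrieving the balance of an existing account
    Given an account "ACC-123" exists with a balance of 50.00
    When a client requests the balance for account "ACC-123"
    Then the response status is 200
    And the reported balance is 50.00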
Even at the fine-grained unit-test level, we can still structure our tests along the same “Given / When / Then” lines to make them more readable. It’s just that we are testing smaller pieces of the puzzle.
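That screenshot is likewise not reproduced, but a JUnit 5 test structured along those lines might look like the following, where the BankAccount class is hypothetical:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class BankAccountTest {

    @Test
    void depositingIncreasesTheBalance() {
        // Given an account with a balance of 50.00
        BankAccount account = new BankAccount(50.00);

        // When 25.00 is deposited
        account.deposit(25.00);

        // Then the balance is 75.00
        assertEquals(75.00, account.balance(), 0.001);
    }
}

// Hypothetical production class, included only so the example is self-contained.
class BankAccount {
    private double balance;
    BankAccount(double openingBalance) { this.balance = openingBalance; }
    void deposit(double amount) { balance += amount; }
    double balance() { return balance; }
}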

How teams integrate Automation into their work

  • Engagement between business folks and Developers – to identify the key examples and Acceptance Criteria before work starts, and discuss with Developers the best way to automate these Acceptance Criteria. It might be a UI test, it might be a unit test, or some hybrid test; there are many choices.
  • Avoid big-bang design – this leads to hand-offs between Dev and Test
  • Involve every team in the Example-Mapping – a representative from each function should be present to take part in the criteria and design discussions. Part of the design of the feature will involve making it testable, so questions must be asked about how we go about making that happen.
  • Involve every team in the planning – again, a representative from each function should take part in the planning and refinement of the feature.
  • Automate your requirements – i.e. don’t automate your test cases. Automate the agreed upon specification / criteria in order to have a common source of truth that will help the entire team deliver faster.
  • Make your automated requirements part of your definition of done – how else do we know we are done?
  • Rethink automation – it’s not about automating manual test cases; it’s a way of validating what your application should do, not simply checking what it does.
  • Collaboration tool – automation used correctly is a collaboration tool, not just a regression checker.
  • Drive the team forward – as well as helping the team collaborate, automation should help to make their work easier and more efficient.

The cost of cutting Automation

The Project Management Institute came up with the “Iron Triangle” to describe the main options (and effects) available in a software project.

Iron Triangle
  1. Reduce quality – sadly a ‘go-to’ in the industry, e.g. “let’s add automated tests later”. It’s short-sighted, and will haunt you later in the guise of tech debt, reputational damage, loss of revenue, and even legal action. It’s even worse when you consider that “Quality” isn’t even one of the triangle’s sides, so it shouldn’t be one of the options!
  2. Add resources – Fred Brooks wrote in “The Mythical Man-Month” that adding resources to an already late project only makes it later. Think of the confusion, the training, the equipment setup, the likelihood of more defects, etc.
  3. Extend the schedule – favourable to Developers and Testers, but not so much to the business. Imagine the promises already made, the synchronisation with expensive marketing campaigns…
  4. Change the scope – disappointing, especially to the Product Owner and Customer. However, the team will have been working on the highest priorities first, and those are not up for grabs, so changing scope shouldn’t hurt too badly.

The Test Pyramid – an old idea, still useful as a guide

If quality is sacrificed, as alluded to above, the Test Pyramid below is turned on its head: releases become dependent upon “late to the party”, slow, mass-inspection manual test sessions, with a heavy reliance on brittle UI tests.

Ideally, new features should have accompanying tests written into the code as the feature is being created. The foundation of automation is unit tests, which grow organically as more code is added, expanding the safety net that allows for even faster changes in the future. They also verify the product continuously, giving us a constant reading on its state.

Working with legacy code, however, as any Developer will tell you, makes it hard to write unit tests after the fact. It can be scary to touch someone else’s code, especially if they are no longer in the company. Some code can make you stare at the screen, unable to move, because you simply can’t fathom out what it does! Eventually, however, the old code will need to be refactored, and this is the time to add a test or two – to at least confirm your thoughts on what it does, and to give you the beginnings of a safety net for refactoring. Once the code is refactored, more tests can be added, and the next Developer to touch it will have an easier time of it.
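A practical first step here is what is often called a characterisation test: pin down the code’s current observable behaviour before changing anything. A minimal sketch, assuming a hypothetical LegacyInvoiceFormatter standing in for whatever inherited code you are facing:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// A characterisation test makes no claim that the behaviour is *correct*,
// only that it is *current*: the expected values were captured by running
// the legacy code and recording what it actually returned.
class LegacyInvoiceFormatterCharacterisationTest {

    @Test
    void pinsDownCurrentReferenceFormat() {
        LegacyInvoiceFormatter formatter = new LegacyInvoiceFormatter();
        assertEquals("INV-24-0042", formatter.reference(2024, 42));
        // A surprising edge case (single-digit year), captured as-is rather than "fixed":
        assertEquals("INV-9-0007", formatter.reference(2009, 7));
    }
}

// Stand-in for the inherited code we don't yet fully understand.
class LegacyInvoiceFormatter {
    String reference(int year, int sequence) {
        return "INV-" + (year % 100) + "-" + String.format("%04d", sequence);
    }
}

With these assertions green, refactoring can begin; any accidental change in behaviour turns them red immediately.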

Test Pyramid