It’s been a while since I wrote the first two parts of this blog series. In that time, I have read various posts and articles from the testing community that take the stance that BDD is all about testing, or that at least muddy the waters with phrases like “BDD Testing” and “BDD Test Script”. Much of the time, these tests are created after the feature has been developed, and so the advantages of BDD have already been lost: the use of Gherkin syntax and automation alone is not enough for us to say we have a BDD approach.
If you have read part 1 and part 2 of this series, you’ll know my thoughts and feelings on that particular stance and on the phrase “BDD Testing”. Dan North himself is quoted as saying “BDD is not about testing”. In fact, he encouraged testers to use other approaches for the problems BDD was never intended to solve, such as test coverage.
“The core of BDD is the conversations….” – Dan North.
You can’t shift testing left, or for that matter test early, if you have the mindset that BDD is all about testing; that it is the remit of testers; that the frameworks are meant purely for testing.
Take the example I used in part 2, JBehave. I could equally have used Cucumber to make the point; it doesn’t matter which. These are not testing tools. The idea behind them is to facilitate Behaviour Driven Development: they are collaboration tools, there to guide the development process. As I mentioned in part 2, you want to end up with executable specifications, not just glossy automation tests in which only one function of the team takes an active interest.
I’ve been following John Ferguson Smart’s posts on LinkedIn. He makes the point, more eloquently than I can, that you are not trying to build an automation framework; you are trying to build a collaboration framework. I’ve tried to show in this series that whatever tool or framework you use, communication and collaboration are the keys, and you have to approach it as a team right from the beginning of a feature’s lifecycle. The team needs to be organised around collaboration; otherwise you end up with plenty of noise in the form of feedback, but no ability to actually act on it.
The tester’s focus in this process is to get involved in conversations with the business and the developers, as pointed out in part 2. Our job is to flush out false assumptions and gaps in the thinking. Questions such as “Who”, “What”, “Where” and “Why” should be on everyone’s lips, but the tester’s mindset is arguably geared towards these kinds of questions. What I’ve found extremely useful is to stop thinking of the resulting Gherkin-style scenarios as test cases; they aren’t. They are guides for development, there to ensure the desired behaviour is captured and that the acceptance criteria mirror the business value we want to deliver. In other words, an executable specification. When I think of the phrase ‘test case’ I imagine a much more imperative script; these still have their place, of course, but they are not synonymous with executable specifications.
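To illustrate the distinction, here is a minimal, hand-rolled sketch in plain Python of what “executable specification” means in practice. It deliberately uses no Cucumber or behave dependency, and the account-withdrawal domain and every name in it are invented for the example: the point is simply that the scenario text states desired behaviour, and the code is checked against it.

```python
# A Gherkin-style scenario captured from a conversation. It states
# desired behaviour and business value, not step-by-step test actions.
SCENARIO = """\
Given an account with a balance of 100
When the holder withdraws 30
Then the balance should be 70"""

class Account:
    """The production code under specification (toy domain)."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount

def run_scenario(text):
    """Interpret each Given/When/Then line against the Account class."""
    account = None
    for line in text.splitlines():
        words = line.split()
        if words[0] == "Given":
            account = Account(balance=int(words[-1]))
        elif words[0] == "When":
            account.withdraw(int(words[-1]))
        elif words[0] == "Then":
            assert account.balance == int(words[-1]), line
    return account

run_scenario(SCENARIO)  # passes silently: behaviour matches the spec
```

In a real project a tool like Cucumber or JBehave plays the role of `run_scenario`, binding each plain-language step to code; the scenario itself remains readable by everyone in the conversation.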
Another aspect of BDD is outside-in development, whereby the developer and tester use the captured conversations to write a test for the feature that cannot yet pass (the feature isn’t implemented), then write the feature implementation using those same conversations as their guide. Once the test passes, we know the feature has been implemented as we intended. The next stage, the refactor stage, uses that passing test as a safety net.
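That loop can be sketched in a few lines of plain Python. The `greet` feature and its expected behaviour are invented stand-ins for whatever the captured conversations actually specify; only the order of the steps matters here.

```python
# Step 1 (red): write the test first, straight from the captured
# conversations. With no implementation yet, it can only fail.
def test_greets_user_by_name():
    assert greet("Dana") == "Hello, Dana!"

# Step 2 (green): implement the feature, using the same conversations
# as the guide, until the test passes.
def greet(name):
    return f"Hello, {name}!"

# Step 3 (refactor): with the passing test as a safety net, restructure
# the implementation freely; any change in behaviour fails the test.
test_greets_user_by_name()  # passes: feature behaves as intended
```

The important part is the sequencing: the test exists before the feature does, so a passing test marks the moment the intended behaviour has actually been delivered.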
Outside-in development is not a replacement for automation, and automation does not replace outside-in development. You need both. You will also need other forms of testing, including exploratory testing and performance testing, and you aren’t going to find the solution to those in BDD.
Once the necessary team players are on board with collaboration in general, and the BDD methodology in particular, features flow with an ease that, for me, brings a much higher level of enjoyment in the work. In stark contrast, the bad practice of playing catch-up in automation, or of trying to keep pace and quality while separating key functions of the team, is a non-starter. The worst experiences of my career as a QA were on teams that thought they had things sorted simply because they had an automation framework, only to discover that the business rarely spoke to the developers, and the developers rarely spoke to the testers. You end up with the classic bottleneck: automation that is out of date, features that are poorly tested and often delayed, and a mismatch between the delivered feature and the users’ expectations.
To collaborate, we need to communicate. Lots.
Many thanks for reading!
© Copyright 2022 Cognito Square Ltd