Collaborate with JBehave & Serenity – part 2 of 3

In part 1, on the need for collaboration, I mentioned the idea of shifting testing left; an old idea that has only really gained traction fairly recently. This is the process of involving the entire team, cross-functionally, in the assurance of quality as early as possible – rather than it being solely the testers’ ‘problem’ at the end of the development cycle, or worse – after the sprint has long since finished.

You can’t keep pace and scale if your environment sounds something like this:

  • your testers sit in a bubble, desperately trying to keep up with new features being churned out by a separate development team who have since moved on to shiny new things;
  • SDETs trying to clear a backlog of already out-of-date automation tickets for features that are already live;
  • BAs coming to the developers with new features – and an assumed fully-baked solution to go with them;
  • management who want 100% automation without being able to explain what that even means;
  • POs/BAs working on new features without involving testers from the beginning.

The silos need to be broken down. As I mentioned before, you need to convince the business stakeholders of the benefits of involving testers up-front, and you need to convince developers that collaborating will speed things up, not slow them down. The key to all of this is communication and trust. Talk to each other, all the time. I personally prefer 3-amigos sessions to planning meetings. Firstly, most planning meetings involve too many people and quickly become unfocused; they often go off-subject, descending into tedious discussions about which column this ticket or that ticket should sit in, or why someone thinks a story is a 3 versus a 5 on the Fibonacci sequence – the result of which is usually a sudden change of opinion to avoid embarrassment. Secondly, they are expensive, since they lock key players in an often low-value meeting for an hour where they aren’t really adding business value. Thirdly, they happen dogmatically once per sprint – normally once every 2 weeks, although I have seen 4-week sprints as well.

In contrast, 3-amigos sessions can happen on an ‘as-needed’ basis, which should be more often than your planning meeting; they are focused on reaching a shared understanding of a feature through concrete examples and tend to allow for more engagement and creativity, involving the key roles needed. (Avoid a dogmatic approach of always having a BA/PO, a dev and a tester.)

It doesn’t need to be 3 people either – the aim is to ensure the core functions that will work on the feature are present. I’ve had productive 3-amigos sessions with up to 5 people (PO/BA, Dev, Tester, UX, DevOps). I’ve also had 3-amigos sessions that focused on a background framework crucial to the delivery but not user-centric, involving a dev, a tester and a platform engineer. It depends, and should be optimised per feature. It’s also a good idea to outline the concept of the feature prior to these discussions, to allow attendees to prepare. (That doesn’t mean coming to the table with a ready-baked solution – this tends to shut down engagement and can lead to awkward back-tracking.)

As I’ve mentioned, the business aren’t always the ones requesting a feature. You’re aiming for convergence of understanding through clear examples of what is needed, why it is needed (the value), and an agreement on how each team function can help deliver that value. From those examples come the acceptance criteria, and from those comes the executable specification. 3-amigos sessions are time-boxed to be short so as to maintain focus (around 30 minutes should suffice). If you’re only half-way through discussing a feature after that time, that should be a trigger to split it up. Timing is also important; a just-in-time approach works best, where the 3-amigos session happens just before a feature is actively worked on and the shared understanding is fresh in people’s minds – perhaps at most a week before, as a starting guide.

It’s a good idea to approach this with someone playing Devil’s Advocate. They play a key role in exposing edge-cases and assumptions.

To show how a team can collaborate on a feature, I have chosen to use JBehave (specifically Serenity JBehave) to illustrate how we can create executable specifications collaboratively, to foster cross-team understanding and introduce the opportunity to scale. It is important for me to note that I’m not suggesting the tooling I’ve chosen is a magic bullet. There are plenty of tools out there that allow teams to build collaboration frameworks; I’ve simply chosen one of them to make the point. It should also be noted that a team could easily use this very tooling and STILL not have a collaboration framework – as I say, we must look beyond tooling and improve our communication.

So, one aspect of JBehave is that it moves the context from a test-based one to a behaviour-based one. It facilitates the process I’ve outlined above. If the concrete examples fleshed out in 3-amigos sessions result in clear and agreed-upon scenarios, JBehave can be used to capture them. Avoid the common mistake of misusing this, or any other BDD tool, by degrading the scenarios to literal imperative scripts. Try instead to capture the conversations had during the 3-amigos session as fully-understood examples & rules, and maintain that declarative vocabulary after the session when you translate it into the ‘coded’ scenarios.
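As a quick sketch of that distinction (the wording here is my own illustration, using the blog-subscription feature discussed below, not output from any real session) – an imperative version spells out the mechanics, while a declarative version captures the intent:

    Imperative (avoid):
        When she clicks the email field in the subscription widget
        And she types 'tracy@example.com'
        And she clicks the 'Subscribe' button

    Declarative (prefer):
        When she subscribes to the blog with her email address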

e.g.

You’ve had a 3-amigos session to discuss an upcoming feature that allows the user to subscribe to a blog on your site. You’ve fleshed out who the actors are, what they need to be able to do, and discussed business rules and concrete examples. A few unanswered questions may have been drawn out as well. For example, the cards below:
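The output of such a session might look something like this (purely illustrative – your own session will produce its own rules, examples and questions):

    Story:     A blog reader can subscribe to the blog
    Rule:      A reader subscribes by providing a valid email address
    Example:   Tracy enters her email address into the subscription widget
               and receives a confirmation email
    Question:  What should happen if that email address is already subscribed?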

Take a look at John Ferguson Smart’s article on feature-mapping for more on this.

Once the session attendees have fleshed out:

  • the business rules
  • the examples that embody those rules
  • the steps taken to enact those examples and …
  • the consequences expected

…it’s a good idea to break away from the 3-amigos session before trying to translate these into an executable specification. I say this because you run the risk of losing the energy in the session if everyone remains behind to attempt this. It should take two of you at most to perform this translation.

So, using the tools I’ve selected for this exercise (again, one among many and used here just to exemplify – although the reason I chose Serenity is that it gives us amazing living documentation reports), we can go ahead and translate our mapping above into our executable specification. I often find myself referring to this as a ‘BDD test’ but I really shouldn’t call it that; as Dan North himself has said: “BDD is not about testing”; we’re automating our BDD scenarios in order to write our software to satisfy them.

e.g. our first scenario:
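A sketch of how that scenario might look in JBehave story syntax (the reader’s name and the exact wording are illustrative, not the original):

    A blog reader can subscribe to the blog

    Narrative:
    In order to be notified whenever a new post is published
    As a blog reader
    I want to subscribe to the blog

    Scenario: A blog reader can subscribe to the blog

    Given Tracy is reading the blog
    When she enters her email address into the subscription widget
    Then she receives a confirmation email for her subscription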

This lives in the test resources in a file called ‘blog.story’.

We could improve upon this by removing the reference to the ‘widget’, as this points to the implementation (the ‘how’), which is a smell. To better maintain the declarative style we want, we should concern ourselves only with the ‘what’ and remove references to the UI. But for now….

That scenario mapped to Java:
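Something along these lines, for example (a sketch – the class, method and step-library names are my own, and the annotation packages may differ between Serenity versions):

    import net.thucydides.core.annotations.Steps;
    import org.jbehave.core.annotations.Given;
    import org.jbehave.core.annotations.Then;
    import org.jbehave.core.annotations.When;

    public class SubscribeToBlogStepDefinitions {

        @Steps
        BlogReaderSteps tracy;   // our step library – Serenity injects this for us

        // TODO: parameterize the blog reader's name
        @Given("Tracy is reading the blog")
        public void tracy_is_reading_the_blog() {
            tracy.opens_the_blog();
        }

        @When("she enters her email address into the subscription widget")
        public void she_enters_her_email_address_into_the_subscription_widget() {
            tracy.subscribes_with_her_email_address("tracy@example.com");
        }

        @Then("she receives a confirmation email for her subscription")
        public void she_receives_a_confirmation_email_for_her_subscription() {
            tracy.should_receive_a_subscription_confirmation_email();
        }
    }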

We can improve on this by parameterizing the blog reader’s name, and I’ve made a comment as a note-to-self to do this later. As mentioned above, another improvement might be to remove the reference to the ‘widget’, as this points to the implementation details.

You’ll notice the ‘sentence-like’ method names; this is something that is advocated in BDD.

You can see from the step definitions above that they utilise a step library. Our step library might look like this:
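A sketch of such a step library (again, the names and assertion style are illustrative):

    import net.thucydides.core.annotations.Step;

    import static org.assertj.core.api.Assertions.assertThat;

    public class BlogReaderSteps {

        // Serenity will instantiate page-object fields like this one for us
        BlogPage blogPage;

        @Step
        public void opens_the_blog() {
            blogPage.open();
        }

        @Step
        public void subscribes_with_her_email_address(String emailAddress) {
            blogPage.enterEmailIntoSubscriptionWidget(emailAddress);
            blogPage.clickSubscribe();
        }

        @Step
        public void should_receive_a_subscription_confirmation_email() {
            assertThat(blogPage.subscriptionConfirmationEmailReceived()).isTrue();
        }
    }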

Our blog page-object could look like below (I’m ignoring my prejudice against page-objects for this article ;)):
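A possible sketch (the URL and the XPaths are placeholders I’ve made up for illustration):

    import net.serenitybdd.core.pages.PageObject;
    import net.thucydides.core.annotations.DefaultUrl;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.FindBy;

    @DefaultUrl("https://example.com/blog")   // placeholder URL
    public class BlogPage extends PageObject {

        // Illustrative XPaths – as noted below, these would only be added once
        // the SUT exists, and unique IDs would be preferable.
        @FindBy(xpath = "//*[@id='subscription-widget']//input[@type='email']")
        WebElement subscriptionEmailField;

        @FindBy(xpath = "//*[@id='subscription-widget']//button")
        WebElement subscribeButton;

        public void enterEmailIntoSubscriptionWidget(String emailAddress) {
            subscriptionEmailField.sendKeys(emailAddress);
        }

        public void clickSubscribe() {
            subscribeButton.click();
        }

        public boolean subscriptionConfirmationEmailReceived() {
            // Not implemented yet – this part of the SUT doesn't exist, so we
            // return false and the specification will fail here until it does.
            return false;
        }
    }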

You’ll notice a couple of things about the above code snippet.

Firstly, if we were following the BDD flow and creating our executable specification before the SUT – in fact using the specification to inform the SUT – then I wouldn’t be privy to those XPaths. I would add them in as the SUT progressed through development. This would mean that, until then, the spec would fail – which is fine; it’s what you want if you are following the route of using an executable spec when creating the SUT (or at least, this particular feature of the SUT). As a side-note, I would put unique IDs into the elements to avoid having to use XPaths at all.

Secondly, you’ll notice that the last method will cause the specification to fail. That’s also fine, because we haven’t implemented that part of the SUT yet, and the reports will show us our progress, with passing stages for those parts of the feature we have finished and a failure for the part yet to be done. More on that when we look at the reports at the bottom of this article.

You may ask – “won’t that mean we have a broken build until this is finished?”. If you work in a continuous deployment fashion, yes it would. But only if the work was merged in unfinished – and why would you want to do that? This work would live on a feature branch, and wouldn’t be raised as a PR until the executable specification is satisfied – i.e. a 100% pass rate.

You can run these stories in a number of ways – with Apache Ant, Maven or JUnit, or from an IDE such as IntelliJ IDEA or Eclipse, for example.

Running JBehave with a JUnit runner via Serenity can be done by extending SerenityStories; Serenity will then run any JBehave stories it finds in the default location, which is the ‘src/test/resources/stories’ folder.

e.g.
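A minimal runner might be all that’s needed (the class name here is arbitrary):

    import net.serenitybdd.jbehave.SerenityStories;

    public class AcceptanceTestSuite extends SerenityStories {
        // No body required – Serenity will find and run any .story files
        // under src/test/resources/stories by default.
    }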

I mentioned in part 1 of this series that I was using Maven. So, to run these scenarios I can do the following: clean the project and run ‘mvn verify’:

There are no tests run during the ‘test’ phase shown in the snippet above; it’s the ‘integration-test’ phase we are interested in, shown below:

You can see above that Serenity is running our blog story.

Serenity BDD gives us outstanding reporting. Rather than just reporting on what test case was executed (which might not be of interest to the business stakeholders for example), we can see what features have been implemented. This is definitely of interest to the business, who will see the direct link back to their list of deliverables. It’s of interest to developers too, as they can clock progress against the executable specification by seeing exactly what part of the SUT they’ve built that satisfies that specification, and what work is remaining. For testers the reports give a straightforward way of deep-diving into failures, which is useful for them, but also excellent for sharing that information with the business stakeholders to talk about progress.

The reports are easy to use, and are ‘living’ in the sense that each build generates new ones automatically, giving a real-time view of where the SUT is in its feature journey. For our story, the report looks like this:

Oh dear…. everything failed, right? Well no, it’s not that bad. Remember I mentioned we haven’t finished the scenario, so a failure is expected.

What we can now do is dive into the report to see what’s been delivered and what hasn’t, and hope that the failure is indeed the one we expect. So, let’s click on the test link under ‘Tests’ at the bottom of the report:

Now, that’s a lot more informative!

We can now see that we have in fact satisfied 83% of our first scenario – in other words, the bulk of the scenario is already finished.

What’s more, the report gives you screenshots (this is configurable) enabling the team to see precisely what happened at each step.

You can also see that the failure is the one we expected, as we are returning false from that step until that part of the SUT is ready for us to interact with. You can imagine this eventually being, say, an email-sender client (perhaps a lambda) that you could interrogate to see if an email was indeed sent to the address given.

If we click on the ‘Requirements’ tab in the report, we see the below information:

We could make the report even more informative by utilising JBehave’s ability to break down the ‘stories’ folder into capabilities, features and stories. To do that, you just need to add sub-folders to the ‘stories’ folder, and place your story file in a ‘capability->feature’ folder structure. Let’s do that and see what extra information we get:
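For our example, with a capability of ‘blog’ and a feature of ‘subscribe’, that structure might look like this:

    src/test/resources/stories/
        blog/                  <- capability
            subscribe/         <- feature
                blog.story     <- story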

In the resulting report, you can now see extra tabs have appeared – ‘Capabilities’, ‘Features’, and ‘Stories’. Our first tab doesn’t change, but our pre-existing ‘Requirements’ tab has: we now see a capability entry called ‘Blog’, which we can click into to see a feature called ‘subscribe’, shown below:

Clicking into the feature ‘Subscribe’ shows us our current stories for that feature, in our case, ‘A blog reader can subscribe to the blog’. Clicking on that story shows us our test, as we saw before.

This structure helps when you have many features for a given capability and many stories under those features. It offers clarity and allows for better sharing of the current situation, feeding back to all members of the team.

Once all the development is complete on our feature, and all the acceptance criteria of our executable specification are satisfied, our report will look something like this:

Here you can more clearly see how the report links back neatly to our examples and our acceptance criteria that we fleshed out in our 3-amigos session.

Summary:

Hopefully, I’ve been able to show how a team can go from separate silos, or even just dysfunctional practices, to a much more collaborative and creative approach to delivering new features.

In Part 3, I’ll give my thoughts on the processes above from my experience of following these practices, and talk a little about what I found with the tools I used.

Many thanks for reading this (rather long) 2nd part of the series! Part 3 will follow soon.

© Copyright 2022 Cognito Square Ltd
