If You're Not Doing Acceptance Test-Driven Development, You're Not Really Agile

BetterCloud

February 25, 2015

8 minute read


Adam Satterfield, BetterCloud’s VP of QA

This article is based on a webinar from our VP of QA, Adam Satterfield, hosted by QASymphony. You can watch it here.

Last week, BetterCloud completed a truly epic release: new editions of BetterCloud for Google Apps, a completely new landing page framework within the tool, and—most significantly—a major new solution for Drive compliance and data loss prevention.

And all of that happened on the same day we launched the private beta for an Office 365 version of BetterCloud.

The project was a real monster. It took more than six months and involved 35 people working across five teams. Yet it took exactly the amount of time we thought it would, and the final product exceeded every expectation: After verification, we found only three items that required follow-up, all of which were trivial.

“If someone had told me a year ago that this release would go as smoothly as it did, I would have called them a liar,” says Kevin Skibbe, our VP of engineering.

We owe a big part of that win to acceptance test-driven development. Here’s how we did it—and how you can, too.

What is Acceptance Test-Driven Development, and Why Should You Care?

Agile is great, right? Better transparency, more collaboration, and more predictable delivery, costs, and schedule. So why does the QA process still hurt?

If you’ve spent any time in a development environment, you’ve probably felt the pain points:

  • QA is kept out of the loop while requirements and user stories are created, and the automation team tends to be the last group brought up to speed. That means automation can only write scripts for regression testing, so it helps with the next sprint, not the current one.
  • Acceptance tests don’t start until some time after the sprint begins, creating a spike in effort as testers scramble to understand the functionality. QA wastes critical testing time in the middle of the sprint figuring out what it’s supposed to be testing.
  • Dev and QA are out of sync. Dev writes its own unit and integration tests, QA writes its own acceptance and system tests, and the two don’t work together.
  • No one actually knows for sure when QA is done testing.

“Before, there was constant back-and-forth between engineering, QA, and product to make sure that what we built was what they asked for,” says Kevin.

That was test-driven development: you write unit tests to check individual pieces of code, and integration tests to see how that code works with third-party software.

Then we decided to shift to acceptance test-driven development. While test-driven development tests the code, acceptance test-driven development tests the product. It outlines what the user should be able to do, defines when acceptance criteria are “done,” and relies on the core principles of agile by enabling communication between the business and engineering, and between dev and QA.

“With ATDD, all that back-and-forth is no longer necessary because everyone goes into the process with a shared understanding of what we want to build,” Kevin says.

Sprint Planning and User Stories

Implementing acceptance test-driven development starts with changing the way you plan your sprints. If someone is working a full day, don’t just book that whole day for current sprint work. You need to start working on planning for the next sprint ahead of time.

A 15-day sprint schedule, with color-coded daily tasks for developers and functional analysts (coding, code reviews, bug fixes, user story completion, test case preparation, QA sessions, and system integration), punctuated at regular intervals by sprint planning, sprint reviews, and internal demos.

A typical BetterCloud sprint. Note that “sprint planning” starts on day two.

At BetterCloud, we do three-week sprints, but we start planning for the next sprint as soon as we hit day two of the current sprint. That’s because we need to start defining the user stories, acceptance criteria, and acceptance tests for the next sprint so we know what our backlog looks like.

A critical sprint-planning tool, user stories give you a simple way to explain a business need while leaving development to choose the best technical approach. We use the Mountain Goat Software model, which follows an “As…I want to…so that” format. For example:

As a paying customer,

I want to have the ability to update my billing information,

so that I can keep my subscription current if my information changes.

User stories make it possible to explain the business need, while allowing the flexibility for the functionality to be developed in the most efficient way. They allow your team to discuss the functionality without requiring the business to define detailed requirements.

Acceptance Criteria

If user stories tell you what you have to do, acceptance criteria tell you when you’re done.

They:

  • Define the system behavior.
  • Ensure features work as expected.
  • Help the team gauge the amount of work needed to complete a story.
  • Guide development and QA testing.

Here are some examples:

  • “There should be a navigation element the user can select from to update their billing information.”
  • “The user should have the ability to update either their billing address or their payment methods for the recurring payment.”
  • “A notification should be shown to the user stating that changes will not go into effect until next billing cycle.”

Because they’re written in plain English and are easily understandable by the layperson, these acceptance criteria help improve communication and limit confusion between business and development.

Acceptance Testing

Once you’ve completed the user stories and acceptance criteria, it’s time to design your acceptance tests. The “Given-When-Then” format, borrowed from behavior-driven development, is a tried-and-true way to organize them.

  • Given sets the state.
  • When describes the action the user takes on the system.
  • Then describes the system’s reaction to that action.

Example:

Given the customer has navigated to the billing page,

when the customer clicks “Update Billing Information” followed by “Change Address,”

then the customer is presented with a form to enter a new address.

“In other words,” explains VP of QA Adam Satterfield, “The given clause sets the stage for the initial scenario being tested, or the stage the user is at before testing. The when clause describes the action the user performs on the system. And the then clause describes the system’s reaction to that action.”

Workflow

Once you’ve defined your user stories, acceptance criteria, and acceptance tests, creating a workflow is pretty simple, Adam explains.

“Once we’ve put together the user stories, we enter them into a spreadsheet or just put them up on a whiteboard. This gives us an idea of the exact functionality requested by the business,” he says.

Next, we enter the user stories and acceptance criteria into Jira. Then we create subtasks, which consist of:

  • The acceptance tests.
  • The individual development tasks involved.
  • The automation tasks involved.

“From there, we link that user story in qTest,” Adam says, “and that’s where we write the ‘given-when-then’ tests so we can track them for each of our sprints.”

We use Jira and qTest for our development-tracking workflow.

Use Your Testers’ Knowledge of the Product

No one spends more time in your product than the QA testers, so it makes sense to use them as your subject-matter experts.

“When they’re brought into the process early, and when they help us define the acceptance criteria and user stories, they really shine as SMEs,” Adam says. “It gives them the opportunity to share their knowledge and to help the team understand the many ways a user navigates the system.”

“Before we shifted to acceptance test-driven development, we suffered a spike in effort at the start of every sprint. Because QA wasn’t included early in the process, we had to create our test cases and do the actual testing at the same time. This meant we only had the capacity to run through a few basic acceptance criteria.”

“Now, we work with the business and development teams to do that work up front. We’re able to focus on creating more complex test scenarios, like UAT-type tests with the business, or security, performance, or load testing with development.”

Improvements for Automated QA

From there, acceptance test-driven development makes it easy to design your test automation.

“We used Groovy, along with the Spock framework, to write simple, readable test specifications that follow the Given-When-Then format,” says test automation engineer Ryan Cheek. “This way, we can easily translate the code over from our acceptance criteria and back.”

In short, we’re taking the tests—in Given-When-Then format—that were created during sprint planning, and applying them directly to our automation. This means that as soon as the functionality has been delivered, the automation team can start creating their test cases.

“We’ve brought the automation team into the sprint planning so they have the ability at that point to start creating their given-when-then cases,” says Adam. “Then, using this framework, it directly translates into the automated scripts.”

A Groovy script, using the Geb and Spock frameworks, to test navigation to our alerts threshold page, as well as correct alert placement on those pages. The “given-and-expect” format translates directly from the “given-when-then” format for user requirements.

Outside-the-Box Automation

“I’ve been hearing since the late 90s that test automation is going to make manual testing go away,” Adam says. “I don’t subscribe to that. I believe you get the most bang for your buck having the manual testers and automation testers working together.”

“If you can break out of your tool, and if you can stop thinking automation should only be used for regression scripts or direct functionality testing, you’re going to gain a lot of efficiency.”

At BetterCloud, we’ve seen huge benefits from bringing the automation team in early in the sprint planning process. For example, as part of our recent rebranding we had to change “FlashPanel” (the product’s old name) to “BetterCloud” (its new name) throughout the application. The initial estimate was that it would take our manual testers three weeks to comb through it all.

DOMinate, the tool our automation team came up with to speed up manual testing.

Instead, our automation team created a Chrome extension, DOMinate, that would comb through a page’s DOM to locate and throw a red box around each occurrence of the search terms. It also provided a context menu item that testers could use to easily upload the results, along with the current URL, to a spreadsheet.
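The heart of such an extension is simple to sketch. Below is a hypothetical, simplified version of the matching logic in plain JavaScript: the real tool walks the page’s DOM and outlines each match in red, while this sketch models the page as an array of text strings so the idea is runnable anywhere.

```javascript
// Scan "text nodes" for search terms and report every occurrence.
// In the real extension, each hit is where a red box gets drawn.
function findTerms(textNodes, terms) {
  const hits = [];
  textNodes.forEach((text, nodeIndex) => {
    for (const term of terms) {
      let from = 0;
      let at;
      while ((at = text.indexOf(term, from)) !== -1) {
        hits.push({ nodeIndex, term, offset: at });
        from = at + term.length; // continue past this match
      }
    }
  });
  return hits;
}

const hits = findTerms(
  ['Welcome to FlashPanel', 'FlashPanel settings'],
  ['FlashPanel'],
);
console.log(hits.length); // → 2
```

Collecting structured hits (node, term, offset) rather than just highlighting also makes the upload-to-spreadsheet feature straightforward: each hit plus the current URL is one row.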

“Creating it took the automation team about four days,” Adam says, “and it ended up reducing the manual test team’s level of effort by 75 percent.”

Summing Up

Sure, it’s not all gravy. Like any worthwhile workflow change, implementing acceptance test-driven development requires getting everyone on your team on board. Usually dev and QA will grasp the benefits pretty quickly; the business team might need some gentle persuasion.

Apart from buy-in, one of the challenges you’ll face is shifting from a classical requirement document to working on user stories—something QASymphony Product Specialist Kevin Dunne explains how to do in this blog post.

In the end, your adoption of acceptance test-driven development is most likely to succeed if you remember these tips:

  • Include the test automation team early and often. This is especially important for an organization that uses both automated and manual testers; because of their development knowledge, the automated testers may understand the situation a little bit differently.
  • Try not to “think inside the tool.” What efficiencies can you gain by trying something different? If you’re in a Windows environment, can you do some scripting with PowerShell? If you’re using Chrome to do your testing, can you use a Chrome extension?
  • Focus on communication and collaboration. As you’re formulating the user stories, get in the room with the business team and the development team. If your team isn’t talking, it’s going to be difficult to implement either agile or acceptance test-driven development.
  • Make sure your manual and automated testers work together. Have them work together on the acceptance criteria and acceptance tests. This adds efficiency, Adam says, “because the given-when-then format works for both automated and manual test cases. This means you only have to write the test cases once, and then you can use them for both types of tests.”

Want to work at a company where biz and dev are best buds, and where you get to spend your time solving actual problems with people who actually care? Well, lucky you—we’re hiring.
