A Boatload Of Red Flags In Testing

Rowan Powell
Published in Dunelm Technology
4 min read · Feb 21, 2024


If you’re anything like me, you first got into automated testing by poking around some examples and just … figuring it out. While these tests Work™, they often don’t do the best job, or become very difficult to maintain down the line, so let’s talk about the lessons I learned along the way and some habits it’s best to avoid.

When I see pausing or waiting in tests I wince, because I know exactly the kind of frustrated re-running and debugging that led to it sitting there. Just waiting a fixed amount of time for a process to finish often works in the moment, but it makes the test slower than it could be, or, even worse, means the test will break entirely if the process you’re testing takes longer than normal (even if the behaviour is fine!). Some of the most common reasons I see engineers adding these delays include:

  • Waiting for Page Load
  • Button States
  • On-Demand Data Load
  • Async process kicked off

You’ll find your tests are much more robust if you await key elements becoming visible, such as a page title or data row, or monitor the button’s “disabled” attribute, rather than guesstimating how long the API will take. Some of these ‘problems’ we can sidestep entirely by encouraging the use of optimistic UI where appropriate, which feels good for customers too!

By adding timeouts to these interactions you can also help enforce good page performance at the test level. Key metrics I’ve spoken about before, like Interaction to Next Paint, can be fostered by keeping the timeouts on your tests tight (though admittedly this does risk making them flakier, so decide where your priorities lie).
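To make that concrete, here’s a minimal, framework-agnostic sketch of the idea: poll for the condition you actually care about, with a tight timeout, instead of sleeping for a guessed duration. (Most test tools have this built in; the `checkoutButton` in the usage comment is hypothetical.)

```javascript
// A minimal polling helper: resolves as soon as the condition holds,
// rejects once the timeout passes, instead of sleeping a fixed duration.
function waitFor(condition, { timeout = 2000, interval = 50 } = {}) {
  const deadline = Date.now() + timeout;
  return new Promise((resolve, reject) => {
    const poll = () => {
      if (condition()) return resolve();
      if (Date.now() >= deadline) {
        return reject(new Error(`Condition not met within ${timeout}ms`));
      }
      setTimeout(poll, interval);
    };
    poll();
  });
}

// Usage (hypothetical element):
// await waitFor(() => !checkoutButton.disabled, { timeout: 1000 })
```

A fast process passes immediately; a slow-but-healthy one still passes, just later; and only a genuinely broken one fails, at the timeout you chose deliberately.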

Similar to timeouts that are ‘about right’, it’s not uncommon to see numbers or IDs crop up repeatedly with no real context as to where they came from. Do your best to keep these in a constant shared across the tests and test cases that use them. Keeping the value in one place and giving it a meaningful name not only helps you and other engineers read the code much faster, it also makes the tests easier to update later.
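As an illustration (all the names and values here are made up, not real Dunelm test data), a shared constants module might look like:

```javascript
// Hypothetical shared constants for a checkout test suite. The name
// records what the value means, and there is exactly one place to change it.
const TEST_USER_EMAIL = "checkout-tests@example.com"; // seeded test account
const OUT_OF_STOCK_SKU = "SKU-10031";                 // product kept at zero stock
const CHECKOUT_TIMEOUT_MS = 3000;                     // upper bound on checkout API time

// Instead of: expect(getStockLevel("SKU-10031")).toBe(0)
// write:      expect(getStockLevel(OUT_OF_STOCK_SKU)).toBe(0)
```

When the seeded account or SKU changes, one edit fixes every test that uses it.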

The magic number’s cousin is the long selector chain, and it causes similar problems. Take this long chain I found in the VS Code source for example:

.suggest-details>.monaco-scrollable-element>.body>.docs.markdown-docs>span:not(:empty){padding:4px 5px}

While this might work for covering a very specific edge case in styling the page, when testing we don’t want to rely on so many things being nested inside each other ‘just so’. We’re not testing the DOM layout, so keep your selectors focused on exactly what it is you’re looking for, such as the text on the button:

//button[text()='sign in']

Selectors like this are much less likely to break and need maintenance when the layout of the page changes but the functionality stays the same.

This last one is not quite as simple as “just don’t do it”, as striking the balance can be a bit trickier: repeated test steps, both writing them out repeatedly and running them repeatedly.

Let’s take the Dunelm website checkout as an example. I might want to test buying my cart, removing items from it, or starting the checkout process and bailing out part way. I often see these test cases written out something like this:

it("Should allow me to check out"){
logIn()
addItemToBasket()
goToBasket()
checkoutButton.click()
buyButton.click()
expect(itemsToHaveBeenBought).toBeTruthy()
}
it("Should allow me to cancel"){
logIn()
addItemToBasket()
goToBasket()
backButton.click()
expect(itemsToHaveBeenBought).toBeFalsy()
}
it("Should allow me to abort halfway"){
logIn()
addItemToBasket()
goToBasket()
checkoutButton.click()
cancelButton.click()
expect(itemsToHaveBeenBought).toBeFalsy()
}

While the details of those steps have been abstracted into something resembling Behaviour-Driven Development-style testing, if we need to change the details of getting to the basket we still have to fix it in multiple places. We have two choices: abstract those repeated steps into a larger “meta step” if they’re going to be reused across the suite, or add a beforeEach step like so:

beforeEach(() => {
  logIn()
  addItemToBasket()
  goToBasket()
})

it("Should allow me to check out", () => {
  checkoutButton.click()
  buyButton.click()
  expect(itemsToHaveBeenBought).toBeTruthy()
})

it("Should allow me to cancel", () => {
  backButton.click()
  expect(itemsToHaveBeenBought).toBeFalsy()
})

it("Should allow me to abort halfway", () => {
  checkoutButton.click()
  cancelButton.click()
  expect(itemsToHaveBeenBought).toBeFalsy()
})

All of these test cases are still going to re-test navigating the website, however! Logging in, adding an item to the basket and waiting for pages to load adds up to a lot of wasted test time. You should also consider that if adding items to the basket is broken, we’re going to fail three different tests for one feature! If you spot a test suite like this, it might be worth thinking about whether the data for the test can be set up another way, such as via an API or a browser cookie.
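As a sketch of that idea, the setup could seed state through a test-only API rather than driving the UI each time. Everything here (`testApi`, `createSession`, `addItems`) is hypothetical, not a real Dunelm endpoint:

```javascript
// Sketch: seed basket state through a test-only API instead of clicking
// through the UI in every test. `testApi` and its methods are hypothetical.
async function seedBasket(testApi, userEmail, skus) {
  // One round-trip replaces logIn() + addItemToBasket() + goToBasket()
  const session = await testApi.createSession(userEmail);
  await testApi.addItems(session.basketId, skus);
  return session; // each test starts from a ready-made basket
}

// beforeEach(async () => {
//   session = await seedBasket(testApi, TEST_USER_EMAIL, ["SKU-123"]);
//   await page.goto(`/basket?session=${session.token}`);
// });
```

Navigation then gets tested once, in its own dedicated test, and a broken “add to basket” fails one test instead of three.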


I’m Rowan, a tech lead at Dunelm, writing stories about the intersection of Engineering, Automation and Psychology.