The QA mindset: designing for reliability

Fastly’s engineering teams are smart and capable — they architect thoughtfully, write elegant code, and work carefully with incredible complexity and scale. So why would they (or anyone) need quality assurance (QA)? QA professionals approach their work with a specific perspective — a way of thinking that is equal parts rigid and flexible. QA creates tests that ensure our platform is reliable, well behaved, and dependable, in turn enabling our customers to provide secure and reliable experiences for their users. Although QA is often the last gate before software is released, incorporating QA thinking from the start of the process builds reliability into the system — saving time and effort instead of fixing after the fact. In this post, I’ll examine how this mindset works, touching on our approach to QA at Fastly and sharing how you could apply the QA mindset to your organization.

Methodology

The goal of testing any piece of software is to reach a point of reliability for your customers. Reliability means that your project does what it promises, while also behaving reasonably in difficult or unexpected circumstances. Can we ensure that it is reliable before it reaches users? Reliability can be built in advance through attention to four areas of testing focus.

[Figure: the four pillars of reliability — correctness, error resilience, performance, and robustness]

Each of these pillars — correctness, error resilience, performance, and robustness — requires careful and thorough investigation on an ongoing, and ideally automated, basis. I apply this methodology when facing any new project. If I’ve effectively tested in each area of focus, I can be confident that the project is ready for release and will behave reliably.

With this model in mind, QA creates suites of tests to fully explore each area. But how can we write tests that accomplish that goal? It’s a two-part process. Start from concrete inputs matched with defined outputs, developing a clear and complete picture of feature behavior. From there, mix in reasonable and even unreasonable scenarios to creatively exercise the software to its breaking point.

Part 1: Getting clarity

Consider a request for a new feature for an existing API:

Request: be able to set a variable ‘user’ provided a username
Usernames are strings and must not be the empty string.
Desired functionality:
http://myexample.sample/set?user=username

That looks easy to understand and quick to code up. If you are a conscientious developer you might even throw in a couple of unit tests: one to ensure that a submitted username is correctly added, and one to verify that an empty string is rejected as a username.
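
For instance, here’s a minimal sketch of those two tests in Python (pytest style), assuming a hypothetical test deployment reachable at the example URL and an assumed 400 response for rejected input:

```python
import requests

# Endpoint from the feature request; assumes a reachable test deployment.
BASE_URL = "http://myexample.sample/set"

def test_set_valid_username():
    # A well-formed username should be accepted.
    resp = requests.get(BASE_URL, params={"user": "alice"})
    assert resp.status_code == 200

def test_reject_empty_username():
    # The empty string must be rejected; 400 is an assumed error code.
    resp = requests.get(BASE_URL, params={"user": ""})
    assert resp.status_code == 400
```

Given the functionality requested, here’s a list of questions to ask: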

  • What’s the minimum number of characters in a username?

  • What’s the maximum number of characters in a username?

  • What characters are allowed in a username? Are any disallowed?

  • Can I add the same username twice? Is uniqueness enforced?

  • Can a username contain Unicode?

  • Are there any protected usernames (e.g., ‘admin’ or ‘user’)?

  • How many requests can be made at once?

  • How many usernames can be stored?

  • Are error messages provided and are they understandable?

Essentially, we want to know exactly what the desired inputs and outputs are. The original request is mostly untestable because it’s vague about what constitutes desired behavior in each area of testing focus. Having a QA mindset means approaching a problem with clarity and making decisions about program behavior on purpose instead of as a side effect. Once code is out in the world, its current behavior becomes the default, and it can be difficult to roll back or replace. Write code that reflects what you want, not what you expect.

Creating a reasonably complete specification for your project also, as a side effect, creates a list of testing targets. Each specification (an input with matched output) should result in either unit or integration-level tests that ensure appropriate outputs for each behavior.
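
As a sketch of what those testing targets can look like, a parametrized test maps each input/output pair from the specification onto a single assertion. Every limit and status code below is an assumption standing in for a real spec decision:

```python
import pytest
import requests

BASE_URL = "http://myexample.sample/set"

# Each row is one specification: an input paired with its expected output.
# The length limits, Unicode rule, and protected-name rule are assumptions.
SPEC = [
    ("a", 200),         # assumed minimum length: 1 character
    ("x" * 64, 200),    # assumed maximum length: 64 characters
    ("x" * 65, 400),    # one character past the assumed maximum
    ("", 400),          # empty string rejected, per the original request
    ("héloïse", 200),   # assumed: Unicode usernames are allowed
    ("admin", 403),     # assumed: protected usernames are refused
]

@pytest.mark.parametrize("username,expected_status", SPEC)
def test_username_specification(username, expected_status):
    resp = requests.get(BASE_URL, params={"user": username})
    assert resp.status_code == expected_status
```

The value of this shape is that when a spec question gets answered differently, say a new maximum length, the change is one row in the table rather than a new test function.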

Part 2: Getting creative

Being clear on design specifications and ensuring that they are met will only get you so far — you have to approach testing with an open and creative mind.

When provided with a new feature that requires testing, I like to think of it as a brand-new toy — like a shiny new red rubber ball, something that I can bounce and catch and throw. It meets all my expectations of a red rubber ball, but that doesn’t mean that it passes QA (or that it’s ready for our customers).

Does it bounce at night, or will I lose it in the dark? Will it bounce the same tomorrow, or in a month, or a year? Can I use it with other toys I already have, like catcher’s mitts or baseball bats? How many bounces until it breaks completely? Can I cut it in half? If I cut it in half can I glue it back together? Does it bounce on carpet or grass? Is it toxic if I try to eat it?

These situations aren’t covered by basic functionality specifications, but all are reasonable to consider because they represent real user behaviors. Customers don’t always treat features in ways that we expect or want — but if these situations are possible, then they can, and will, happen in the wild.

Red bouncy ball                                     | Real test scenarios
Does it work at night?                              | Does it function during service outages?
Will it bounce as high in a year as it does today?  | Does performance degrade over time?
Can I use it with other toys I already own?         | Can it operate with legacy tooling?
How many bounces before it breaks?                  | What’s the maximum uptime before a required shutdown/restart?
Can I cut it in half and glue it back together?     | Are crashes recoverable?
Can it bounce on carpet or grass?                   | Does it work on under-specced hardware?
Is it toxic if I eat it?                            | If it’s used in an unexpected way, is the damage recoverable?

This sort of brainstorming is fundamental to determining a set of integration-level tests that ensure the end user receives a product that doesn’t disappoint. The more time you spend thinking about these scenarios, the better you will become at seeing them before you even start coding — designing for reliability and responsiveness instead of fixing after the fact.
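
As one sketch of how a brainstormed scenario becomes a test, here’s the “how many bounces until it breaks?” question turned into a concurrency check. The thread count, request volume, and accepted status codes are all illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "http://myexample.sample/set"

def set_user(i):
    return requests.get(BASE_URL, params={"user": f"user{i}"}).status_code

def test_concurrent_requests_fail_cleanly():
    # Hammer the endpoint from 50 threads at once.
    with ThreadPoolExecutor(max_workers=50) as pool:
        statuses = list(pool.map(set_user, range(500)))
    # Reliable behavior under load means every response is a deliberate
    # success or a well-formed rejection (e.g., 429 rate limiting), never
    # an unhandled 500.
    assert all(status in (200, 400, 429) for status in statuses)
```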

It’s an urge towards creative destruction that makes for a quality tester. I often hear from developers that I misuse and abuse their projects — but, in the end, we release reliable products.

Finally

Now that you’re thinking like a QA engineer, you’ll insist on clarifying the corners and edge cases of each piece of project functionality, and then try your hardest to overload, sabotage, or otherwise destroy the final product. Yes, this takes time — but the end result is a reliable product with fewer failures, saving you from maintenance and hotfixes and providing your end users with dependable functionality.

Alice Nodelman
Senior QA Automation Engineer

For over a decade, Alice Nodelman has specialized in automated testing of hard-to-test software. She believes that software should keep its promises. She has mocked out online marketplaces, generated giant databases, replicated web browsing, and turned clouds of virtual and real machines into armies of testing drones. Currently a Senior QA Automation Engineer with Fastly, she is lead developer of in-house testing solutions, including image optimization verification and ensuring correctness of certificate management. Someday she will automate everything.
