A Deep Dive Into Types of Tests 🌊

Dec 09, 2022 3:01 pm

Happy Friday!

I'm recovering from a minor surgery this week and while I thought I'd wind up being really productive with my prescription of bed rest, no such thing happened.

Still, I was able to work from home and that is a wonderful thing for me.

As it turns out, a development team was reviewing their work with me, and it made me want to put some organization to my thoughts around what I was seeing. You see, this team was showing me how they tested their work, and I found many issues with the approaches they used. Those issues aren't unique to them, either.

So in this email I'm going to describe a few of them.

Let's start with the two major categories that people tend to talk about a lot—happy path and negative path tests.

Happy path tests are what most teams default to writing when they're not particularly skilled or motivated to write tests. These tests exist to prove that the one case the team wants to work actually works, and nothing else. So, for example, if there were a login feature, they'd have a test confirming that when you click the button, you get logged in.

Happy path tests are important, but on their own they're nowhere near adequate. They don't protect you from anything that goes wrong, anything strange, or any side effects. That login example says nothing about whether the username and password actually matched; it only confirms that you got logged in. Happy path tests are confirmation bias in code.
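To make that concrete, here's a minimal sketch of what a happy-path-only login test tends to look like. The `login` function here is a hypothetical stand-in I'm inventing so the example runs on its own; the point is what the test does and doesn't prove.

```python
def login(username, password):
    # Hypothetical stand-in for a real login implementation,
    # just so this sketch is self-contained and runnable.
    if username == "alice" and password == "s3cret":
        return {"user": username, "authenticated": True}
    return None

def test_login_happy_path():
    # Proves only that the one case we want to work does work.
    # Says nothing about wrong passwords, nulls, or side effects.
    session = login("alice", "s3cret")
    assert session is not None
    assert session["authenticated"] is True

test_login_happy_path()
```

Notice there's exactly one path through the code being exercised. Every branch where something goes wrong is unexamined.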

Negative tests, on the other hand, are all the tests that declare what your system does under unexpected or undesired scenarios. This is where most of the time and attention need to go, but that takes skill and motivation. What's interesting here is that there are lots of sub-categories you can break these down into.

  • Guard tests - These prevent things like nulls, undefined, or bad inputs. They can also be covered by raw assertions.
  • Inductive tests - These aren't good, but many people test with the theory that if X passes, then Y must also work. Y is typically the behavior you actually wanted to test.
  • Reverse negative tests - Another problematic version is a negative test written as a happy path test. This shows up as a mismatch between the test description and the assertion.
  • Failed infrastructure tests - Rare to see, but important. These cover things like what happens when the database doesn't respond.
  • Unexpected behavior tests - Users do weird things. These tests establish exactly what the system should do when that happens, because it will.
  • System inconsistency tests - These handle things such as duplicate items where there shouldn't be any, non-unique items that were supposed to be unique, or getting multiples when one was expected.
  • System launch tests - Sounds silly, but when a system first starts it needs to be in a specific condition. These tests prove that it is.
  • Stupid programmer tricks - Dates, arrays, currency, and caches are things developers mess up consistently.
  • Legal/compliance tests - Pay extra attention here. There are plenty of laws and compliance requirements that tests can help cement in place.
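A couple of the categories above can be sketched quickly. Here's what a guard test and an unexpected-behavior test might look like against the same hypothetical `login` stand-in from earlier; the names and behavior are illustrative assumptions, not a prescribed implementation.

```python
def login(username, password):
    # Hypothetical stand-in: guard clause rejects missing inputs,
    # wrong credentials yield no session.
    if username is None or password is None:
        raise ValueError("username and password are required")
    if username == "alice" and password == "s3cret":
        return {"user": username, "authenticated": True}
    return None

def test_login_rejects_null_username():
    # Guard test: a null input fails loudly instead of being accepted.
    try:
        login(None, "s3cret")
        assert False, "expected ValueError for null username"
    except ValueError:
        pass

def test_login_wrong_password_returns_no_session():
    # Unexpected behavior: a wrong password must not produce a session.
    assert login("alice", "wr0ng") is None

test_login_rejects_null_username()
test_login_wrong_password_returns_no_session()
```

Note how each test's name matches what it asserts. That's the thing a reverse negative test gets wrong: a name like "rejects bad password" sitting on top of an assertion that only checks the happy case.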

This list is mostly me getting things out of my head that I see a lot when working with teams. It may seem like a lot, but once you're aware of these categories, getting a good mix of them typically only takes minutes. Doing so eliminates so many sources of issues that it more than pays for itself: you reclaim the time your team isn't spending on bugs and debugging, and support costs drop because the software and system are far more stable and resilient.

Let me know what you think of my framing of different tests. I always love hearing from you!