
We all know that we should test our code.

We know that we should have high test coverage. We've been told about having tests that will catch anything that breaks when we refactor our code. We've been told that we should do test driven development, that we need unit tests and integration tests and acceptance tests, and that we need to run all these tests automatically.

But then we start writing code. We feel out what we need to write, and by the time we know what we're doing we have half a codebase, and test driven development seems like a pipe dream. We've got some tests to make sure that hunks of our code do what we expect in some cases. Then, if we have time, it's the slog of figuring out what tests to write, generating all the test data, and figuring out what to assert for each case.

It's slow, it's frustrating, and it can feel like groping in the dark.

And then a dependency makes your tests only work on your machine.
…or you find yourself putting sleep statements everywhere.
…or tests start failing because someone changed a string to lowercase.

Then you have to test error modes in a concurrent system, and you start tearing your hair out.

And most of the advice isn't that helpful…

When you start looking for help on this, most of what you find is information about continuous integration, or arguments about unit tests versus integration tests, or whether test driven development is the One True Way™. And all you want to know is how to avoid spending two hours writing boilerplate to test ten lines of code.

Unfortunately, unless you're lucky enough to have a mentor to show you how to test effectively with a lot less suffering, it can take years of reading and experimenting to sort out the wheat from the chaff. Worse, some of the most effective techniques are in obscure academic literature, fringe programming communities, or outside of computer science entirely.

It doesn't have to be this painful.

Picking test cases takes only a few minutes once you have a few heuristics, and there are techniques that make generating test data vastly simpler. A few coding guidelines dramatically shrink the number of tests you have to write. Testing concurrent systems is never easy, but it can be relatively straightforward.

I've distilled what I've learned over the past twenty years of deploying software, combined it with material from my training as a theoretical physicist, techniques from my work as a statistician, and techniques dredged from the academic literature. The result is ten lessons with concrete examples, exercises, and working code for all the examples.

You can expect to learn:

How to quickly select test cases.
Once you know a set of guidelines for generating test cases for single parameters and how to combine them, creating a test plan goes from slow groping in the dark to a series of straightforward steps. A few more techniques let you shrink the resulting test plan if it's unmanageably large. Stop choosing between writing tests and moving fast.
How to write tests more quickly.
Most tests have the same form: generate test data, use it to exercise a function, and assert that the result is correct. It doesn't have to be clever, and a few templates and tricks remove most of the boilerplate and gotchas from this kind of code (see the first sketch after this list).
How to generate test data quickly.
Learn how to use random generation and techniques from property-based testing to mitigate the drudgery of writing nontrivial test data, including how to generate things like Unicode text and complex data types (see the second sketch after this list).
How to write robust assertions.
It's so easy just to check that a value is returned, not that it's correct, especially for complicated values. A few tricks make it much easier to avoid this trap. And for truly intricate situations, we can assert invariants of a system, which are more robust to small changes than checking detailed values (see the third sketch after this list). Thinking in invariants is unfamiliar to most programmers, so we provide a gentle introduction to how to think this way and use invariants in tests.
How to test live systems and concurrency.
Deep testing of concurrent systems with tools like Jepsen is subtle and complicated, but most of the concurrent situations we test—timeouts, retries, limited sequences of events—yield to a set of much simpler techniques.
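
To make the first of these concrete, here is a minimal sketch of the generate-exercise-assert shape in Python, using pytest-style test functions. The median() function is a hypothetical stand-in for whatever code you are testing, not an example from the course.

    def median(values):
        """Return the middle value of a non-empty list of numbers."""
        ordered = sorted(values)
        mid = len(ordered) // 2
        if len(ordered) % 2 == 1:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    def test_median_of_odd_length_list():
        values = [3, 1, 2]          # generate test data
        result = median(values)     # exercise the function
        assert result == 2          # assert the result is correct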
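
Here is a sketch of randomly generated test data using Hypothesis, a popular property-based testing library for Python. The round-trip and ordering properties are illustrative assumptions, not the course's own examples.

    from hypothesis import given, strategies as st

    @given(st.text())  # st.text() generates Unicode strings by default
    def test_utf8_round_trip(s):
        # Encoding and then decoding a string should give back the original.
        assert s.encode("utf-8").decode("utf-8") == s

    @given(st.lists(st.integers()))  # arbitrary lists of integers
    def test_sorted_output_is_ordered(xs):
        result = sorted(xs)
        # Each element should be no larger than the one that follows it.
        assert all(a <= b for a, b in zip(result, result[1:]))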
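
And here is a sketch of asserting invariants instead of exact values. The ShoppingCart class and its invariants are hypothetical, chosen only to show the shape of the idea.

    class ShoppingCart:
        def __init__(self):
            self.items = {}  # item name -> quantity

        def add(self, name, quantity=1):
            self.items[name] = self.items.get(name, 0) + quantity

        def remove(self, name, quantity=1):
            remaining = self.items.get(name, 0) - quantity
            if remaining > 0:
                self.items[name] = remaining
            else:
                self.items.pop(name, None)

    def check_invariants(cart):
        # Properties that must hold after every operation, whatever the
        # exact sequence of adds and removes was.
        assert all(qty > 0 for qty in cart.items.values())
        assert all(isinstance(name, str) for name in cart.items)

    def test_cart_invariants_hold_across_operations():
        cart = ShoppingCart()
        for op, name, qty in [("add", "apple", 2), ("add", "pear", 1),
                              ("remove", "apple", 1), ("remove", "pear", 5)]:
            getattr(cart, op)(name, qty)
            check_invariants(cart)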

Learning how to test your code efficiently and effectively pays off fast. How many functions or methods do you write in the course of a week? How much faster would you be if writing those tests took half as long?

Lessons in the course

  1. Getting started
  2. Formal methods are your friend
  3. Choosing test conditions
  4. Multiparameter functions
  5. Testing recursive problems
  6. Structs and records
  7. More complicated boundaries
  8. “Hidden” parameters
  9. Setting up implicit state
  10. Randomized data
  11. Where next…

Before you buy…

COVID-19 Note: We're in a pandemic. A lot of people need work, and polishing your skills is a good step. While COVID-19 continues to ravage our world, use the code covid to purchase the course for $9.99.
