We all know that we should test our code.
We know that we should have high test coverage. We've been told to have tests that catch anything that breaks when we refactor our code. We've been told that we should do test-driven development, that we need unit tests and integration tests and acceptance tests, and that we need to run all these tests automatically.
But then we start writing code. We feel out what we need to write, and by the time we know what we're doing we have half a codebase, and test-driven development seems like a pipe dream. We've got some tests to make sure that hunks of our code do what we expect in some cases. Then, if we have time, comes the slog of figuring out what tests to write, generating all the test data, and deciding what to assert for each case.
It's slow, it's frustrating, and it can feel like groping in the dark.
And then a dependency makes your tests only work on your machine.
…or you find yourself putting sleep statements everywhere.
…or tests start failing because someone changed a string to lowercase.
Then you have to test error modes in a concurrent system, and you start tearing your hair out.
And most of the advice isn't that helpful…
When you start looking for help on this, most of what you find is information about continuous integration, or arguments about unit tests versus integration tests, or whether test-driven development is the One True Way™. And all you want to know is how not to spend two hours writing boilerplate to test ten lines of code.
Unfortunately, unless you're lucky enough to have a mentor to show you how to test effectively with a lot less suffering, it can take years of reading and experimenting to separate the wheat from the chaff. Worse, some of the most effective techniques are in obscure academic literature, fringe programming communities, or outside of computer science entirely.
It doesn't have to be this painful.
Picking test cases can take only a few minutes if you have a few heuristics, and there are techniques that make generating test data vastly simpler. A few coding guidelines dramatically shrink the number of tests you have to write. Testing concurrent systems is never easy, but it can be relatively straightforward.
I'm working on distilling what I've learned over the past twenty years of deploying software, combining it with material from my training as a theoretical physicist, techniques from my work as a statistician, and methods dredged from a taste for old academic literature.
When it's done, you can expect to learn:
- How to quickly select test cases.
  Once you know a set of guidelines for generating test cases for single parameters and how to combine them, creating a test plan goes from slow groping in the dark to a series of straightforward steps. A few more techniques let you shrink the resulting test plan if it's unmanageably large. Stop choosing between writing tests and moving fast.
- How to write tests more quickly.
  Most tests have the same form: generate test data, use it to exercise a function, and assert that the result is correct. It doesn't have to be clever, and a few templates and tricks remove most of the boilerplate and gotchas from this kind of code.
- How to generate test data quickly.
  Learn how to use random generation and techniques from property-based testing to take the drudgery out of writing nontrivial test data, including how to generate things like Unicode text and complex data types.
- How to write robust assertions.
  Making assertions about invariants of a system is more robust to small changes than checking detailed values, but thinking in invariants is unfamiliar to most programmers. Get a gentle introduction to how to think this way and use invariants in tests.
- How to make your code more testable.
  "Testable" code is often treated as a matter of opinion and taste. Learn the simple math that determines which coding practices are more testable. Handily, the resulting code is usually simpler to work with in general.
- How to test live systems and concurrency.
  Most arguments about unit versus integration testing come down to the difference between live, complicated dependencies and isolation. It turns out that a few general techniques cover most of the differences.
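The generate/exercise/assert shape mentioned above can be sketched in a few lines. This is a minimal illustration, not material from the course; the `median` function is a made-up stand-in for whatever you are testing:

```python
def median(xs):
    """Hypothetical function under test: the median of a nonempty list."""
    s = sorted(xs)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]
    return (s[n // 2 - 1] + s[n // 2]) / 2

def test_median_odd_length():
    # 1. Generate (here: hand-pick) test data.
    data = [3, 1, 2]
    # 2. Use it to exercise the function.
    result = median(data)
    # 3. Assert that the result is correct.
    assert result == 2

def test_median_even_length():
    data = [4, 1, 3, 2]
    assert median(data) == 2.5

test_median_odd_length()
test_median_even_length()
```

Nearly every test you write fits this three-step template; the variation is in how you generate the data and what you assert.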
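As a taste of random test-data generation, here is a sketch using only the Python standard library (a dedicated property-based testing library would do much more). The generator and the round-trip property are my own illustrative choices, not the course's:

```python
import random
import unicodedata

def random_unicode_text(rng, max_len=20):
    """Generate a random string of assigned, non-control Unicode characters."""
    chars = []
    for _ in range(rng.randint(0, max_len)):
        ch = chr(rng.randint(0x20, 0x10FFFF))
        # Skip surrogates (Cs), unassigned (Cn), and control (Cc) code points.
        if unicodedata.category(ch) in ("Cs", "Cn", "Cc"):
            continue
        chars.append(ch)
    return "".join(chars)

rng = random.Random(42)  # fixed seed so any failure is reproducible
for _ in range(100):
    s = random_unicode_text(rng)
    # Property: encoding to UTF-8 and decoding back is lossless.
    assert s.encode("utf-8").decode("utf-8") == s
```

A hundred random strings like this will exercise far more of a function's input space than a handful of hand-written ASCII examples, at almost no extra effort.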
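And as a small example of asserting invariants rather than exact values (again my own sketch, using sorting as the stand-in system): instead of pinning the output to one expected list, check properties that any correct result must have.

```python
from collections import Counter

def check_sort_invariants(xs, sorted_xs):
    # Invariant 1: the output is in nondecreasing order.
    assert all(a <= b for a, b in zip(sorted_xs, sorted_xs[1:]))
    # Invariant 2: the output is a permutation of the input
    # (same elements, same multiplicities).
    assert Counter(sorted_xs) == Counter(xs)

data = [5, 3, 8, 3, 1]
check_sort_invariants(data, sorted(data))
```

These checks keep passing if the sort implementation changes, and they pair naturally with randomly generated inputs like the ones above.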
Subscribe to my mailing list to be contacted about the course when it's ready:
I won't spam you and I won't be the reason anyone else does either.