I like writing tests. I like having tests written. I like thinking about writing tests, making them better, making my code better, making myself better at writing tests. It wasn't always a smooth learning process, but I'm better off having gone through it. I feel comfortable writing all kinds of automated tests, and I'm not sure I could have gotten here without a few bumps in the road.
I've been talking a lot about tests lately, so I figured I would write down some thoughts.
Unit, integration, end-to-end, regression, contract, smoke, fuzz... so many types of tests, so much disagreement over definitions. What's a unit? Is it always a single function, or could it be a collection of functions that form a single unit of functionality? What's the point at which integration becomes end-to-end? You might have clear definitions in your mind, but in practice I've found a lot of overlap between many of these categories.
Guillermo Rauch has a now-famous tweet:
Write tests. Not too many. Mostly integration.
This resonates with me, but of course the thread is overrun with comments looking for a strict definition of "integration tests."
Write the tests that make sense for the task. If you're writing some heavy logic, write some unit tests around it (and write that code so that it can be unit tested). That code probably doesn't live in isolation, so have some tests that make sure your component parts are wired up properly. You probably don't want to run through every branch of logic by hand every time you make a change, so you might want some tests at different levels. Writing UI? React Testing Library is a great tool. Maybe add Cypress if you're concerned about the overall user flow.
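To make that concrete, here's a rough sketch of the first case: a couple of Jest-style unit tests around a small piece of branching logic. The applyDiscount function and its rules are hypothetical, invented just for illustration.

```ts
// applyDiscount.test.ts
// In a real project this function would live in its own module and be imported;
// it's defined inline here (and entirely made up) to keep the sketch self-contained.
function applyDiscount(subtotal: number, code?: string): number {
  if (subtotal < 0) throw new Error('subtotal cannot be negative');
  if (code === 'SAVE10') return subtotal * 0.9;
  if (code === 'FREESHIP' && subtotal >= 50) return subtotal - 5;
  return subtotal;
}

describe('applyDiscount', () => {
  it('returns the subtotal unchanged when no code is given', () => {
    expect(applyDiscount(100)).toBe(100);
  });

  it('takes 10% off with SAVE10', () => {
    expect(applyDiscount(100, 'SAVE10')).toBeCloseTo(90);
  });

  it('only applies FREESHIP to orders of 50 or more', () => {
    expect(applyDiscount(40, 'FREESHIP')).toBe(40);
    expect(applyDiscount(60, 'FREESHIP')).toBe(55);
  });

  it('rejects negative subtotals', () => {
    expect(() => applyDiscount(-10)).toThrow();
  });
});
```

Each it block reads as a small statement about behavior, which is also what makes a suite like this useful as documentation later.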
Don't go overboard, just write tests that make you confident your code works.
I used to be much more dogmatic about it, and you know what? It really only got in the way. Write tests to ship code. Optimize for the areas you don't want to manually test again and again. Optimize for change.
I've quoted it before, but heck it's worth quoting again:
I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence ... If I don't typically make a kind of mistake ... I don't test for it. ... When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
Writing new code is a great time to write tests. I often write tests first. Even if it's not strict TDD, the test-first flow is comforting to me: Write a test that calls the function you want to write. Think about the API you want. Write a clear description of what you want to happen. Run the test, watch it fail. Start watch mode, and work on your feature until the test turns green and the feature does exactly what you said it would. Clean it up, keeping the test green.
This makes you think about the feature as a user first. It makes you think about the interface before the implementation. The What instead of the How. I think that's a big part of why people say tests make you write cleaner code.
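Here's a minimal sketch of that loop, with a hypothetical slugify helper standing in for the real feature. The test comes first and pins down the API I want; the implementation exists only to make it pass.

```ts
// slugify.test.ts: written first, so it fails until slugify exists and behaves as described.
import { slugify } from './slugify';

describe('slugify', () => {
  it('lowercases text and joins words with hyphens', () => {
    expect(slugify('Hello World')).toBe('hello-world');
  });

  it('drops characters that are not URL-safe', () => {
    expect(slugify("What's New?")).toBe('whats-new');
  });
});

// slugify.ts: written second, in watch mode, until the suite goes green.
export function slugify(input: string): string {
  return input
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, '') // keep only letters, digits, whitespace, and hyphens
    .trim()
    .replace(/\s+/g, '-');        // collapse runs of whitespace into single hyphens
}
```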
Do I always write tests first? No - if I'm working on a proof-of-concept where I don't know yet exactly what the functionality should be, I'll be changing and iterating and tests can get in the way. But as soon as a picture starts to form I write some high-level tests that cover the interaction of a few functions. I guess this is somewhere between a unit and integration test, but again I try not to go overboard with categorization.
When you include tests with new code, it signals to anyone reviewing your code that you know it works. They don't have to ask if you made sure it does what it should, the tests speak for themselves (especially with good descriptions).
Making a change in part of a codebase without tests gives me the willies. When I'm in that situation I'll almost always write tests before I start changing things to confirm that the code does what I expect, and then I can rearrange and refactor with reckless abandon.
This is fun for me! What's not fun for me is when I need to make a change to some critical code and I have to manually check that I'm not breaking anything. The first thing I'll do is figure out the behavior I definitely don't want to break, and write a test for that. Only then will I start making changes.
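In practice that first test is usually a characterization test: it asserts whatever the code does today, so a refactor can't silently change it. A minimal sketch, with formatInvoiceNumber standing in for some hypothetical legacy code:

```ts
// invoices.test.ts
// These assertions describe current behavior, not desired behavior;
// if a refactor changes the output, the suite breaks and tells me right away.
import { formatInvoiceNumber } from './legacy/invoices'; // hypothetical legacy module

describe('formatInvoiceNumber (current behavior)', () => {
  it('zero-pads short ids to six digits with an INV- prefix', () => {
    expect(formatInvoiceNumber(42)).toBe('INV-000042');
  });

  it('passes longer ids through without truncating them', () => {
    expect(formatInvoiceNumber(1234567)).toBe('INV-1234567');
  });
});
```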
I've already written about not being afraid to change your tests, but ideally changing means evolving them to break less often, to be more robust. My platonic ideal of a test is one that you write, never have to change, will always break when it's supposed to, and will never break when it's not. That's a tall order, but it's a good thing to aim for. With experience you can hone your instincts to know what those tests feel like.
I don't think you should write tests just to get higher coverage numbers. I've come around on this pretty recently - like much of programming, it takes experience to teach you these lessons. Coverage is a goalpost for a lot of teams, in large part because it's easy to track, but if you write tests for coverage's sake you're likely writing low-value code. Probably not doing harm, but not adding much value.
I've worked on teams that maintain many tools used by many engineers. If the tools break at that scale, it causes a lot of friction and probably lost revenue. These are places you want tests, but sometimes tools are written in a time or culture where tests aren't prioritized.
The question is: Should you take time now to write tests just to get coverage?
There's an opportunity cost - you could be working on higher-impact projects (which you would write tests for, of course; see above). If the code works, if it rarely changes, and if it's been in the wild long enough that it's battle-tested, I would think hard about that tradeoff.
The YAGNI principle is widely known in tech: You Ain't Gonna Need It. Writing tests around code that has worked for a long time and that you don't plan on changing is low value. Wait until you're going to make a change, then write the tests, and then make the change (with reckless abandon). Otherwise you're adding code that maybe you ain't gonna need.
Write tests when you write new code. Write tests for changing code. Don't write tests just for coverage.
If you do look at coverage:
1. Look primarily at branch coverage. This tells you which branching logic paths are untested.
2. Look past the numbers at the actual uncovered code. Look for big missing chunks and be strategic about it.
Modern testing tools like Jest come with great visualization tools.[1]
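If your team does treat coverage as a goalpost, you can at least point it at branches. Here's a minimal sketch, assuming a recent Jest that accepts a TypeScript config file; the 80% figure is a placeholder, not a recommendation.

```ts
// jest.config.ts
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageReporters: ['text', 'lcov'], // text summary in the terminal, HTML report via lcov
  coverageThreshold: {
    global: {
      branches: 80, // optional: fail the run if untested logic paths push branch coverage below this
    },
  },
};

export default config;
```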
So how do you know? How do you know what tests to write? How do you know when they're needed, what makes a good test or a bad test, when you've written enough? I don't know that there's an easy answer for these questions. What makes a good unit test won't necessarily be the same as what makes a good end-to-end test. Like all programming, it takes some experience to get a feel for what looks right or wrong, what's going to protect you in the long run vs. what's just going to cause extra friction.
One thing you can (and should) do is rely on the people around you with testing experience. Knowing what a good test looks like is pattern recognition, and it takes time to build up, but it's worth it. Your development experience will be better. You'll prove to yourself and anyone looking at your code that it works. You'll prevent future regressions and consider how you're structuring your code.
Tests make me comfortable. They get me into a flow state. They tell me I'm going in the right direction and they let me work quickly without leaving a trail of broken code in my wake. I've seen a number of people have an "aha moment" with tests, where in a flash they feel the benefits. It can take a little while to get there, but when you do you won't want to go back.
[1]: Running `jest --coverage` generates HTML where you can see exactly what parts of your code are missing coverage (in `coverage/lcov-report/index.html`).