
I have worked on codebases where full coverage was obtained using service-level tests in a proper pipeline. If you couldn't add a service-level test that covered your PR, you were referred to YAGNI and told to stop doing work that wasn't part of your task. I was fine with that; it worked well, and the tests were both easier and faster to write. If the services had been too large, maybe it would have fallen apart?
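A service-level test in this style exercises the service over its real interface rather than mocking internals. A minimal sketch, with the service stubbed by a stdlib HTTP server so it is self-contained (the endpoint, handler, and payload are all illustrative, not from the thread):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for the real service under test; in an actual
# pipeline you would start the service build itself.
class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def test_health_endpoint():
    # Hit the running service over HTTP, no mocks or fixtures.
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{server.server_port}/health"
        with urllib.request.urlopen(url) as resp:
            assert resp.status == 200
            return json.loads(resp.read())
    finally:
        server.shutdown()
```

Because the test only talks to the public interface, it keeps covering the same code paths even when the internals are refactored.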

I have also worked on codebases where the only tests were at the level of collections of services. Those took too long to run to be useful. I want to push and see if I broke anything, not wait hours. If a full system-level test could complete in a few minutes, I would be fine with that too. The key is checking your coverage so you know you are actually testing things, and writing tests that matter. Coverage can be an antimetric.
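The "check your coverage" step can be a small gate in the pipeline that fails the build below a threshold. A minimal sketch, assuming a hypothetical Cobertura-style XML coverage report (the report path, attribute, and 90% threshold are illustrative):

```python
import xml.etree.ElementTree as ET

def check_coverage(report_path, threshold=0.90):
    """Return True if overall line coverage meets the threshold.

    Assumes a Cobertura-style report, whose root element carries an
    overall line-rate attribute between 0 and 1.
    """
    root = ET.parse(report_path).getroot()
    rate = float(root.get("line-rate"))
    return rate >= threshold
```

In CI this would run after the test stage and exit nonzero when it returns False, so a PR without a covering service-level test cannot merge.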



> I have worked on codebases where full coverage was obtained using service-level tests in a proper pipeline.

Sounds ideal to me. Add testing where it is cheap enough to justify, and maybe a little more than you think you really need, because tests pay off so often.

If your mocks and fixtures get too big, you might not be meaningfully testing the code in question.

Coverage and test quality/quantity need to scale with the associated risk. Perhaps "Flight planner for Air Force Weather" gets more testing than "Video game User Interface" and that's ok. Even in gaming, the engine team should have more tests than the game teams.



