Thoughts on testing
Automated testing is an integral part of continuous delivery, and testing is a crucial tool that software engineers have in their toolboxes. Tests serve two main purposes: they help while writing code, and they prevent regressions. When we write them, they define the expected result; once written, automated testing is the only reliable way we have to make sure that what is shipped is coherent. As Uncle Bob (Robert C. Martin) says, tests are to programmers what double-entry bookkeeping is to accountants: would you hire an accountant who cannot, at all times, provably reconcile what comes in and what goes out?
There’s no consensus around the use of TDD, but it is my go-to methodology, usually in a bottom-up approach. This is more of a habit than a strong conviction; I also like starting high level with mocks (stubs). My view is that TDD is a simple and powerful way of integrating tests into a programmer’s workflow, but it can feel cumbersome and hinder creativity, so I sometimes allow myself to deviate from its strict rules. Sometimes I write tests after writing my code: when I know where I am heading, and the path to get there is simple, I enjoy freely finishing a coherent unit of code, and only then completing its tests while it is still fresh in my mind. This keeps me focused on the bigger picture, and I have found that I can go faster this way. I believe it does not heavily contradict TDD principles, as long as it happens over a short period and nothing gets committed in the interim. But when I implement something that requires a lot of attention or is error-prone, defining the result before writing the code is always a powerful ally. Experience shows, however, that it is often impossible to write upfront tests that cover every pitfall and possible input. This is why I tend to go back and forth between the code and its tests, covering the edge cases I discover along the way.
I also think it is important to keep security in mind when writing tests: untrusted inputs should be validated, and tests should cover error paths as well as edge cases.
Testing does not stop at unit testing, though, and thankfully modern languages help when it comes to race condition detection or fuzzing (Go, for instance, ships both directly in its toolchain, via `go test -race` and `go test -fuzz`). Testing can also mean benchmarking, whether to validate scale or to compare different approaches, which again modern languages make very easy; along with profiling, it is a very good tool for optimizing critical parts.