If you halve the change size, your deployment frequency doubles, and to keep the same change failure rate % as before you can now afford twice as many failed deployments in absolute terms, because the rate is measured per deployment.
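A quick sketch of that arithmetic, with made-up numbers (none of these figures come from a real team):

```python
def change_failure_rate(failed, total):
    """CFR as the percentage of deployments that cause a failure."""
    return 100 * failed / total

# Baseline: 10 deployments per week, 2 of them fail.
baseline = change_failure_rate(2, 10)   # 20.0%

# Halve the change size: deployment frequency doubles to 20 per week.
# Keeping the same 20% CFR now permits 4 failed deployments per week,
# i.e. twice as many failures in absolute terms.
halved = change_failure_rate(4, 20)     # 20.0%

assert baseline == halved
```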
Automated tests as a safety net are most valuable in codebases where making a change carries high risk. Those are conflated, coupled codebases where:
1) the average rate of change per element (method/class) is high, and
2) the risk of introducing a problem with any change is high (too much coupling).
The latter is a consequence of the former, and codebases with long methods and classes inherently exhibit both characteristics.
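A tiny, hypothetical illustration of what "conflated" means here: when unrelated concerns live in one element, changes to either concern land in the same place, so that element's rate of change goes up and every edit risks breaking the other concern.

```python
# Hypothetical conflated class (illustrative only): pricing rules and
# notification wording are unrelated concerns, yet both force edits to
# this one class, raising its rate of change and the risk per change.
class OrderService:
    def total(self, items):
        # Pricing concern: changes whenever pricing rules change.
        return sum(price * qty for price, qty in items)

    def confirmation_message(self, order_id):
        # Notification concern: changes whenever wording changes.
        return f"Order {order_id} confirmed"
```

Splitting these into separate elements would lower the rate of change per element and shrink the blast radius of each edit.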
A: “Teams doing XP produce tremendously fewer bugs and have way less rework compared to traditional teams you mostly see in our industry”
B: “That’s great, but at what cost?”
And exponentially increasing the risk of something going wrong as you add more functionality is pretty extreme programming compared to TDD, if you ask me.
With async code reviews you'll often get way less feedback and less opportunity to build quality in than with continuous code review (pair/mob programming), but at least you'll also choke the flow and delay delivery.
Here’s an interesting phenomenon when it comes to the concept of rework in knowledge work.
You can only minimize rework; you cannot eliminate it. If you try to eliminate it, you'll maximize it: trying to get everything right up front means big batches and delayed feedback, so the rework you eventually discover arrives all at once.