A property of a good design is the reduction of the median rate of change across code elements (methods/classes).
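One rough way to approximate that metric is to mine version-control history. The sketch below is an assumption on my part rather than a prescribed tool: it counts how often each file changed over the last year and takes the median, with file-level counts standing in for the method/class-level rate.

```python
# Rough sketch: approximate the per-element rate of change from git history.
# Assumption: file-level change counts stand in for method/class-level ones.
# Run inside a git repository.
import subprocess
from collections import Counter
from statistics import median

log = subprocess.run(
    ["git", "log", "--since=1 year ago", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

# Each non-empty line of the log output is a file path touched by some commit.
changes_per_file = Counter(line for line in log.splitlines() if line.strip())

print(f"files changed in the last year: {len(changes_per_file)}")
print(f"median changes per file:        {median(changes_per_file.values())}")
```

Tracking how this median moves after a redesign is one way to check whether the change actually reduced churn across the codebase.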
»is non-linear in nature, with an exponential component to it.
»then the refactoring step you plan to do is probably already too big.
»It’s close to impossible to measure good, testable design, but over the years two interesting metrics have settled in my mind as a fairly good indication of clear, explicit code that speaks the domain:
»that “pressure from product and business made us produce this mess in the code,” I need to see how tight your feedback loops are.
»Some of the things I think about when asked how to influence change in teams/orgs as a Principal/Staff engineer:
than working on an isolated, long-lived feature branch, with delayed review, delayed integration feedback, and a lack of tests.
»If you halve the change size, your deployment frequency doubles, and to keep the same change failure rate % as before you would have to have twice as many failures for the same amount of change delivered.
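A tiny worked example of that arithmetic, with purely illustrative numbers: deliver the same 100 units of change in batches of 10 and then 5, holding the change failure rate at 10%.

```python
# Illustrative numbers only: same total amount of change, half the batch size,
# change failure rate (CFR) held constant.
work = 100        # units of change to deliver
cfr = 0.10        # change failure rate, held at 10%

for batch_size in (10, 5):                 # halving the change size
    deployments = work / batch_size
    failures = cfr * deployments           # failing deployments at the same CFR
    print(f"batch size {batch_size:2d}: {deployments:.0f} deployments, "
          f"{failures:.0f} failing deployments for the same {work} units of change")
```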
»Automated tests as a safety net are most valuable in codebases where making a change is risky. Those are conflated, coupled codebases where: 1) the average rate of change per element (method/class) is high and 2) the risk of introducing problems with a change is high (too much coupling). The latter is a consequence of the former, and codebases with long methods and classes inherently exhibit these characteristics.
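As a rough way of spotting that kind of coupling (my sketch, not the author's tooling), one can mine co-change from git history: pairs of files that keep appearing in the same commits are a decent proxy for the conflation described above.

```python
# Minimal sketch: measure co-change coupling -- pairs of files that keep
# changing together. Assumption: file-level pairs stand in for method/class-
# level coupling. Run inside a git repository.
import subprocess
from collections import Counter
from itertools import combinations

def files_per_commit():
    """Yield the set of files touched by each commit on the current branch."""
    log = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:__COMMIT__"],
        capture_output=True, text=True, check=True,
    ).stdout
    for chunk in log.split("__COMMIT__"):
        files = {line for line in chunk.splitlines() if line.strip()}
        if files:
            yield files

co_change = Counter()
for files in files_per_commit():
    # Every pair of files changed in the same commit is one co-change event.
    co_change.update(combinations(sorted(files), 2))

print("File pairs that keep changing together (likely coupling):")
for (a, b), count in co_change.most_common(10):
    print(f"  {count:4d}  {a}  <->  {b}")
```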
»by intentionally not adding integration tests.