Standards are a double-edged sword
Subbu Allamaraju, CTO Expedia, https://m.subbu.org/lessons-from-2019-5beb64e63bc7
TL;DR: Standardisation reduces the dimensionality of a problem space and thereby eliminates possibilities for optimisation.
Standards reduce the number of dimensions of a problem space
Consolidation initiatives aim at reducing the number of domains in a problem space in order to make problem handling easier: reduce the number of processes so that onboarding is easier, reduce the number of products so that a smaller skill set suffices, reduce the number of stakeholders so that decisions can be made faster. Standardisation almost always targets cost and effort through the appeal of reducing cognitive load: a reduction in the cardinality of classes of concepts should reduce time spent on comprehension by simplifying the problem.
Consolidation projects sometimes fail to deliver because, in systems-thinking terms, they pull the wrong lever. The value proposition of consolidation is the reduction of complexity by reducing the number of moving parts and hence the number of domains a problem spans. It turns out that there is a correlation between the number of dimensions a problem has and how easy the problem is to solve.
The correlation between problem size and problem dimensions
Let’s say we have a problem space which involves 3 domains/dimensions (e.g. cost, complexity, time) of size “10” (no units), thus cost * complexity * time = 10, and let’s assume for simplicity that cost = complexity = time = 2.15 (because 2.15³ ≈ 10). Now we’re asked to reduce the problem size by 10%, i.e. by 1, so the new equation is cost’ * complexity’ * time’ = 9, which puts the new values at cost’ = complexity’ = time’ ≈ 2.08. So a 10% problem-size reduction can be achieved with a 3.45% improvement in each dimension (for 3 dimensions).
Let’s say our problem had another domain, e.g. management, so 4 dimensions in total, but was of the same size: cost * complexity * time * management = 10, which puts each dimension at 1.78. A 10% reduction means cost’ * complexity’ * time’ * management’ = 9, i.e. each dimension equates to roughly 1.73. So a 10% problem-size reduction can be achieved with a 2.60% improvement in each dimension. This may seem like a small difference (3.45% for 3 domains vs 2.60% for 4 domains) in absolute terms, but the per-domain improvement required for 4 dimensions is about 25% smaller than for 3 dimensions.
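This generalises (a small derivation under the same equal-dimensions assumption used above): if a problem of volume V is spread evenly across n dimensions, shrinking the volume by a fraction p requires each dimension to shrink by

```latex
x = V^{1/n}, \qquad
x' = \big((1-p)\,V\big)^{1/n} = (1-p)^{1/n}\,x
\quad\Longrightarrow\quad
\frac{x - x'}{x} = 1 - (1-p)^{1/n}
```

For p = 0.1 this gives 1 − 0.9^(1/3) ≈ 3.45% at n = 3 and 1 − 0.9^(1/4) ≈ 2.60% at n = 4, matching the numbers above.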

The per-dimension change required to achieve the same improvement drops sharply as the number of dimensions grows:
dimensions | volume | % improvement | % change per dimension |
2 | 100 | 10.00% | 5.13% |
3 | 100 | 10.00% | 3.45% |
4 | 100 | 10.00% | 2.60% |
5 | 100 | 10.00% | 2.09% |
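Here is a minimal Python sketch (my own, purely illustrative) that reproduces the table using the formula derived above:

```python
# Reproduce the table above: for a fixed 10% volume reduction, the
# required change per dimension is 1 - (1 - p) ** (1 / n).

def per_dimension_change(p: float, n: int) -> float:
    """Relative shrink each of n equal dimensions needs so that the
    product of all dimensions (the problem volume) drops by p."""
    return 1 - (1 - p) ** (1 / n)

p = 0.10  # 10% volume improvement
for n in range(2, 6):
    print(f"{n} dimensions: {per_dimension_change(p, n):.2%} per dimension")

# Output:
# 2 dimensions: 5.13% per dimension
# 3 dimensions: 3.45% per dimension
# 4 dimensions: 2.60% per dimension
# 5 dimensions: 2.09% per dimension
```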
The first intuition behind this finding is that each dimension is not only a source of problems, but also a potential for optimisation. Once dimensions are reduced to a minimum (e.g. no skill gap because of a single technology platform), all wiggle room for optimisation (e.g. features missing from that single technology platform will be missing forever) is gone, too.
The second intuition is that it is easier to improve each problem domain a little than to improve one problem domain a lot. E.g. it is probably cheaper to improve a system’s performance by tackling three domains at once (adding a caching layer, optimising a query and redefining a use case) than by optimising a single domain (buying a lot more hardware).
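To make this concrete, here is a hypothetical toy model (the quadratic cost function is my assumption, not a measurement): if the effort needed to improve a dimension grows super-linearly with the size of the improvement, spreading a 10% reduction across three dimensions costs much less than extracting it from one:

```python
# Hypothetical toy model: assume the effort to shrink a dimension by a
# fraction c costs c ** 2, i.e. large improvements in a single domain
# are disproportionately expensive (an assumption, for illustration).

def effort(change: float) -> float:
    return change ** 2  # assumed convex cost curve

p = 0.10  # target: reduce the problem volume by 10%
single = effort(p)                            # one dimension does all the work
spread = 3 * effort(1 - (1 - p) ** (1 / 3))  # three dimensions share it

print(f"single-dimension effort: {single:.4f}")  # 0.0100
print(f"three-dimension effort:  {spread:.4f}")  # 0.0036
```

Under any convex cost curve the same conclusion holds; the quadratic is only the simplest choice.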
The fine print
The view on problem dimensions presented here comes with a few basic assumptions.
Problem dimensions are orthogonal; e.g. an increase in management effort doesn’t (on its own) incur a cost change in other dimensions such as manufacturing. Any changes in other dimensions would be time-delayed, causally related side effects (e.g. manufacturing becomes cheaper because of better management, or more expensive because of worse management). In reality, the effect of a change in one dimension on other dimensions can be immediate, e.g. a change in the management dimension incurs a direct change in manufacturing costs because of more reporting; we’re looking at problems where that isn’t the case.
We’re comparing problems of similar size and different dimensionality; my intuition is that, in reality, problems with many dimensions start out larger than problems with few dimensions. An example supporting this intuition: you wouldn’t invest in a heavily automated CI/CD pipeline for a small, personal side project. However, as we’ve seen, large problem spaces can improve dramatically with only small improvements in each dimension.
Observations from the trenches
Security and operations are frequent drivers for standardisation and consolidation because their responsibilities intersect with most domains. Uniform processes, API governance and consolidated infrastructure are easier to audit and run than per-case processes, proprietary interfaces and a zoo of systems and products. A clear driver for security and operations is cost reduction, as they are concerned with what is already there. Dev (and business), on the other hand, focus on delivering new value, so they want maximum freedom and effectiveness: a product roadmap with as few dependencies on others as possible, low friction, and the most appropriate technology or product for each case. Those goals don’t necessarily contradict standardisation, but they often do: standardisation happened in the past, while new products and services are developed in the future, and the old assumptions that once made standardisation a good choice might not hold any more.
The dilemma of what to standardise is essentially about friction: high friction increases the cost of change and supports stability, while low friction reduces the cost of change and decreases stability; both cost of change and stability have an ideal value for each organisation at a particular point in time. IT departments sometimes slow down the rate of change on purpose, by complicating change-management processes, in order to be able to keep up with business demands (true story!).
Living with many dimensions
Going back to the initial observation, standardisation aims at reducing problem size through dimension reduction, because standardisation builds on the premise that fewer dimensions are easier to comprehend than many. My counter-claim is that a problem with few dimensions requires mastery of all of them, while a problem with many dimensions requires a basic understanding of most and perhaps proficiency in a few. E.g. imagine that you are the architect of a system which you have to speed up, and you live in two parallel universes: one where you control only the software architecture, and one where you control the software architecture and the infrastructure. In the first universe you’d have to improve the software architecture in order to speed the system up; in the second you can pick some low-hanging fruit, improving the software architecture a bit (e.g. by implementing caching) and spending some money on the infrastructure (e.g. more RAM). Thus, if you can improve only the software architecture you’d have to come up with a really good architecture, while in the second case you can come up with a moderately improved software architecture and a moderately more expensive infrastructure.
The reality is that, through the various synergies of life, people already have a basic grasp of many domains, and individuals complement each other’s skills in teams. It is often easier to assemble a team that covers many dimensions to an OK extent than to assemble a team that excels in a few.
Summary
Problems become hard to manage when they touch many different domains, so standardisation efforts aim at eliminating problem domains. However, each domain represents not only a source of trouble but also an opportunity for optimisation. Problems with a low domain count require stronger optimisations in the few domains they consist of, whereas problems with a high domain count require only small optimisations in each domain.