Theory of Constraints
In the book…
In any system, there is a single constraint that acts as a bottleneck to “throughput”. If some input must pass through several workers to produce some output, then one of those workers will be the most constrained and least available to do work. Throughput Accounting (TA) focusses on the constraint as the way to optimise for the most throughput: any optimisation that excludes the constraint is a waste of resources, as a system can only do as much work as its constraint can produce. Constraints move, and this can be hard to see. You should choose your constraint to be the most expensive and crucial part of your workflow, so that it works at 100% (unlike non-constraints, where you want to keep slack so that they don’t pile WIP onto the constraint, increasing cost, overheads, lead time, etc.) and so that optimisations at the constraint are unlikely to shift the constraint elsewhere, forcing you to rework your entire workflow!
The canonical example is a group of hikers that can only move as fast as its slowest walker. In manufacturing: given workers A, B and C who together produce some product, if B is the constraint, then optimising A’s or C’s workflow is not only pointless but a detriment to the business, due to the investment required and the increase in WIP (Work In Process) and Operational Expenses. The only way to increase throughput is to optimise at B.
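The A/B/C example above can be sketched in a few lines. This is a minimal illustration (not from the book), with made-up hourly rates, showing that the throughput of a serial line is simply the rate of its slowest stage:

```python
# Minimal sketch: throughput of a serial A -> B -> C pipeline is capped
# by its slowest stage, regardless of how fast the other stages are.

def pipeline_throughput(rates):
    """Units per hour a serial pipeline can sustain: its minimum stage rate."""
    return min(rates)

# Hypothetical rates (units/hour): A and C are fast, B is the constraint.
rates = {"A": 12, "B": 5, "C": 10}
baseline = pipeline_throughput(rates.values())  # 5/hr, limited by B

# Optimising a non-constraint (doubling A's rate) changes nothing:
# it only piles WIP up in front of B.
after_a = pipeline_throughput({**rates, "A": 24}.values())  # still 5/hr

# Optimising the constraint (doubling B's rate) lifts the whole system.
after_b = pipeline_throughput({**rates, "B": 10}.values())  # 10/hr
```

Doubling A buys nothing; doubling B doubles the system’s output, which is the whole point of focussing investment at the constraint.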
TA makes the point that sometimes the constraint is a policy itself, although it only suggests “cost accounting” (the widely used, more traditional approach) and other non-TA methodologies as examples of these “policy constraints”. For example, choosing one product over another based on false assumptions moves us away from our objective, so it is termed a policy constraint: it constrains us from achieving our objective. However, you do not optimise around this constraint as you would a production constraint; you should make a paradigm shift to TA. This has nice analogues with approaches to software engineering. Recently there has been a rise in emphasising the 12th Agile principle, shaping the business around the technology constraint and trying to optimise around the IT that prevails throughout the business, as discussed in The Phoenix Project.
A paradigm shift, such as one from cost accounting to TA, comes from someone new to the field with a fresh perspective because they ask all sorts of “dumb” questions that challenge the assumptions on which the paradigm is founded.
Why do paradigm shifts occur? A paradigm is adopted for its effectiveness. It solves problems, which then changes the reality of the situation. Circularly, it then faces new problems, from which a new paradigm may evolve. People who are successful under a paradigm often defend its merits, merits that they know intimately. But do they continuously challenge its assumptions? Furthermore, widespread adoption of a paradigm often makes it anodyne and poorly defined. We see this with movements like Agile and DevOps: organisations cannot merely adopt them, they must undergo a paradigm shift, which may incur restructuring and upfront penalties before it pays off, in the fashion of the adage that “things get worse before they get better”.
The circularity of change and improvement in an evolving system is what led us to “continuous improvement”. By continuously changing and evolving the way we work, we solve problems and encounter new ones, continuously improving the way we work and changing it. Intentionally causing havoc in our work is one way to continuously test our systems: we find weak spots, make them more resilient and train ourselves in adapting, in other words, “continuously improving”.
Furthermore, by being open to change we are more ready to spot our next “Black Swan” situation, where we might have to sharply change our approach and understanding, in other words, make a paradigm shift.
Not in the book…
In software development we are almost always doing the wrong optimisation! First, work that gets prioritised isn’t validated as important, or rather we aren’t clear on what the business objective is. Second, even when we are clear on the goal, whether a piece of work is a local optimisation or contributes to the global objective is often obscure. Third, even when we know both that it’s a global optimisation and what the business objective is, constantly validating this with our customer through continuous delivery is rarer still. Then we start the work. Often we optimise code to be fast, safe or effective where it isn’t necessary, or spend time refactoring and reducing technical debt pre-emptively; this is a difficult balance to strike. While doing our work, we have WIP limits to stop work piling up and obscuring where the blockages are. But do we identify constraints?
Sometimes we get “blocked”, waiting on work from someone else. This could be unpredictable, or it could be that we didn’t plan enough upfront. This is a constraint on our throughput. We can throw more people at the problem, but with diminishing returns.
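The diminishing returns from adding people can be made concrete with a rough Amdahl’s-law-style calculation. This is an illustration with assumed numbers, not something from the book: suppose some fraction of the work is serial (e.g. waiting on another team, coordination, review) and cannot be parallelised no matter how many people join.

```python
# Illustrative sketch: speedup from n people when a fixed fraction of the
# work is serial (cannot be split), in the spirit of Amdahl's law.
# The 0.4 serial fraction is an assumption for the example.

def speedup(n_people, serial_fraction=0.4):
    """How many times faster n people finish compared with one person."""
    parallel_fraction = 1 - serial_fraction
    return 1 / (serial_fraction + parallel_fraction / n_people)

one = speedup(1)      # 1.0: the baseline
four = speedup(4)     # ~1.82, nowhere near 4x
hundred = speedup(100)  # ~2.46, approaching the hard cap of 1/0.4 = 2.5
```

However large the team gets, the speedup can never exceed `1 / serial_fraction`; if the blockage itself (the serial wait) is the constraint, adding developers optimises a non-constraint.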
Sometimes the constraint is a lack of developers, BAs, POs, etc. At this point, we must prioritise the most important work so that we deliver the most value under our constraints.
When we have more resources than we need, i.e. we are not the constraint in the system, we often do unprioritised work, effectively increasing the WIP in the system. This is a good opportunity to tackle technical debt (which is like reducing variable cost, as it means we can deliver more easily in the future) or to reallocate resources to the constraint, although in software this often incurs a “ramp-up” cost.