We create value in software development by building the right thing, building it well, at the right time.
The “right thing” means identifying the most valuable work items, “building it well” covers the quality of what we produce, and “the right time” means getting it into the hands of our customers when they need it.
We need to wrestle with this equation like a skipper wrestles a boat in turbulent waters. As the waves batter our delivery boat, the lines between knowing what to work on next, balancing quality and knowing when to start become increasingly blurred.
As delivery leads, we need to deal with boat-breaking conditions, balancing timing, priority and the long-term sustainability of the system.
So, how can we navigate these choppy waters?
Let’s model this process working backwards, starting with the result — the delivered value.
To deliver this value, we have a specific build capacity (e.g. long-standing project or product teams). This capacity will fluctuate but, for simplicity, let’s assume for now that it is fixed and stable.
Different types of work place demands on this capacity:
- features (product enhancements) — investments driven by a hypothesis of user needs
- regulatory requirements — needs that are imposed by industry regulatory bodies
- cost-saving needs — operational cost reductions
- technical improvements — technical investments in the platforms and products used to satisfy all of the above
And these items are identified by multiple sources, some closer to the team, some very distant:
- product management
- business outcome leads
- user feedback
- regulatory institutions
- delivery team itself
We can shape this demand as a funnel, which will look something like this:
And here lies the first of the challenges we face — there will always be more work that needs doing than our available capacity.
The need for prioritisation
Given that the demand outstrips the capacity, we need to develop a prioritisation mechanism.
But how do we do it? Prioritisation seems a beautiful activity, a way of creating order in a volatile situation.
The sea of options on what to work on is vast, and finding the optimal way of picking the right item to build is compounded by factors such as:
- items have different values
- the solution space spans from problems well understood, to more complicated but where expertise or analysis helps, to unknown or even unknowable (*)
- some items can have expiration dates (think of seasonal features that customers want in time for Halloween, or Christmas)
From the collection of possible options, we need to find the right items, find those nuggets of value, at the right time.
“The problem with any prioritization decision is [it is] a decision to service one job and delay another.” — Don Reinertsen
We are continually trading cycle time for other things of value, and we are facing tough decisions — leave a feature out and release earlier or wait to build that feature and release later.
Budgeting — slicing the capacity
By now, we have established that we need to tackle the challenging problem of prioritising our work, focusing on priority and timing.
But still, how do we do this? How do we work out the priority and timing? Given that not all work is the same (different value, different solution space, various expiration dates, etc.), we cannot apply a one-size-fits-all approach to prioritisation.
Many aspects are outside our control, such as the value of an item (we can attempt to calculate it, but in reality, we don’t know how much our customers will value a particular feature). Or our current levels (emphasis on current) of understanding of the problem space (of domain and technology) — we only know what we know now; we cannot magic up more knowledge in an instant.
What is in our control is the way we can allocate our capacity.
Let’s look at the capacity bit in more detail.
In reality, it looks more like the sketch below. Plotted against time, the capacity fluctuates. Holidays, attrition, hiring and the rate of interruptions (often caused by poor management) all contribute to these fluctuations.
What we can do with this capacity is slice it into budgets, allocating each slice to a different type of work.
The number of slices and their allotted percentages depend on the context and, of course, can be challenging to agree in themselves, but here is a heuristic that we can apply:
- allocate a percentage to “just do”, no-regrets work that obviously needs doing and that everyone agrees must happen soon
- allocate a percentage to long term investments
- and, allocate a portion for regular development
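As an illustration, this heuristic can be sketched as a simple capacity split. The slice names mirror the list above; the percentages and the capacity figure are assumptions chosen for the example, not recommendations.

```python
# A minimal sketch of slicing a fixed capacity into budgets.
# Percentages and the capacity figure are illustrative assumptions.
TEAM_CAPACITY_DAYS = 100  # capacity per quarter, assumed fixed for simplicity

budget_split = {
    "just do": 0.20,                # no-regrets work everyone agrees must happen soon
    "long-term investments": 0.20,  # sustained technical/strategic investments
    "regular development": 0.60,    # everything else, prioritised separately
}

# The slices must cover the whole capacity, no more and no less.
assert abs(sum(budget_split.values()) - 1.0) < 1e-9

budgets = {name: TEAM_CAPACITY_DAYS * pct for name, pct in budget_split.items()}
for name, days in budgets.items():
    print(f"{name}: {days:.0f} days")
```

Keeping the split this explicit makes the review mechanism mentioned later straightforward: changing an allocation is a one-line edit that everyone can see and debate.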
By applying this approach, we can reduce the problem space and subsequently reduce the analysis time.
It is tempting to create sophisticated models to solve the budget allocations, and we need to be wary of introducing significant errors hidden by the apparent sophistication of such models.
We can start by establishing the budgets at a macro level, agreeing on the types of slices, the capacity allocated to each portion and the review mechanism for this allocation. Last but not least, it is essential to decide on qualitative measures that we can use in the review process — expected signs of success or possible signs of failure that we can foresee from the get-go.
How to prioritise within budgets?
Once we have established the percentages for each budget, the ordering of items within the slices might still be a problem. However, given that we have classified the items at a higher level and split them into categories, we can now apply a different prioritisation solution to each category.
The order of items in the “just do” slice should hopefully be self-evident (otherwise they would not be fit for this slice).
For long-term investments, given their long-term nature, the need to prioritise at a more granular level should be reduced. For instance, if we want to improve our ability to release code faster, we will need to break the problem into smaller chunks and keep at it until we achieve our desired outcome. The order in which we tackle these chunks mightn’t be that important.
For the regular development slice, the gold standard is calculating the cost of delay, which is the opportunity cost of not doing something. Expressed as a rate of money per unit of time (e.g. £/month), it represents the foregone revenue or foregone cost savings.
If we can calculate the actual cost of delay, that is fantastic news: we should use this value as an input into prioritisation. If the duration is available as well (with decent confidence levels), then Cost of Delay divided by Duration (also known as CD3) provides a weighted shortest job first mechanism.
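As a sketch, CD3 ordering takes only a few lines. The items, cost-of-delay rates and durations below are invented for illustration; real inputs would come from your own estimates.

```python
# A minimal sketch of Cost of Delay / Duration (CD3) prioritisation.
# All item names and numbers are made up for illustration.
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    cost_of_delay: float  # foregone value per month of delay (e.g. £/month)
    duration: float       # estimated months to deliver

    @property
    def cd3(self) -> float:
        # Higher CD3 = more value lost per month of delay, relative to effort.
        return self.cost_of_delay / self.duration

items = [
    WorkItem("checkout redesign", cost_of_delay=30_000, duration=3),
    WorkItem("seasonal promo", cost_of_delay=20_000, duration=1),
    WorkItem("reporting module", cost_of_delay=15_000, duration=2),
]

# Scheduling highest CD3 first gives a weighted shortest job first ordering.
for item in sorted(items, key=lambda i: i.cd3, reverse=True):
    print(f"{item.name}: CD3 = {item.cd3:,.0f} per month")
```

Note how the short “seasonal promo” jumps ahead of the “checkout redesign” despite its lower cost of delay: dividing by duration is what makes short, valuable jobs win.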
Weighted shortest job first needs further exploration — I believe a forecasting approach is better suited (given that both the cost of delay and the duration are often ranges), something to explore in a separate blog post on the topic.
If we cannot calculate these values in a reasonable amount of time with reasonable confidence levels, there is still hope: we can split the items into categories that model different profiles of cost of delay.
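One way to sketch such profiles in code. The profile names and the decision rule below are assumptions made for illustration (loosely modelled on commonly used cost-of-delay archetypes), not definitions from this article:

```python
# A sketch of classifying items into cost-of-delay profiles when exact
# values are unavailable. Profile names and the rule are assumed.
from enum import Enum

class DelayProfile(Enum):
    EXPEDITE = "cost of delay is already high and accruing now"
    FIXED_DATE = "cost of delay spikes after a deadline (e.g. a Halloween feature)"
    STANDARD = "cost of delay grows steadily over time"
    LONG_TERM = "cost of delay is low now but compounds later"

def classify(has_deadline: bool, already_losing_value: bool,
             is_enabler: bool = False) -> DelayProfile:
    """A crude, assumed decision rule for picking a profile."""
    if already_losing_value:
        return DelayProfile.EXPEDITE
    if has_deadline:
        return DelayProfile.FIXED_DATE
    if is_enabler:
        return DelayProfile.LONG_TERM
    return DelayProfile.STANDARD
```

For example, a seasonal feature would classify as `FIXED_DATE`, while an ongoing outage-related fix would be `EXPEDITE`; the point is that a handful of coarse buckets is often enough to make the ordering discussable.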
By classifying the items into one of these profiles, the answers to prioritisation should become more self-evident. We can now at least see what these items contribute to and, ultimately, how they advance our overall objectives. Selection, while still complicated, becomes simpler.
Just as a boat is battered by powerful waves, each fighting for the skipper’s attention and a course correction, our delivery machine fights for our attention to decide what to work on.
Knowing what to build, and in what order, is one of the most challenging and important problems to get right in delivery. Getting it wrong is the equivalent of pointing the boat in the wrong direction, or of not building a strong enough boat for the journey ahead.
Sequencing work by taking into account priority and timing is a wholly grey area. To achieve it, we need to decompose the problem into finer granularity — budgeting and cost-of-delay profiles are some of the means to do so.
As always, though, looking one-dimensionally to solve a problem is not enough. Strategy, road mapping techniques, complexity, and outcomes-based development are all influencing this solution space. But that is for another write-up.
Fair winds and following seas.