No matter how much I wanted to avoid the topic of delivery timelines, it always came back: 'how much can we build, and when will it be done?'
These questions become significantly harder to answer when multiple teams are working together on the same problem, for instance when they all build a product against a single roadmap. Estimation is inherently difficult, and as soon as you go beyond a handful of deliverables, knowing what can be delivered gets tricky.
Faced with this problem on one of the projects I was working on, my initial instinct was to visualise it on a wall made up of two areas: one containing committed items, which had a high degree of delivery confidence, and one containing options, items we were targeting but could not yet back with high-confidence dates. Items in the committed area carried a delivery date; the others only a target date.
Although we liked having the two areas side by side, we quickly iterated on the idea by combining them into one (on the assumption that a single area is easier to process than two), showing each deliverable on a different-coloured post-it: green, amber or red. The colours represented confidence levels, green being the highest and red the lowest.
Once we agreed on this, we had one more question to answer: "what do we mean by a delivery with a high degree of confidence?" It was a fair question, and an important one if multiple teams were to share a common language. What follows is my attempt to answer it.
1. High degree of confidence — Almost certain (“Green”)
In probabilistic terms, we expected an 85–90% likelihood of delivering the work. We would describe such a plan as "almost certain" or "highly likely".
In practical terms this meant:
- we had a start date for when the work would commence. This meant that the teams were in place and their members had all the means necessary to complete their work — skills, organizational structure, tools, environments, etc.
- we were in a position to estimate the duration of the work. To achieve this, we knew how many work items we needed to complete (scope). A range was acceptable, but for this confidence level narrower ranges were more desirable. We had a view on how fast we could deliver (throughput); again, a range was acceptable. In determining the duration, we had also factored in the time it took for our dependencies to be met. Ideally, we could forecast using both data and expert opinion, blending the two into a view on duration. For this degree of confidence, we avoided a T-shirt sizing approach (I consider that technique good only for relative comparison of items).
- we had shared our assumptions with the wider programme and with the SMEs who acted as a sounding board for us. They were happy for us to proceed with these assumptions.
- risks had been identified and linked to either scope increases or reductions in throughput.
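The "estimate the duration" step above can be sketched as a small Monte Carlo simulation in the style Magennis popularised: resample historical weekly throughput against an uncertain scope, then read the duration off a high percentile rather than the average. The scope range, throughput history and trial count below are illustrative assumptions, not figures from the project.

```python
import random

def forecast_weeks(scope_range, throughput_samples, trials=10_000, seed=42):
    """Monte Carlo forecast of how many weeks a backlog takes to finish.

    scope_range        -- (low, high) count of remaining work items
    throughput_samples -- historical items completed per week
    Returns a sorted list of simulated durations, in weeks.
    """
    rng = random.Random(seed)
    durations = []
    for _ in range(trials):
        remaining = rng.randint(*scope_range)          # uncertain scope
        weeks = 0
        while remaining > 0:
            remaining -= rng.choice(throughput_samples)  # resample history
            weeks += 1
        durations.append(weeks)
    return sorted(durations)

def percentile(sorted_values, p):
    """Duration below which p% of simulated outcomes fall."""
    idx = min(len(sorted_values) - 1, int(len(sorted_values) * p / 100))
    return sorted_values[idx]

if __name__ == "__main__":
    history = [3, 5, 4, 6, 2, 5, 4]        # items finished per week (made up)
    runs = forecast_weeks((30, 40), history)
    # An "almost certain" plan quotes the 85th-90th percentile, not the mean.
    print("50% confidence:", percentile(runs, 50), "weeks")
    print("85% confidence:", percentile(runs, 85), "weeks")
```

The key design point is that the quoted date comes from the tail of the distribution: the wider the scope or throughput ranges, the further the 85th percentile drifts from the median, which is exactly what separates a green plan from an amber one.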
2. Medium level confidence — Somewhat certain (“Amber”)
In probabilistic terms, such a plan had a 50–80% likelihood of delivery. We would use words such as "probable", "likely" or "we believe" to express how we felt about it.
In practical terms, this meant we had the same elements in place as for a highly predictable plan (start date, scope, throughput, and so on), but one or more aspects differed by comparison:
- we would choose lower confidence levels from our forecast, a 50–80% likelihood range. Our experts would express their view that the deliverable was “probable”.
- our scope and/or throughput ranges were wider
- while our own work was fine, the chances of dependent work being completed were lower, so our work was at risk of being delayed
- some of our assumptions were too weak; we would have liked to proceed with them, but the rest of the programme or our SMEs were not comfortable enough
- we were carrying some high risks that didn’t have a clear mitigation plan
3. Low level confidence — Improbable (“Red”)
Such a plan had a 0–45% likelihood of success. We would use words such as "probably not", "unlikely", "improbable", "highly unlikely" or "almost no chance" to express how we felt about it.
In practical terms, this meant we could not put together a good forecasting model; one or more key ingredients were missing:
- we had no reliable view of scope and/or throughput; perhaps the best we could do was T-shirt sizing
- our assumptions were too weak and highly challengeable
- we could not identify or confirm our dependencies; the likelihood of them being met was low.
- risks implied too wide a range of scope increases or throughput reductions
Now for some small but important fine print.
Predictability of the system
The quality of our forecasts always depends on the quality of the information behind them. If we are forecasting using data, the predictability of the system is extremely important, so the first focus should be on building a predictable system.
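One rough way to gauge whether a system is predictable enough to forecast from its own data is to look at how much throughput varies week to week, for example via the coefficient of variation. A minimal sketch; the sample numbers are made up, and any threshold for "stable enough" is a judgement call rather than a standard:

```python
import statistics

def throughput_cv(weekly_throughput):
    """Coefficient of variation: standard deviation relative to the mean.
    Lower values mean steadier, more forecastable delivery."""
    mean = statistics.mean(weekly_throughput)
    return statistics.stdev(weekly_throughput) / mean

steady = [4, 5, 4, 5, 4, 5]     # delivers at a consistent pace
erratic = [0, 12, 1, 9, 0, 8]   # similar average, wildly uneven

print(f"steady  CV: {throughput_cv(steady):.2f}")
print(f"erratic CV: {throughput_cv(erratic):.2f}")
```

Two teams with the same average throughput can produce very different forecasts: the erratic one forces wide ranges, which pushes its plans from green towards amber or red.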
Whenever we receive new information, we should reforecast. This means that confidence levels can go up or down over time. Examples of information that would prompt a re-forecast include:
- broken assumptions
- missed dependencies
- change of work (scope)
- changes in throughputs
- changes in team structure
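Putting the bands and the reforecasting rule together: after any of the triggers above, we would recompute the likelihood of hitting the date and re-band the item. A minimal sketch, assuming simulated duration outcomes from some forecasting model; the duration lists are made up, and the handling of the 80–85% gap between amber and green is my own reading of the bands:

```python
def confidence_band(likelihood):
    """Map a delivery likelihood to the wall's post-it colour.
    Assumption: >= 85% is green, >= 50% is amber, anything below is red
    (the original bands leave 80-85% to judgement)."""
    if likelihood >= 0.85:
        return "green"
    if likelihood >= 0.50:
        return "amber"
    return "red"

def likelihood_by(deadline_weeks, simulated_durations):
    """Share of simulated outcomes finishing on or before the deadline."""
    hits = sum(1 for w in simulated_durations if w <= deadline_weeks)
    return hits / len(simulated_durations)

# Illustrative simulated durations before and after a scope increase:
before = [7, 8, 8, 9, 9, 9, 10, 10, 11, 12]
after = [9, 10, 10, 11, 11, 12, 12, 13, 14, 15]

for label, runs in (("before", before), ("after", after)):
    p = likelihood_by(11, runs)
    print(f"{label}: {p:.0%} -> {confidence_band(p)}")
```

In this made-up case the scope increase alone moves the same deadline from a green post-it to an amber one, which is the point of re-forecasting: the colour on the wall is a function of the latest information, not a one-off label.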
In large part this write-up is based on what I learned from Dan Vacanti and Troy Magennis, whose work on forecasting and system predictability is brilliant and well worth studying.
- Estimation — how can we estimate with confidence in software delivery? A paper introducing aspects that should be considered when putting together an estimate in software…
- Focused Objective forecast spreadsheet — https://github.com/FocusedObjective/FocusedObjective.Resources/blob/master/Spreadsheets/Throughput%20Forecaster%20(2%20years).xlsx