Open dialog


6 Pitfalls of Benefits Measurement

By Ivor Jones & John D'Hooghe, Executive Consultant, Dialog IT

Internationally, work is being done to improve the maturity of Benefits Realisation Management (BRM) in both public and private sectors (Bradley, Jenner et al.). Several problems confront this effort:

  • The discipline of BRM is “relatively new” and this leads to a scarcity of skill and experience
  • Benefits are often defined at the earliest stage of the project, when uncertainty is highest
  • Many benefits are of necessity intangible, therefore measuring both current and future states is highly subjective, and baseline data is often lacking

Current Benefits Realisation methodologies provide good guidance on BRM activities overall, but they contain little guidance to practitioners on potential problems with the design and execution of the measurement and reporting activities necessary to demonstrate Return on Investment.
This article concentrates on one problematic aspect of BRM which project team members are likely to be involved in, namely Benefits Measurement & Reporting. There are six common pitfalls when a project is called upon to report the benefits achieved, and the likelihood of encountering these pitfalls is increased by interactions between the immaturity of BRM as a discipline and the nature of project work and project people.
The time-honoured journalist’s questions – “Who, What, Why, When, Where and How” – provide a useful structure to set out these pitfalls.

WHO Defines the Measures?

First, project work by definition creates something new in the sponsoring organisation. Therefore, if the project team is asked to create new measures to illustrate the benefits of their unique and wonderful project, most project people will simply roll up their sleeves and get on with it.

This is Pitfall No. 1 – Measures defined in an ad-hoc fashion by unqualified people.

At this point, our hypothetical project person should be asking “Why does the organisation not know what benefits will be delivered and how to measure them?”. All too often the tendency is to do as asked and create project-specific measures, but this is not the right response. Creating new measures introduces cost and effort into the Business As Usual phase of the solution. Have these costs been estimated? Has management approved the budget variance? If not, don’t define new measures. Instead, ask to see the organisation’s current list of measures (or a measures database, if one exists) and select the most suitable existing ones.
Also note that the “Who?” question extends to accountability for each measure and benefit. If no-one is responsible for a measure or benefit, or the responsibility does not sit at an appropriate level in the organisation, this should be raised in the project or program issues register.

WHAT Should be Measured?

This question could also be phrased as “How much to measure?”. The point to establish here is: What is the minimum set of measures necessary to demonstrate your project has achieved its benefits? Even if the organisation has good BRM maturity and a comprehensive and up-to-date measures database, it is far from easy to determine how many of these existing measures are needed to demonstrate your benefits.

This is Pitfall No. 2 – Failure to re-evaluate WHAT is being measured.

Both under- and over-measurement are undesirable. The former runs the risk that the project will not plausibly be able to demonstrate Return on Investment. The latter is clearly inefficient, and may damage the credibility of the organisation’s BRM efforts across the board.
This is especially challenging because it involves crystal-ball gazing. Benefits and their associated measures are almost always selected at the earliest stage of the project, when uncertainty is highest. You cannot avoid this.

WHY are we Measuring?

This is perhaps the most important question. Bradley implies that the lowest level of organisational maturity in BRM is characterised by the statement “Benefits are only relevant to getting the Business Case approved”. At the other end of the maturity spectrum are organisations that align their vision to desired outcomes, and these flow down into program objectives and on to individual projects. In these organisations you will know what to measure and, more importantly, why you’re measuring it.
On the other hand, measurement for its own sake creates unnecessary work for someone in the operational side of the business, and worse, once established measures have a tendency to ‘take root’ and can persist indefinitely.

This is Pitfall No. 3 – Measuring for the sake of it.

Remember the old adage “What gets measured gets done” – or if you prefer, measurement drives behaviour. For this reason, careful attention must be paid to measures not only to determine their accuracy, relevance and cost to collect, but more importantly for the possibly undesirable behaviours they will cause in the future.

WHEN should we Measure?

Contained within the “When to measure?” question is the implied “How often?”. The short answers are: “Measure early” and “Measure often”. However, because of the cost of measurement and the temptation to over-measure, the frequency, duration and timing of measurement all require careful consideration.
At this point we also have an interaction with the question of what is being measured. Considering the two extremes of organisational maturity in BRM again: at the low end we would expect to find an absence of pre-defined measures, with new ones created solely for the purpose of the project; consequently, meaningful baseline measures are unlikely to be available.

This is Pitfall No. 4 – Not measuring often enough.

In this situation the tendency is for a one-off measurement such as a survey to be undertaken, and this can be largely a box-ticking exercise. It establishes a baseline of sorts, but of questionable reliability. It is also unlikely that this baseline will be updated during the life of the project, further reducing its usefulness. By contrast, in a mature organisation we could expect to find measures suitable for use as a baseline and for these measures to have a history of several years. This may provide an understanding of any natural or seasonal variation in the measure.
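As a sketch of what a multi-year measurement history buys you, the fragment below (using invented figures for a hypothetical monthly measure) averages each calendar month across three years, exposing the seasonal variation that a one-off snapshot would hide:

```python
# Illustrative sketch with invented data: three years of a hypothetical
# monthly measure (e.g. call-centre volume). A per-month average across
# the years reveals any seasonal pattern in the baseline.
from statistics import mean

history = {
    2009: [100, 98, 110, 105, 120, 125, 130, 128, 115, 108, 102, 140],
    2010: [104, 101, 113, 109, 124, 129, 133, 131, 119, 111, 105, 146],
    2011: [107, 103, 116, 112, 127, 132, 137, 134, 122, 114, 108, 150],
}

# Seasonal profile: the average of each calendar month across all years
seasonal_profile = [mean(year[m] for year in history.values()) for m in range(12)]

print([round(x, 1) for x in seasonal_profile])
```

In this invented data December consistently peaks, so a post-implementation December reading should be judged against the December figure in the profile, not the annual average.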

WHERE should we Measure?

In this context, deciding where to measure means finding the part of the organisation best suited to carry out the measurement and analyse the results. A short-cut to finding this area is to ask “Where does the money come from to fund the measurement activities?”

This is Pitfall No. 5 – Not budgeting or assigning responsibility for measurement activities.

The tendency of measurement to drive organisational behaviour, discussed under Question 3, needs to be carefully considered here.
There may be resistance to accepting responsibility for collecting data, so look for champions and for the opportunity to sell the advantages of Benefits Management (e.g. the ability to demonstrate not just return on investment but also organisational maturity). Also be aware that if the organisation does not yet have a measurement database, you may be laying the ground-work for its first version, so try to make your contribution re-usable rather than project-specific.

HOW to Present the Measures?

First, understand the context of the measures (e.g. scoring models, benefit contributions, or economic models), and take time to understand the data you have collected. Before presenting results, it is a good idea to familiarise yourself with comparative techniques such as Statistical Process Control, and to gain an understanding of how best to use different types of graphical presentations of numbers (e.g. the pros and cons of pie charts, bar graphs, scatter plots).
This should not really need saying, but present your data truthfully. If it is inconclusive, say so. Too often a point-in-time survey is presented as a ‘baseline’ when it is nothing of the sort, and this can harm the credibility of BRM in general.

This is Pitfall No. 6 – Inadequate thought given to the presentation of measures and conclusions.

If you have a minimum of 12 data points you can draw a line and understand its variation. If you have a single point (e.g. a pre-deployment satisfaction survey) you cannot turn it into a line, so don’t call it a baseline; call it what it is – a reference point.
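As an illustrative sketch of that distinction, the snippet below derives natural process limits from twelve monthly measurements using an individuals (XmR) control chart, a common Statistical Process Control technique. The data values are invented, and 2.66 is the standard XmR scaling constant:

```python
# Illustrative sketch (invented data): twelve monthly readings of some
# benefit measure, enough to compute natural process limits.
baseline = [42, 45, 41, 44, 47, 43, 46, 44, 42, 48, 45, 43]

mean = sum(baseline) / len(baseline)

# Average moving range between consecutive points
moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Standard XmR constant: limits sit 2.66 average moving ranges from the mean
upper_limit = mean + 2.66 * avg_mr
lower_limit = mean - 2.66 * avg_mr

print(f"mean={mean:.1f}, limits=({lower_limit:.1f}, {upper_limit:.1f})")
```

A later measurement falling outside these limits signals a genuine change rather than routine variation; a single pre-deployment survey score can tell you none of this.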


Designing and implementing a benefits measurement and reporting system is a complex and demanding task requiring forethought, planning and a deep understanding of both the organisational context and ‘best practice’ in the BRM discipline. Benefit Realisation tools such as Amplify from Connexion Systems can assist in structuring the benefits and measures, and can also manage the gathering of measures at regular intervals.

Dialog Information Technology can help organisations receive payback on the time and expertise invested in this task by implementing a measurement system which:

  • demonstrates alignment to organisational vision and program objectives
  • is balanced, containing a mix of tangible and intangible benefits, immediate and ongoing benefits, and subjective and objective measures
  • is effective, efficient and sustainable (i.e. able to substantiate Return On Investment without using excessive resources).


References

Paul Sanghera, 90 Days to Success as a Project Manager, Course Technology PTR. ISBN-13: 978-1-59863-869-1
Stephen Jenner, A ‘Fool’s Guide to Benefits Management’.
Gerald Bradley, Benefits Realisation Management (2nd Edition), Gower Publishing, 2010. ISBN 978-1-4094-0094-3
Queensland Government CIO, Framework for Benefits Management.
Tony Scuteri, Benefits Management in an Agile World, Connexion Systems.
Edward Tufte, The Visual Display of Quantitative Information, Graphics Press, 1995. ISBN 0-9613921-0-X

Reference this article: Ivor Jones & John D'Hooghe, 6 Pitfalls of Benefits Measurement (2012-04-17), Open Dialog, Dialog Information Technology.
