Performance Measurement and Monitoring — Developing Performance Measures and Tracking Progress
This edition of Audit Insights summarises key messages from a series of ANAO performance audits that assessed performance evaluation frameworks and success measures. It discusses the importance of both in keeping entities accountable and making their performance transparent.
Introduction
The purpose of the Commonwealth Performance Framework is to enhance the transparency and accountability of the public sector.
Performance measures, at the project, program and entity level, exist to ensure entities are delivering what the government has tasked them with delivering – for example, was the service delivered within a certain timeframe, did it reach an agreed number of people or businesses, or did it have a particular economic or social impact? If entities reliably measure and report on the achievement of outcomes, the Parliament knows whether its investment of public funds has had the desired impact.

Performance information, and how it is used to demonstrate whether public funds are making a difference and delivering on government objectives, is the focus of the Public Governance, Performance and Accountability Act 2013 (PGPA Act).
The government agreed in principle to recommendation 2b of the May 2018 review of the Australian Public Service (APS): to promote continuous improvement through a range of measures, including better performance reporting. Better performance reporting was also recommended in the 2018 review of the PGPA Act and Rule. This reflects that entities will not improve outcomes for the Parliament or the community unless they are accountable for the services they deliver.
Recommendation 4 of the review of the PGPA Act and Rule, with which the government agreed, states: 'the Secretaries Board should take initiatives to improve the quality of performance reporting, including through more effective and informed use of evaluation, focusing on strategies to improve the way Commonwealth entities measure the impact of government programs'.

Audit committees have a role to play in providing assurance to the accountable authority that performance reporting is reliable, relevant and complete, taking into account the objectives and purpose of the entity and the programs it is delivering. As outlined in paragraph 1.2.2 of Department of Finance guidance for audit committees, the PGPA Rule requires that the functions of an audit committee include reviewing the appropriateness of the accountable authority's performance reporting for the entity (subsection 17(2)(b)). To fulfil this function, an audit committee must review the entity's performance information, systems and framework, and the completeness and appropriateness of performance reporting.

The Auditor-General presented a series of audits in 2018–19 and 2019–20 on the effectiveness of program implementation. These audits assessed whether Commonwealth entities had put in place performance reporting arrangements that provide transparency into whether outcomes were achieved.

This Insights publication covers key learnings about performance information, monitoring and evaluation in five areas:
- Plan for the evaluation of performance;
- Establish governance arrangements;
- Develop appropriate performance measures;
- Monitor and track entity performance; and
- Implement program changes.
Plan for the evaluation of performance
While an evaluation should be planned during the design phase of a new or amended program or policy, a clear strategy (what you want the evaluation to inform and how it will do this) for reviewing and evaluating existing policies and programs is also needed. This is important because:
- a very high proportion of government spending is ongoing – regular evaluation keeps the program or policy on track and achieving the intended outcomes in a changing environment;
- the nature and outcomes of the program or policy may have evolved over time into something different;
- many government policies and programs, such as in health, education and social services, may have effects that only manifest over extended periods. These effects often extend well beyond the usual time scales of evaluations, and may occur well after the program has been terminated, so the evaluation framework should allow for this; and
- some short-term or pilot programs are used to inform future design and policy processes, and evaluation helps identify lessons learnt and areas needing improvement.
The Australia and New Zealand School of Government (ANZSOG) suggests that an integral part of effective and efficient public sector management is a 'rigorous, evidence-informed approach' to the design and implementation of policies and programs.
ANZSOG proposes that 'the effectiveness of programs and policies, including their implementation, would be tested against clearly articulated program objectives, and wider consequences, within a systematic evaluation framework'. Understanding the objective of the program is essential to defining the purposes of performance evaluation, how it will be used and by whom. It is also essential to establishing the best measures of success.

Evaluation is a critical element in establishing accountability for program performance and ensuring ongoing achievement and improvement. This is best enabled via an evaluation framework.
Audit examples
The department has established an implementation process for the Indigenous Advancement Strategy (IAS) evaluation framework, which could be improved through more regular reviews of its project activity schedule. Implementation has included a range of activities designed to improve evaluation quality and build evaluation capability (Report reference: p. 27). Several implementation planning activities were completed, including developing a program logic, a project plan and a project activity schedule. The ANAO reviewed the project activity schedule and found that all activities the department had committed to in its IAS evaluation framework document could be mapped to an associated activity in the schedule. (Report reference: p. 28)
The Australian Digital Health Agency developed a benefits realisation plan in 2017 which defined success for the opt-out model of the My Health Record system. This plan's purpose was to set out a strategy for managing benefits of the My Health Record system, and to make sure that benefits realisation was 'planned and budgeted, underpinned by a whole of lifecycle process … appropriately resourced [and] backed by focused governance structures'. (Report reference: paragraph 4.17)
Establish governance arrangements
The PGPA Act sets out the key responsibilities of accountable authorities. Sections 15 and 16 (Part 2-2, Division 2) outline the duty of accountable authorities to govern the Commonwealth entity in a way that promotes the proper use and management of public resources, and to establish and maintain systems relating to risk management and internal control.
Effective governance arrangements drive accountability for performance by allowing appropriate oversight of program or policy delivery, including how risks are being identified, reported and managed. The Department of the Prime Minister and Cabinet's (PM&C) Cabinet Implementation Unit toolkit outlines the importance of planning for evaluation over the life of the initiative, and advises that baseline information should be identified as part of the planning process. This allows performance to be measured over time, against consistent performance measures, so that achievement is comparable year on year.

To ensure the effectiveness of a new policy or program, an implementation plan should be developed before commencement. The plan should clearly articulate how new processes, programs and services will be delivered on time, on budget and to expectations. Key areas to consider during implementation planning include project scheduling and management, governance, stakeholder engagement, risk management, monitoring and evaluation (reflected in an evaluation plan), and resource management.
PM&C requires an implementation plan for new policy and program proposals that have significant risks or challenges. PM&C recommends that implementation plans include key milestones, details of senior responsible officers, and details of key risks and mitigation strategies.

PM&C guidance states that 'evaluation plans should determine the purpose, the timing, the mechanism to be used, and how results will be shared and applied'.
An evaluation plan that is implemented ensures consistency in evaluations and maximises the benefits of each evaluation. Good practice indicates that activities outlined in the evaluation plan should closely align with performance measures. The plan should be kept up to date as new initiatives are incorporated, the program is expanded, or experience informs adjustments to the program.
When delivering large-scale and long-term initiatives it is important to measure success at key milestones, or regular review points. This can be done by developing interim performance expectations backed by practical implementation plans. It may also be necessary to invest in developing new and improved data sources or more frequent data collections. Identifying what works and why it works helps drive towards better outcomes.
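To make this concrete, the sketch below shows one way interim performance expectations might be recorded and checked at each review point. It is a minimal illustration only, not drawn from ANAO or Finance guidance, and all names and figures in it are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Milestone:
    """An interim performance expectation for a long-term initiative."""
    review_point: str        # e.g. "Year 1 review"
    measure: str             # what is being measured
    baseline: float          # value when the initiative commenced
    target: float            # expected value at this review point
    actual: Optional[float] = None  # populated once data is collected

    def on_track(self) -> bool:
        """On track means the collected result meets the interim target."""
        if self.actual is None:
            raise ValueError(f"no data collected yet for {self.review_point}")
        return self.actual >= self.target

# Hypothetical figures: clients assisted by a service, reviewed annually
milestones = [
    Milestone("Year 1 review", "clients assisted", baseline=1000, target=1200, actual=1150),
    Milestone("Year 2 review", "clients assisted", baseline=1000, target=1500, actual=1600),
]

for m in milestones:
    status = "on track" if m.on_track() else "behind target"
    print(f"{m.review_point}: {m.actual} against target {m.target} ({status})")
```

Recording the baseline alongside each target reflects the point made above: performance can only be compared year on year if it is measured against consistent reference points established at commencement.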
Audit examples
The department has designed a performance management framework for the Cape Class patrol boats comprising critical success factors and key performance indicators that are broadly aligned with the government's availability and performance requirements. The framework also includes an abatement regime under which a significant portion of the in-service support payments can be withheld, or a debt incurred, if the contractor fails to achieve the critical success factors or key performance indicators (Report reference: paragraph 17). It is important that the framework then be applied as intended.
Implementation governance arrangements were clear and appropriate. The Australian Digital Health Agency (ADHA) established clear governance structures for the My Health Record expansion program, including a dedicated program board and program delivery committee. Progress reports were provided to the majority of ADHA Board meetings. (Report reference: p. 24)
Develop appropriate performance measures
The Commonwealth performance framework requires entities to publish performance information that outlines how the entity's performance will be measured, and the results of that measurement. This allows an assessment of an entity's progress towards achieving the purposes and the objectives set by government for the funds received.
Performance measures should be established and reported against to assess the effectiveness of the government programs and policies. Data collection systems established from the outset and adapted during the lifecycle of a program can assist to both review overall program performance and improve service delivery.
Establishing an appropriate performance framework up front will position an entity to assess the extent to which it is achieving the intended purpose. Purpose statements allow an entity to describe how it will achieve the intended purpose — including both who will benefit and how — and should draw on an entity's objectives and functions. Performance measures should be clearly aligned with the entity's purpose statement and be relevant, reliable and complete to enable an assessment of overall progress against purpose.
Better performance is likely to result when the purposes of an entity and its programs are clear and senior leaders organise resources and activities to meet those purposes. Quality performance information assists an entity's leadership to determine whether it is delivering or not and, if not, guides them on possible solutions.
Early measurement of indicators of failure, or of slower than desired progress towards objectives, can help the entity identify ways to reduce the risk of failing to deliver.

Appropriate performance measures are:
- relevant — they clearly indicate who benefits, and how they benefit from the entity's activities; they address a significant aspect/s of the entity's purpose; they provide sufficient information to inform the reader about achievements against objectives; and they are clear and concise;
- reliable — they use and disclose information sources and methodologies that are fit-for-purpose (including a basis or baseline for measurement or assessment, for example, a target or benchmark); and they are free from bias; and
- complete — they provide a balanced examination of the overall performance story, and collectively address the entity's purpose.
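These criteria lend themselves to a simple checklist that planners or audit committees can apply measure by measure. The sketch below is a purely illustrative way of recording the result, assuming hypothetical field names; it is not a form prescribed by the PGPA Rule or Finance guidance.

```python
from dataclasses import dataclass

@dataclass
class MeasureChecklist:
    """Records one performance measure against the appropriateness criteria."""
    name: str
    identifies_beneficiaries: bool   # relevant: who benefits, and how
    addresses_purpose: bool          # relevant: significant aspect of purpose
    methodology_disclosed: bool      # reliable: fit-for-purpose source/method
    has_baseline_or_target: bool     # reliable: basis for assessment
    free_from_bias: bool             # reliable: unbiased measurement

    def gaps(self) -> list[str]:
        """Return the criteria this measure does not yet satisfy."""
        criteria = {
            "identifies who benefits and how": self.identifies_beneficiaries,
            "addresses a significant aspect of purpose": self.addresses_purpose,
            "discloses sources and methodology": self.methodology_disclosed,
            "sets a baseline, target or benchmark": self.has_baseline_or_target,
            "is free from bias": self.free_from_bias,
        }
        return [label for label, met in criteria.items() if not met]

# Hypothetical measure with one gap: no target has been published yet
measure = MeasureChecklist(
    name="Proportion of claims processed within 30 days",
    identifies_beneficiaries=True,
    addresses_purpose=True,
    methodology_disclosed=True,
    has_baseline_or_target=False,
    free_from_bias=True,
)
print(measure.gaps())  # ['sets a baseline, target or benchmark']
```

Completeness, the third criterion, applies to the set of measures collectively rather than to any single measure, so it would be assessed across the whole set rather than row by row.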
Following the review of the PGPA Act, the PGPA Rule amendment was published in March 2020 and the Department of Finance republished Resource Management Guide (RMG) 131 Developing good performance information in May 2020. This Guide states that:
Effective performance measurement enables entities to:
- measure and assess their progress toward achieving their purposes;
- identify what policies and activities work, and why they work, in order to inform policy development;
- drive desired changes in the efficiency and effectiveness of services;
- demonstrate whether the use of public resources is making a difference and delivering on government objectives;
- make decisions about how best to deploy their resources to achieve competing priorities; and
- identify, report, demonstrate and promote their achievements and explain any variance from expectations or reference points.
RMG 131 adopts the same principles as outlined in this insights publication, requiring that performance measures be reliable and verifiable; free from bias in how they measure and assess performance; include both quantitative and qualitative measures; and include measures of efficiency, effectiveness and outputs. It also states that there should be a way to assess these over time.
Audit examples
Combining quantitative data (survey data) with qualitative data (case studies) can enrich the performance story. However, the National Archives of Australia has not defined the methodology that will be used to collect information and determine how case studies will be selected for inclusion in the annual report, to ensure that the selected case studies are free from bias (Report reference: p. 37, paragraph 3.10). It was recommended that the National Archives capture consistent performance information to enable accurate analysis of entities' performance in implementing targets over the life of the policy. (Report reference: p. 40, paragraph 3.23)
The completeness of performance criteria requires improvement by all entities through developing measures of efficiency, and demonstrating an entity's intended progress across the life of the corporate plan and beyond. (Report reference: p. 11, paragraph 23)
Finance guidance outlines the critical considerations for developing good performance information including using an understanding of an entity's purposes to identify a set of measures that demonstrate the extent to which those purposes and activities are being delivered efficiently and effectively. As it is rare for a single measure to be able to adequately determine the effectiveness of an activity, Finance guidance advises that good performance information will draw on multiple sources and the quality of performance information should be emphasised over quantity. The guidance recommends a small set of measures that is sufficiently comprehensive to cover those factors that affect an entity's performance. (Report reference: p. 51, paragraph 3.65)
Monitor and track entity performance
Successful implementation is underpinned by effective monitoring, review and evaluation processes. Effective monitoring collects timely and relevant information that allows progress towards outcomes to be tracked and adjustments made as necessary. Progress should be tracked in a deliberate and systematic manner during implementation. Implementation planning must define the data to be collected and relied on for performance measures and targets, and the method used for monitoring. Relevant stakeholders need to be engaged for monitoring, review and evaluation activities to be successful and fully inform performance outcomes.
Establishing appropriate mechanisms for monitoring and evaluating the performance of a policy initiative is an important aspect of implementation planning.
Entities should ensure that performance monitoring and reporting arrangements meet the needs of stakeholders, and that measured results are used to inform planning and budgeting activities, support accurate assessments of risk, and provide assurance that service delivery is effective.
Regular reporting to key internal stakeholders on progress in achieving the performance measures, and other commitments outlined in an entity's corporate plan, supports good decision-making and indicates adherence to the corporate plan.
Audit examples
Based on a review of the agency's internal documentation, and the meeting papers and minutes of oversight bodies, the ANAO found that the annual data specification process has been conducted consistently since 2009, with regular oversight from the Performance Information Management Group and Steering Committee. It is good practice to publish specifications for performance indicators at the level of detail demonstrated for the Closing the Gap indicators, and this process should be retained for the refreshed Closing the Gap framework. (Report reference: p. 36)
The Director requires performance monitoring plans to be established for each park. These plans are intended to provide evidence about the Director's performance in managing each park and enable adaptive management and decision-making about park activities and investments. (Report reference: p. 32, paragraph 2.43). To be effective, the plans must be implemented and performance reported on as intended.
Effective monitoring requires an approach that accurately tracks progress and records the actions of the business area or individual responsible for implementation, so that those accountable within the entity have a clear line of sight to the implementation of agreed recommendations.
Some benefits measurement activities are underway, but they are not yet organised in a research delivery and evaluation plan setting out milestones, timeframes and sequencing of activities over forward years. (Report reference: p. 8, paragraph 9)
There are appropriate mechanisms to improve the quality of information entered into the system, such as: procedures to detect and correct administrative data errors; processes to promote consistency in how information is entered into the system; and data quality education and training activities. Work to monitor and improve data quality will need to continue as use of the system increases, especially if different types of users, who may not have accessed awareness and education activities, increase their participation over the coming years (such as medical specialists, allied health and aged care providers). (Report reference: p. 9, paragraph 20)
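Mechanisms of the kind described in this example can be as simple as automated consistency rules run over incoming records. The sketch below is illustrative only; the field names and rules are hypothetical and are not drawn from the audited system.

```python
import datetime

VALID_OUTCOMES = {"completed", "referred", "withdrawn"}

def check_record(record: dict) -> list[str]:
    """Apply simple consistency rules to one administrative record,
    returning a description of each error detected."""
    errors = []
    if not record.get("client_id"):
        errors.append("missing client identifier")
    service_date = record.get("service_date")
    if service_date and service_date > datetime.date.today():
        errors.append("service date is in the future")
    if record.get("outcome") not in VALID_OUTCOMES:
        errors.append(f"unrecognised outcome code: {record.get('outcome')!r}")
    return errors

# A hypothetical record containing two data-entry errors
record = {"client_id": "A123", "service_date": datetime.date(2031, 1, 1), "outcome": "done"}
for problem in check_record(record):
    print(problem)
```

Rules like these detect errors at the point of entry; the education and training activities mentioned above address the complementary goal of preventing them, particularly as new types of users begin entering data.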
Implement program changes
Performance monitoring and evaluation should take place across the lifecycle of a program, from initial design, through to implementation and ongoing delivery, and at the end of the program's life.
Continual review and amendment of implementation strategies helps entities ensure that programs target the activities of greatest impact, and that desired objectives remain on track to be achieved, as circumstances change over time.
Audit examples
As part of its interviews with Indigenous Affairs Group (IAG) program managers, the ANAO sought to determine what processes were in place to ensure evaluation findings were used to support decision-making and improvements in service delivery. Although there were no formal processes in place (outside of the management response process discussed at paragraph 4.37), IAG program managers provided four instances where evaluation findings had been used to inform program and policy development:
- guiding reforms to the Community Development Program;
- developing training for staff delivering the Remote School Attendance Strategy program (see Case Study 1);
- refining the Indigenous Procurement Policy; and
- facilitating improvements in collaboration and training in the delivery of the Army Aboriginal Community Assistance Program. (Report reference: paragraph 4.43)
Continuous improvements are being made to the delivery of the Humanitarian Settlement Program (HSP), with further work required. A number of benefits were intended to be gained in moving from the previous programs to the HSP. These were largely reflected in the HSP contracts, but have only been partially achieved: for example, the shift to outcomes-focused contracting, improved data and a new IT system have each been only partly realised. Other improvements to the HSP have been identified through a number of reports and reviews. Flowing from this, Home Affairs prepared a high-level schedule of seven elements to improve the HSP, including finalising and implementing the recommendations of the 18-month review of the HSP. (Report reference: p. 9)
While lessons from ARENA's evaluations were mostly actioned appropriately, there is scope for ARENA to strengthen its arrangements for closing recommendations. (Report reference: p. 9).
The Australian Renewable Energy Agency's grant management framework recognises that there are circumstances where it is appropriate to vary a grant. Effective variations management ensures that varied grants continue to deliver value for money, achieve program outcomes and do not create probity issues. (Report reference: p. 43, paragraph 3.22).
Appendix
Auditor-General reports which have examined performance measurement and monitoring
Auditor-General reports that have identified key learnings relating to performance measurement and monitoring have included:
- Auditor-General Report No.17 2018–19 Implementation of the annual performance statements requirements 2017–18
- Auditor-General Report No.21 2018–19 Cape Class Patrol Boat – In Service Arrangements
- Auditor-General Report No.27 2018–19 Closing the Gap
- Auditor-General Report No.32 2018–19 Funding Models for Threatened Species Management
- Auditor-General Report No.45 2018–19 Coordination and Targeting of Domestic Violence Funding and Actions
- Auditor-General Report No.47 2018–19 Evaluating Aboriginal and Torres Strait Islander Programs
- Auditor-General Report No.49 2018–19 Management of Commonwealth National Parks
- Auditor-General Report No.3 2019–20 Defence’s Quarterly Performance Report on acquisition and sustainment
- Auditor-General Report No.9 2019–20 National Ice Action Strategy Rollout
- Auditor-General Report No.11 2019–20 Implementation of the Digital Continuity 2020 Policy
- Auditor-General Report No.13 2019–20 Implementation of the My Health Record System
- Auditor-General Report No.17 2019–20 Delivery of the Humanitarian Settlement Program
- Auditor-General Report No.35 2019–20 Grant Program Management by the Australian Renewable Energy Agency