Evaluation Framework 2.0

This updated Evaluation Framework is for communicators across the wider public sector to assist in measuring the success of our work and appraising our activities.

Details

The framework provides guidance for major paid-for campaigns and other communication activities.

Contents

  • Introduction
  • Evaluation framework 2.0
  • OASIS for evaluation
  • Consistent metrics
  • Behaviour change
  • Recruitment
  • Awareness
  • All communication activity
  • Example dashboard
  • Calculating return on investment
  • Return on investment: a worked example
  • Measuring and managing reputation
  • Introduction to the government Data Ethics Framework
  • Library of further resources

Introduction

This updated Evaluation Framework is provided to communicators across the wider public sector to assist in measuring the success of our work and appraising our activities.

Evaluation remains a critical function for delivering effective communication activity, and this guide will help colleagues plan campaigns in a way that can be meaningfully evaluated. This will drive improvements across our profession, including our capability to deliver impactful behaviour change and support policy delivery.

Ultimately this is about listening to stakeholders and the public so that we know which messages are landing, and how we can learn from them to make our communication more effective.

This Framework builds on the foundations created by the International Association for the Measurement and Evaluation of Communication (AMEC) and the Evaluation Framework that was a product of the Evaluation Council in 2016.

These have been tailored to reflect our public service role and the latest campaign optimisation principles developed by the Engage programme, which brings data and science to the heart of our communication activity.

The new Framework edition is primarily aimed at paid campaign activity. It adds further guidance on calculating return on investment (ROI), recommends specific metrics for measurement depending on your campaign type, and enhances the guidance on measurement methods. It also introduces guidance on measuring reputation and the ethical use of data.


As all government communicators will know, successful evaluation depends entirely on setting meaningful C-SMART objectives. These are SMART objectives, with an additional C for ‘challenging’.

New guidance is provided on the best practice for setting objectives in an OASIS plan to effectively evaluate communication activity and calculate the benefits.

Running a successful campaign requires clear objectives, underpinned by a theory of behaviour change that explains how communication activity will be effective. The GCS team has a number of publications and guides covering the theory of behaviour change and all other elements of campaign planning.

This guide should not be used in isolation but will assist in effective evaluation from the outset of planning a campaign. All communication activity should be planned with evaluation in mind: measurement enables evaluation, which in turn becomes insight for future activity.


Evaluation framework 2.0

This Framework provides guidance for major paid-for campaigns and other communication activities. The Engage programme has identified three distinct types of funded campaign activity.

Behaviour change

The vast majority of government communication seeks to change behaviours in order to implement government policy or improve society. In the benchmarking categories that sit within the Engage programme, we also distinguish the main types of behaviour change: start, stop and maintain.

This way we can start to learn about which methods, messages and channels are effective for certain types of behaviour change. Raising awareness will nearly always be part of an activity to change behaviour and should also be measured.

Recruitment

Recruitment is a specific form of behaviour change where people are encouraged to start an activity. This is a major concern for government and is vital to maintaining public services and protecting the country.

Government invests a lot of money in recruiting people for important jobs (teachers, armed forces, nurses and so on), so this campaign type has been identified separately because of its scale. It is aimed at major employment campaigns rather than at recruiting people to ‘register’ or ‘take part’.

Awareness

Some campaigns solely seek to raise awareness of an issue or to change people’s attitudes. Raising awareness will almost certainly be an intermediary step for all communication activities, so behaviour change campaigns are also encouraged to measure awareness. The awareness metrics here are mainly suggested for campaigns that seek to change attitudes but not immediately change behaviours.

Evaluation metrics

Each of these campaign types has a set of recommended evaluation metrics.

Consistent use of these metrics will assist campaign planners in choosing appropriate objectives and enable our profession to establish benchmarks for success.

Metrics are divided according to the 4 categories identified by AMEC:

  • inputs: what we put in, our planning and content creation
  • outputs: what is produced, such as audience reach
  • outtakes: subject-oriented stakeholder experiences and communicator-oriented learning about communication practice
  • outcomes: stakeholder behaviour, what the impact of communication and engagement activity is, and whether we achieved the desired organisational impact or policy aim

The most important of these are outcomes: how effective communication activity is in achieving policy aims and delivering organisational impact.

Outtakes are also important for measuring how well communication activity has worked, for example, by assessing the penetration of a campaign message.

This Framework also provides a set of metrics that can be used to assess low-cost and no-cost activity, which can equally be applied to internal communications and stakeholder engagement activity.


OASIS for evaluation

The OASIS campaign planning guide provides government communicators with a framework for preparing and executing effective communication activity. Within OASIS, Objectives and Scoring are especially important for the purpose of evaluation.

Objectives

Objectives should be C-SMART:

  • Challenging
  • Specific
  • Measurable
  • Attainable
  • Relevant
  • Time-bound

For the purpose of evaluation it is also important that objectives contain three elements.

1. Baseline

A numerical prediction of what will be observed if no communication activity takes place. For example, some people would take out a pension even if the government ran no communication activity.

A baseline should be set using the most recent data available, but some subject areas can use data from last year’s campaign, or exceptionally even earlier.

In most cases, we can assume that the outcome with no campaign activity would be the same as the last relevant measurement. It is also important to account for predictable movements in the baseline.

If it is cost-prohibitive to establish a baseline specifically for campaign purposes, planners can use pre-existing publicly available data, research commissioned by policy colleagues, or a proxy measure as a substitute.

2. Change

A numerical forecast of the difference that the campaign activity will make. For example, increasing registrations from a baseline of 80,000 to 100,000 (an increase of 20,000, or 25%).
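The arithmetic behind a change objective is straightforward; the short Python sketch below illustrates it using the hypothetical registrations figures above (the variable names are illustrative and not part of the Framework).

```python
# Illustrative only: deriving a change objective from a baseline and a target,
# using the hypothetical registrations figures from the example above.
baseline = 80_000   # predicted registrations if no campaign activity takes place
target = 100_000    # registrations the campaign aims to achieve

absolute_change = target - baseline              # 20,000 additional registrations
percentage_change = absolute_change / baseline   # 0.25, i.e. a 25% uplift

print(f"Change objective: +{absolute_change:,} registrations "
      f"({percentage_change:.0%} above baseline)")
```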

Changes should be forecast for a defined period of time; for many large behaviour change campaigns this is typically three to six months after the campaign.

Where major changes are targeted over a longer period (e.g. five years) then milestones or intermediary targets should be provided for a period of no longer than one year.

3. Explanation

Campaign planners should use an evidence base as a justification for the level of change that is being targeted.

Typically this will draw on observations from the last time the campaign was run, or on comparisons with other similar campaigns that indicate what level of change could reasonably be expected.

Making assumptions is acceptable as long as they are clearly identified and justified. It is important to signal if the behaviour targeted is a start, stop or maintain behaviour.

It is important to distinguish the effect of the campaign from other influential factors such as seasonality, fashion, public concern and evolving social norms.

There is more specific guidance available on Audience Insight (especially motivations and barriers), Strategy and Implementation in the full OASIS guide.

Scoring (Evaluation)

Evaluation, focusing primarily on outcomes and outtakes, should take place throughout a campaign and will inform dynamic optimisation of active campaigns.

It is recommended that approximately 5 to 10% of total campaign expenditure is allocated to evaluation. In addition to operational data, evaluation costs will often include commissioning research to measure awareness and message penetration levels.

A complete evaluation will include the following aspects.

A comparison of actual outcome data with targets set in objectives. Were the objectives met? If not, what reasons can be offered to explain the variation? If the objectives were surpassed, what has driven that?
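As a minimal sketch of what such a comparison might look like in practice (Python, purely illustrative; the metric, figures and variable names are assumptions rather than part of the Framework):

```python
# Illustrative only: comparing actual outcome data with the targets set in the
# objectives. The metric names and figures are hypothetical.
objectives = {
    # metric: (baseline, target)
    "registrations": (80_000, 100_000),
}
actuals = {"registrations": 94_000}  # observed outcome after the campaign

for metric, (baseline, target) in objectives.items():
    actual = actuals[metric]
    delivered = actual - baseline   # change achieved over the baseline
    planned = target - baseline     # change the objective targeted
    variation = actual - target     # shortfall (-) or surplus (+) against the target
    print(f"{metric}: delivered {delivered:,} of a targeted {planned:,} uplift "
          f"(variation against target: {variation:+,})")
```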

A comparison of outtakes with the targets set in objectives. This will typically include various data sources such as qualitative and/or quantitative research findings.

Considering the causal link between the subject-oriented outtakes and the outcomes. Some campaigns will be more effective than others in converting awareness and attitudinal changes into tangible behavioural outcomes. To what extent could this campaign convert awareness to behaviour change?

Findings for current or future campaign optimisation. Ideally this will include attribution modelling and econometric analysis (scientifically assigning a proportion of ‘cause’ to different elements, messages and channels of a campaign). Even without advanced studies, campaigners can often draw conclusions about which channels have been more or less effective than anticipated. Is there anything that others can learn from your theory of change?

For active campaigns, it is advisable to make small incremental adjustments (a slight up or down weighting of elements, messages or channels) to test theories for improvement. Ideally, changes will be tested in a pilot beforehand, but they can also be tested within live activity by varying message content and delivery channels.

A conclusion on whether or not the campaign was successful in achieving its policy aims. This should also cover what we would do differently next time or for similar future campaigns.


Note: This publication was published in 2018. Updates shown in the footer (July 2022) are about edits to the layout of the page and not updates to the guidance.