Test and Learn Guidelines for Media Buying

The Test and Learn Guidelines for Media Buying were developed by the Government Communication Service (GCS) Media and Marketing Data Team and OmniGOV.

These guidelines are aimed at all communicators and provide practical guidance and knowledge on how to incorporate test and learn approaches into campaign planning to increase the effectiveness of government communications.

Foreword by Simon Baugh, Chief Executive Officer of Government Communications

The role of the Government Communications Service has never been more integral to supporting the safety, health, wellbeing and prosperity of our fellow citizens.

The fast pace of technological innovation is profoundly changing our world and opening up new opportunities for how we can deliver world-leading public communications. This rapid evolution within communications and marketing calls on us to modernise our skills and strengthen our collaboration across government, so we can share new ideas and grow our capabilities.

Continuous improvement and driving value for money are at the very heart of our professional excellence. As we embrace new opportunities and challenges, adopting increasingly innovative approaches while learning from each other will propel us along this journey together.

Knowing what works well and improving our effectiveness in delivering outcomes are essential ingredients of a modern communications campaign. There are countless examples of inspirational work like this being delivered by teams across GCS, where data-driven insights and experimentation have driven better outcomes.

The GCS Test & Learn Guidelines bring together the most innovative and inspiring work currently happening right across government communications, together with best practices from across industry and scientific rigour, to explain how you can adopt experimentation approaches to support delivering increasingly efficient campaign outcomes.

These guidelines are provided to communicators across government to use as a common framework, helping you incorporate best practice Test & Learn approaches into your planning, and increase the effectiveness and impact of your paid-for campaigns. You will learn how to implement increasingly rapid cycles of listening, measuring, evaluating, and quickly adapting to what works best.

When I began my role as Chief Executive Officer for Government Communication, I set out my responsibility to ensure we are ready for the future. I believe the skills and expertise of the people who make up GCS will be the ultimate determinant of our success. Embracing innovative approaches such as those outlined in these guidelines will help us grow and harness our data and insight skills, supporting us in creating a truly modern and effective government communication service together.

1. Introduction

Purpose

As the Government and Civil Service embark on a journey of modernisation, the guiding vision is centred on improving skills, fostering innovation and creating closer collaboration for the delivery of excellent public services.

The COVID-19 pandemic required an unprecedented speed of delivery to protect the lives and livelihoods of the UK public, highlighting the importance of adapting quickly and the urgent need to improve our use of data and technology. The Declaration on Government Reform (June 2021) set out that “We will champion innovation and harness science, engineering and technology to improve policy and services”, putting data at the heart of decision making in Government.

The Government Communication Plan 21/22 reflects this approach and outlines a more united, modern profession running fewer, bigger and better campaigns. This calls upon us to explore new, innovative ways to solve problems, and seize the full potential of data and technology to deliver public communications in increasingly effective ways.

To meet this challenge we need to continually collaborate across government and with agency partners to implement test and learn approaches, in order to better understand how we can optimise campaign delivery. Importantly, these learnings need to be shared across government to create greater impact, enabling us as a profession to take a systematic approach to improving the effectiveness of government communication.

This document sets out test and learn guidelines for campaign teams, designed to increase the effectiveness of campaign delivery and marketing within and across campaign themes.

What is test and learn

Test and learn is a set of practices based on the scientific method used to build an evidence base for improvements and optimisation, and can be applied across a wide range of problems and disciplines.

For the purposes of this guide, it is important to first establish a shared understanding of test and learn within a marketing context:

“Experiments (test and learn): These are deliberate and granular, based on setting two or more options against one another and testing their effects based on isolating key factors like stores, regions, audiences, platforms or media channels. They are typically applied to measure short-term marketing impact only”

Source: Institute of Practitioners in Advertising, Effectiveness Advanced Certificate definition, 2019

Test and learn methodology does not include other related quantitative methodologies such as econometrics, social listening or correlational analysis – however, these can prove useful for generating ideas and questions for subsequent test and learn experiments (see Section 3).

Improving effectiveness and efficiency of media buying with test and learn

Test and learn experimentation is a cornerstone of effective media buying, and HMG’s media buying agency OmniGOV have routinely applied test and learn strategies to government campaigns throughout the framework, with 31 live in Q2 of 2021.

To support increased collaboration and greater learnings within and across communication themes, GCS and OmniGOV have worked together to produce these Test and Learn Guidelines for Media Buying. This document provides practical guidance and knowledge on best practice considerations and evaluation methods for increased adoption across GCS, as well as outlining how learnings can be accessed and made available in a systematic way to better support knowledge sharing.

Contacts for support and further information

Government Communications Service

For further information or support in implementing test and learn for your campaign, please get in touch with the Media and Marketing Data team in GCS at data.gcs@cabinetoffice.gov.uk.

OmniGOV

For further support activating test and learns for your campaigns please contact OmniGOV’s implementational planning team at omnigovimplementationalplanning@manninggottliebomd.com.


2. Test and learns for HMG campaigns

Test and learns available to GCS

Campaign performance is routinely optimised by OmniGOV at a channel and campaign level to ensure it is both effective and efficient. Depending on the channel and media-buying method, optimisations can be automated or done manually, with campaign teams informed of any decisions made during regular status meetings while the campaign is live.

Experiments, or test and learns, can also be conducted alongside standard in-campaign delivery. They provide all stakeholders with a unique opportunity to systematically learn more about a particular element of a campaign’s performance in a controlled and statistical manner or test a new-to-market innovation from an advertising platform.

The following are some of the typical approaches to test and learn applied across GCS:

  1. Split testing, which compares the performance of two or more variants of the same creative or message to determine which one performs better.
  2. In-channel optimisation, which gives insights into the most effective routes to optimising ad performance and conversions.
  3. Regional or audience segmentation techniques, which can be used to target messaging to a specific demographic audience or geographic location to maximise the effectiveness of a campaign.

The first two approaches, split testing and in-channel optimisation, are commonly applied to government campaigns, and various case studies of each approach are provided throughout this document.

Campaign managers are encouraged to work with agency partners to identify and incorporate useful and appropriate test and learns within campaign plans, and also to reference these as part of any professional assurance process within GCS.

Criteria for conducting test and learns

When considering incorporating a test and learn as part of a campaign, we recommend that campaign managers are able to answer ‘yes’ to all of the following questions. If this is not the case, it is worth considering whether a test and learn is appropriate, or if there are other methods available to generate the required insights.

  • Does the test and learn have the potential to deepen understanding in one of the following areas: processes, technology, audience, analytics or behavioural insights?
  • Is there the potential for learnings to be applied to other campaigns within the respective organisation or across government?
  • Does the planned media spend on the test and learn meet the industry recommended minimum of £10k (excluding free tests) and not exceed a recommended maximum of 10% of the total budget? This will depend on the test, channel and duration of the campaign (see the sketch after this list).
  • Will statistical methods be applied to measure the results of the test?
  • If there are multiple tests scheduled for the campaign, are they sufficiently independent so they do not overlap, impact or influence the results of each test?
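
As an illustration of the spend criterion above, the following minimal Python sketch encodes the recommended £10k minimum and 10% maximum budget share. The function name and structure are hypothetical; the thresholds come from the criteria above and should be adapted to the specific test, channel and campaign duration.

```python
def meets_spend_criteria(test_spend: float, total_budget: float,
                         is_free_test: bool = False) -> bool:
    """Check a planned test and learn spend against the recommended thresholds."""
    MIN_TEST_SPEND = 10_000   # industry recommended minimum (£)
    MAX_BUDGET_SHARE = 0.10   # recommended maximum share of the total budget

    within_cap = test_spend <= MAX_BUDGET_SHARE * total_budget
    if is_free_test:          # free tests are excluded from the minimum spend
        return within_cap
    return test_spend >= MIN_TEST_SPEND and within_cap

# Example: a £15k test on a £200k campaign meets both recommendations.
assert meets_spend_criteria(15_000, 200_000)
# Example: a £15k test on a £100k campaign exceeds the 10% cap.
assert not meets_spend_criteria(15_000, 100_000)
```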

Please see the ‘Contacts for support and further information’ section if you have questions or would like support in considering and developing test and learns for your campaigns.


3. Incorporating test and learn into campaign planning

Process, roles and timeframes for incorporating test and learn

Throughout this section, we detail the three key phases for incorporating test and learn into campaign planning. Below is an outline of these phases, along with the stakeholders that should be involved and the length of each phase:

  1. Establishing what to test: identifying opportunities for improvement and/or innovation should typically take 1 to 3 weeks and involve all stakeholders (campaign managers, planning, media activation and creative agencies).
  2. Building a robust hypothesis: defining the questions will take 1 to 2 weeks and require the involvement of the campaign managers, planning and media activation agencies.
  3. Developing a test and learn plan: along with setting key performance indicators (KPIs), this phase should take 2 to 4 weeks, with the campaign managers, planning and media activation agencies involved.

During the initial phase, it is important for campaign managers to consult with agencies and GCS colleagues on existing research and historic knowledge (for example through case studies, industry best practice, online material, etc) to help refine the question that they want answered.

Phase 1: Establishing what to test

When developing the question around which to conduct a test and learn, campaign managers are encouraged to consider emerging opportunities in the market to inform future activity. This can include a new or existing media channel where there are currently gaps in the understanding of its performance. Test and learn approaches can also be employed to pilot creative or messaging before a large launch (for example, to assess how different variants resonate with the audience and drive intent).

It is crucial to ensure that the test and learn is rooted in the wider communication and policy objectives. It is also important to ensure that the results will provide actionable insights that can be shared with other government campaigns to help deliver better outcomes. Therefore, collaboration between the campaign team, respective planning and creative agencies, and OmniGOV is strongly encouraged from the beginning.

Clearly articulating the question and developing a robust hypothesis is essential to ensure that incorporating a test and learn into campaign planning will lead to better outcomes. However, it is important to note that there are instances where experiments can have a negative impact on particular campaign outtakes or outcomes. This potential risk to wider campaign expectations and results should be taken into account at the planning stage and routinely monitored throughout its execution.

To mitigate against such risks, the hypothesis must be measurable and limited to short- to medium-term actions (i.e. within 6 months) that are expected to change significantly as a result of the experiment. Data required for specific test and learn KPIs would ideally be available in real time (hourly); data reported less frequently than weekly should not be used when evaluating results.

Available resources to assist in identifying areas for testing

Campaign managers may find it helpful to review existing insight resources and tools within their organisation, across GCS or from respective agencies. The following are some examples which have supported the development of previous test and learns.

GCS Benchmark Database

The GCS Benchmark Database is a library of media intelligence that enables the creation of benchmarks that support campaign planning and objective setting, and gives sight of how paid-for elements of a campaign’s performance compare against previous government campaigns. As of November 2021, the database contains over 390 PASS approved cross-government campaigns since FY17/18.

Campaign managers can use the insights and tools from the database to benchmark their performance against historical activity and to build a hypothesis about what has worked for previous campaigns.

For further information relating to the GCS Benchmark Database, campaign managers and planning agencies should consult with their respective OmniGOV Effectiveness Leads or the Media and Marketing Data team in the Cabinet Office.

Case Study: GCS Benchmark Database used to inform test and learns for Royal Navy

The Royal Navy utilises video channels, in particular TV, to drive expressions of interest (EOIs), as shown by econometric analysis. However, given the shifting TV marketplace for the standard Youth audience (16 to 24), the team wanted to understand whether the current TV channel mix was optimal.

OmniGOV used the GCS Benchmark Database and share of voice analysis to compare TV media spend with other recruitment campaign benchmarks. The database provided test and learn ideas to diversify the video budget and meet changing audience habits.

The analysis showed that the video investment was higher than other recruitment campaigns, and that others use Cinema, YouTube and other video platforms to generate a bigger impact for this audience. Two new hypotheses were developed for the following campaign:

1. Disinvesting a proportion of TV spend in favour of Cinema will drive more EOIs.
2. Redistributing some TV investment across BVOD, YouTube and Cinema will drive more EOIs.

Marketing Mix Modelling

Marketing Mix Modelling (MMM) or econometric analysis takes a more top-down view and provides a historical lookback (typically 2 to 3 years) at a macro-econometric level. MMM uses regression techniques to quantify the incremental impact that media tactics have on a particular KPI. For example, MMM may indicate that the incrementality of digital display has been increasing over time, leading to a proposed test and learn to determine exactly what has been driving this change. Econometric analysis also informed the Royal Navy case study above.
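
For readers unfamiliar with the regression idea behind MMM, the following is a minimal, hypothetical sketch using ordinary least squares on illustrative weekly data. Real econometric models are far richer, accounting for adstock, saturation, seasonality and external factors; all figures and variable names here are invented for illustration.

```python
import pandas as pd
import statsmodels.api as sm

# Illustrative weekly data: media spend (£k) alongside an outcome KPI.
df = pd.DataFrame({
    "tv_spend":      [50, 60, 55, 70, 65, 80, 75, 90, 85, 95, 100, 110],
    "display_spend": [10, 12, 15, 14, 18, 20, 22, 25, 24, 28, 30, 32],
    "conversions":   [400, 450, 460, 520, 510, 590, 580, 660, 640, 700, 730, 780],
})

# Regress the KPI on media spend; coefficients estimate the incremental
# conversions associated with each additional £k of channel spend.
X = sm.add_constant(df[["tv_spend", "display_spend"]])
model = sm.OLS(df["conversions"], X).fit()
print(model.params)
```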

The application of MMM as an evaluation technique is campaign dependent, usually for established campaigns with a high media spend, as the analysis can be both costly and take up to two months to conduct. However, its use is expected to become more prevalent as a component in the development and use of ROI (Return on Investment) methodology.

Social Listening

Tracking volumes and online conversations around certain topics, or engagement with specific aspects of the media campaign, can also be useful to help build a hypothesis.

OmniGOV works closely with Talkwalker, a social listening company that tracks the global coverage (predominantly on Twitter) of paid, owned and earned media in 187 languages. The tool analyses text and content together with linguistic data modelling to process trends. Some organisations across GCS will have access to similar tools, such as Brandwatch, to monitor interactions with emerging news stories.

It is important to note that there is a growing recognition that social media listening should not be used as the sole proxy for broad sentiment around a topic. This is because popular tools can only analyse data from open platforms like Twitter, and they cannot access conversations that take place in ‘walled garden’ platforms like Facebook or Snapchat. Moreover, studies have shown that it is often difficult for algorithms to accurately determine the sentiment and context of online conversations.

Correlational analysis

A data point, or observation, is a set of one or more measurements on a single unit of observation – for example, social mentions and recruitment sign-ups. Reviewing whether there are correlational patterns between two metrics can help develop questions that can then be assessed through robust hypothesis testing.

In order to pursue this, outcomes data and a data source representing the test need to be readily available. The data also needs to be granular enough to find a correlation between the two metrics. It is recommended to use at least weekly (ideally daily) data, to allow for enough data points to uncover a strong association and reflect the time period being analysed. Once correlations have been identified, the next step is to develop specific hypotheses that can test the validity of the analysis.

Please note that correlational analysis on its own is not a robust analysis method and can sometimes give rise to spurious findings. As such, it should always be used as a hypothesis generator for robust test and learns that aim to establish causality.
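
As a hedged illustration of this step, the sketch below computes a Pearson correlation between two hypothetical daily series (social mentions and recruitment sign-ups, echoing the example above). All figures are invented; a strong, significant correlation would only justify a follow-up hypothesis, not a causal claim.

```python
from scipy.stats import pearsonr

# Hypothetical daily data for two metrics over two weeks.
daily_mentions = [120, 340, 290, 410, 380, 520, 150, 610, 480, 300, 260, 450, 390, 510]
daily_signups  = [14, 35, 30, 44, 37, 55, 18, 63, 49, 28, 25, 47, 41, 52]

r, p_value = pearsonr(daily_mentions, daily_signups)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")

# A strong association (e.g. r > 0.7 with p < 0.05) suggests a hypothesis
# worth testing through a controlled experiment; correlation alone does
# not establish that mentions cause sign-ups.
```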

Case Study: Correlation analysis informing a hypothesis for Armed Forces recruitment

Both the Army and the Royal Navy had developed a base assumption that specific calendar events had a significant impact on campaign outcomes, in this case recruitment patterns. OmniGOV worked with both departments to carry out volume analysis, correlating social media mentions relating to events and PR noise with registrations.

OmniGOV found positive and negative correlations between specific events and levels of registration. For example, increased noise around VE day had a positive impact on registrations. These correlations were further validated through econometric analysis and used to identify key events that media could be planned around to boost registration uplifts.

Phase 2: Building a robust hypothesis

Building a robust hypothesis is essential for effective and conclusive insights. A good hypothesis must be testable through experimentation, and there must be a possibility to prove the hypothesis right or wrong through the experimental design.

There are two core questions to consider when building a hypothesis:

What to do – the aspect or element that is proposed to be tested.
What might happen – the resulting impact that conducting this test might have.

The more detailed and specific the hypothesis, the easier it will be to define the test and learn. A good example would be:

What to do – communicating with Pakistani and Bangladeshi audiences using paid media.
What might happen – reduce the identified gap in reach and frequency compared to all adults.

Whereas an unhelpful hypothesis is one that is vague and does not clearly define what the expected outcome will be. Such a hypothesis would look like the following:

What to do – targeting diverse audiences.
What might happen – deliver better outputs.

OmniGOV’s Hypothesis lab is an open opportunity for campaign managers, as well as planning and creative agencies, to submit any test and learn proposals they have for review and co-development by emailing hypothesislab@mgomd.com.

Case Study: Recruitment Marketing Forum hypothesis development

The GCS Recruitment Marketing Forum, comprising all cross-government recruitment marketing campaign teams, identified potential for activation efficiencies to be made through greater join-up of planning activity. Working closely with OmniGOV, both parties built two specific hypotheses to conduct test and learns across key channels:

– Biddable: Any biddable campaigns which run at the same time, on the same platform, with similar buying audiences will have inflated prices.
– TV: Running several recruitment campaigns on TV at the same time will not have an inflationary impact on media costs.

The above highlights best practice when building a hypothesis by addressing the two core questions, and ensuring learnings will have a significant impact for the campaigns and across GCS.

Once a robust hypothesis has been developed, the next step should always be to determine whether it has been addressed by another campaign in the past. Given the vast scale of government campaigns executed by OmniGOV, it is likely that case studies exist to support the hypothesis. Such case studies can be accessed via the OmniACADEMY Portal.

In the event the case studies do not fully address the hypothesis, or there remain gaps in the knowledge to be gained, additional analysis should be conducted or changes made to the developing proposal to ensure the question can be answered. Where there is no historical evidence, the test and learn should be incorporated into campaign execution.

Phase 3: Developing a test and learn plan

Test and learns need to be set up to be realistic and timely.

  • Realistic – will the information be available to answer the hypothesis?
  • Timely – does the timeline of the campaign (including the learnings) align with the expectations?

Understanding the role of the test and learn within the campaign plan should help determine what impact the experiment is expected to demonstrate and how that impact will be captured.

  • What type of impact should be expected?
  • What metrics would represent that?

This should be captured alongside the overall campaign framework.

Case Study: Setting up test and learn for PHE’s Every Mind Matters campaign

Public Health England’s (PHE) Every Mind Matters campaign required OmniGOV to reach multiple different audience segments with individually relevant messages. In order to achieve the desired outcome of driving people to the ‘Your mind matters tool’, a comprehensive testing matrix was developed that set out the different targeting variables to be tested within each digital channel.

Throughout the campaign, Google Affinity data was used to understand more about those interacting with PPC (Pay-per-Click) ads, in order to identify additional higher propensity segments who may need to access information promptly. These groups were added to the targeting matrix to test and learn.

A test and learn plan requires a clear set of variables to test and to control. It is recommended that no more than two variables are selected to be tested and that all other variables are controlled where possible. For example, multiple creatives across different audiences could be tested in a multivariate testing framework (see Section 4) but no additional variables like regionality should be modified in the same experiment.

Below is a list of variables that can be used for either test or control as appropriate:

  • time
  • format
  • creative
  • messaging
  • regionality
  • audience targeting
  • media buy type
  • owned – for example, changes in user journey
  • earned – PR activity

Within any given experiment, there will always be external factors at play that can impact the outcome. These need to be considered when constructing a test and learn, and the plan adapted accordingly to minimise their impact during the test. External variables should be monitored throughout the campaign execution, as well as the potential for further factors that can influence the final results. These could include changes in media consumption, pricing seasonality, policy, unplanned stoppages or extreme weather events.

Setting KPIs for test and learn activity

The KPI for a specific test or experiment may differ from the overall campaign KPIs – it should relate specifically to the testable hypothesis. It is important that the KPI metrics are clear and agreed upon, as well as the method for evaluation, to understand if a test has been successful or not and to ensure learnings can be used or scaled up for the future.

Test KPIs typically link to media output metrics (for example, engagement) or outtake metrics (for example, issue understanding), as tests are conducted at a media channel level. On some occasions, however, where the test is larger in scale, the KPI can relate directly to an outcome action such as EOIs, provided there is sufficient data and tagging in place. Campaign managers should consider the user journey on their sites and where users may be directed to external sites to complete outcome actions, such as the main GOV.UK domain. Managers should then liaise with their organisational digital/Data, Digital and Technology (DDaT) team for guidance on implementing any additional tagging requirements on their campaign website, and check the GCS website for the latest central guidance on implementing privacy-focused web analytics and measurement technologies.

It is important to ensure that any KPI is SMART, meaning it is:

  • Specific and Stretching
  • Measurable
  • Achievable and Agreed
  • Relevant and Results oriented
  • Time-bound

As mentioned in Phase 1 (Establishing what to test), consideration should be given to the impact an experiment may have on the overall campaign KPIs, and to ensuring it can be measured in the short- to medium-term.

Case Study: Setting test and learn KPIs for NHSBT Blood and Plasma Donation

NHS Blood and Transplant’s (NHSBT) Blood and Plasma Donation campaign is focused on maximising the number of donor registrations, particularly amongst under-represented audiences such as Black African men.

Across 2020, OmniGOV worked with NHSBT to run a comprehensive in-channel test and learn programme, focused primarily on digital channels, including social and search. A hyper-localised targeting approach was deployed, with representation in creative overlaid.

Cost per registration was identified as the primary KPI and the closest proxy for registration efficiency. Activity drove a 1.3 to 2.5 times greater efficiency versus demographic targeting alone. To ensure activity could be properly evaluated, the team defined the conversion KPI as a user signing up to be a donor, tracked by implementing a tag on the form completion submit button. As the campaign progressed to Burst 4, donors were also able to register over the phone. To maintain performance tracking, call ads were implemented so that users could sign up by clicking the ads directly on their mobile and going through to the call centre.


4. Test and learn measurement and evaluation

The implementation of any test and learn is conducted primarily by the media activation agency and the timeline is dependent on the particular circumstances (e.g. audience size) of the campaign. During this period, all stakeholders should attend regular status meetings to discuss the measurement and evaluation of results. This provides an immediate feedback loop that allows the activity to be adjusted if the desired outcome is not being achieved with the current approach or within the agreed timelines. The frequency of such meetings is dependent on the specific test and learn implementation and preference among stakeholders.

There are several evaluation methods that can be used depending on the type of experiment, data availability and KPI for each specific test and learn approach. Measurement should be seen as a joint effort between the campaign team and media buying agency, ensuring that a robust evaluation is in place to determine the statistical significance of the results.

Although not an exhaustive list, the list below is a guide to which evaluation methods to consider by experiment type, with further details on each method following.

Test and learn approach – evaluation method:

  • Split testing – A/B testing or multivariate testing (MVT)
  • In-channel optimisation – attribution modelling
  • Regional or audience segmentation testing – uplift modelling

A/B testing

These techniques assess whether two versions of the same variable (like a creative) result in a different impact on a metric. The most common way of determining the outcome of these types of tests is through the use of p-values to determine statistical significance. The p-value is the probability of observing a difference at least as large as the one measured if, in reality, there were no difference between the variants. A result is considered significant, meaning a difference between two variants is accepted, when the p-value is lower than 0.05 (i.e. less than a 5% probability that the observed difference is due to chance).
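
As a minimal sketch of this calculation, the example below applies a two-proportion z-test to hypothetical click counts for two ad variants; the figures are illustrative, and in practice the buying platform or analytics team would typically run the significance test.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: clicks and impressions for variants A and B.
clicks = [1210, 1080]
impressions = [100_000, 100_000]

z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
if p_value < 0.05:
    print(f"Significant difference between variants (p = {p_value:.4f})")
else:
    print(f"No significant difference detected (p = {p_value:.4f})")
```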

Case Study: Home Office A/B testing for the IIOC (Indecent Images of Children) campaign

For the Home Office’s most recent IIOC campaign, OmniGOV worked with Google to pilot a new buying beta – Pay per Active View (PPAV) – that aimed to optimise for more clicks in line with campaign objectives.

An A/B experiment was set up in DV360 (digital buying platform) where two ads were tested against each other, one running on a CPM basis and the other on PPAV. Both ads were otherwise identical, targeting the same audiences, and clicks were measured as the success metric. The PPAV approach drove an additional 1,740 clicks (17,190 total clicks) and was also 37% more cost efficient.

Multivariate testing

Multivariate testing (MVT) is an advanced form of split testing that can analyse more than two variables within an experimentation framework to assess the relative performance of each.

This technique can help gain a deep understanding of audience behaviour and intent. For example, it can help understand which creative approaches are most effective at driving behaviour change, which messaging best motivates target audiences, or which media formats are best at engaging audiences.
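
One common way to evaluate such a test is a chi-squared test of independence across all variants, as in the hypothetical sketch below; the contingency table of clicks versus non-clicks is invented for illustration.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: clicks vs non-clicks per creative variant.
observed = [
    [320, 9_680],   # variant 1
    [410, 9_590],   # variant 2
    [285, 9_715],   # variant 3
    [505, 9_495],   # variant 4
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, p = {p_value:.4f}")
# p < 0.05 suggests at least one variant performs differently; pairwise
# follow-up tests would identify which variants drive the difference.
```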

Case Study: Cabinet Office multivariate testing within social for the Coronavirus Public Health Campaigns

While COVID-19 vaccination uptake had been high across the UK, pockets of hesitancy had been observed amongst young adults. For the Cabinet Office campaign that aimed to increase COVID-19 vaccine uptake among this audience, OmniGOV worked with Topham Guerin to implement multivariate testing in Facebook to assess which motivational territories were most effective at driving intent to vaccinate, and to garner insight as to which creative territories and messaging were most effective at driving engagement amongst the same cohort.

Over three consecutive weeks, creative assets for four different motivational territories (convenience, social proofing, “fear of missing out”, and health) and an additional control asset (belonging to a previous vaccination campaign that targeted all audiences) were deployed to assess the relative performance of each with the 18-29 year old audience.

The results showed that health-driven messaging was the most effective at driving intent amongst younger audiences. Linking the vaccine as a means to being able to go out to bars and clubs (“fear of missing out”) generated engagement, but it drove a negative reaction amongst audiences with many citing a frustration towards the intended COVID-19 certification. The control assets were also among the least effective, suggesting that younger audiences were less motivated by a pre-existing campaign and proposition that had been successfully deployed to older audiences, instead responding more strongly to the nuanced, audience-specific messaging and creative territories in this experiment.

When segmenting the data further by gender and age, intent was relatively consistent across the 18-24 and 25-29 year old cohorts; however, male audiences consistently demonstrated more intent than female audiences in clicking through to the NHS website to book their vaccination.

Attribution modelling

Attribution modelling techniques seek to determine the role that each touchpoint has in generating a conversion. They can help better understand ad performance and optimise across conversion journeys. Typically, attribution modelling is applied across TV and digital channels.

There are many versions of attribution modelling techniques that vary in complexity. Traditional approaches assign credits to unique or multiple touchpoints across the user journey. More recent data-driven models (DDA) provide a more detailed view of the user journey by assigning variable weights to different touchpoints.

Due to this increased complexity, only a relatively small number of touchpoints can be considered (e.g. four points for Google’s DDA models), which may not fully reflect the current paths to conversion for most users. These approaches are further complicated by the increasing difficulty of sharing data across walled gardens (for example, between Google and Facebook) and by the end of third-party cookies, on which current measurement heavily relies. Other ID-based approaches may arise as the industry adapts to the end of third-party cookies, and this area will no doubt see significant innovation and development over the coming years.
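
To make the traditional end of this spectrum concrete, the sketch below implements the simplest multi-touch approach, linear attribution, where each converting journey shares one unit of credit equally across its touchpoints. The journeys are hypothetical, and production attribution systems (including DDA) are substantially more sophisticated.

```python
from collections import defaultdict

# Hypothetical converting user journeys, each a sequence of touchpoints.
journeys = [
    ["search", "display", "social"],
    ["social", "search"],
    ["display", "display", "search"],
]

credit = defaultdict(float)
for journey in journeys:
    share = 1 / len(journey)        # equal credit per touchpoint
    for touchpoint in journey:
        credit[touchpoint] += share

print(dict(credit))  # total conversion credit attributed to each channel
```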

For TV, it is possible to run attribution modelling in partnership with analytics tools such as Adalyser or ADMO. These ingest the delivered TV airtime spot times and the timestamps of actions (in the form of website visits or phone calls). This information is then analysed and each spot is attributed a cost per action. This data can then be filtered by parameters such as station, genre or programme. However, research has shown that much of the effect of advertising does not lead to an immediate or short-term action, so methods that allow for the collection of single-source data (aggregating TV, online and offline) are gaining prominence.

Case Study: Department for Education (DfE) ‘Every lesson shapes a life’ Attribution modelling

AV is a key component of DfE’s media approach to support their long-running Teacher Recruitment campaign. To ensure continual performance improvement for this channel and the wider campaign, OmniGOV sought to incorporate data-driven measurement techniques.

TV attribution modelling was recommended and run as a means of identifying opportunities to optimise in-channel activity. The results of this modelling provided insight into the key programming and times of day which delivered the highest levels of response and in turn the greatest recruitment outcomes.

For instance, audiences were more likely to act early in the week, especially on Tuesdays. Shows with younger-skewing audiences, such as Love Island, were more likely to generate high response rates than shows with wider appeal. On the back of these results, TV phasing was up-weighted accordingly and this type of programming was included within a key strand of campaign activity.

Uplift modelling

Uplift modelling is a predictive modelling technique used to assess the incremental difference in response between a targeted segment and a control group. Campaign teams can use this method to measure the effectiveness a particular activity has on a specific region or demographic group compared to all audiences.
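
The headline calculation behind this technique is the incremental response rate of the exposed group over the control group, as in the hypothetical sketch below. Full uplift models go further, using predictive modelling to estimate uplift for individual segments; all figures here are invented.

```python
# Hypothetical results for an exposed (targeted) group and a matched control.
exposed_responses, exposed_size = 540, 20_000
control_responses, control_size = 380, 20_000

exposed_rate = exposed_responses / exposed_size
control_rate = control_responses / control_size
uplift = exposed_rate - control_rate

print(f"Uplift: {uplift:.2%} incremental response "
      f"({exposed_rate:.2%} exposed vs {control_rate:.2%} control)")
```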


5. Reporting and sharing learnings

The reporting and sharing of learnings from test and learns should routinely be applied across the campaign cycle. Although the capturing of results predominantly lies with the campaign managers and media activation agency, it is important that any knowledge is shared with creative and planning agencies to feed into the next planning cycle.

In-campaign analysis

If key metrics for a test and learn are monitored and reported during the campaign, learnings can often be applied quickly while activity is live. Observed trends can be compared against the expectations set by the hypothesis during the regular status meetings, and if required, amendments to the media flighting, budget or creative can be made at agreed milestones during the campaign. It is important that all stakeholders agree at the very beginning when these optimisations can and should take place, to allow enough time for the test indicators to be robust.

Post-campaign analysis

Where a larger test has been implemented, incorporating multiple channels for example, results are usually reviewed at the post-campaign analysis (PCA) stage. Results will be included in the post-campaign reporting document, typically produced 6 to 8 weeks after the campaign’s conclusion, with learnings shared and considerations linked to the overall strategy for future activity. This improves how learning is codified and shared, and helps to validate future campaign approaches to stakeholders.

Developing case studies

Whilst the details of any test and learn are captured at a campaign level, it is also important to share learnings across government to inform future tests and methods for analysis. This is usually done through case studies, which are typically broader than test and learn examples alone and can cover other aspects such as campaign performance and operational learning.

Case studies with insights from relevant test and learns are regularly shared via the weekly OmniGOV newsletter and the OmniACADEMY portal. If you require login support to the OmniACADEMY portal, email ghutchings@mgomd.com.

As shown throughout these guidelines, sharing as many examples as possible is essential to plan and carry out experiments that increase campaign effectiveness, so if you have any examples of test and learn experiments, please do send them to data.gcs@cabinetoffice.gov.uk.