GCS generative AI policy

Continuing Professional Development (CPD) points: 2

The Government Communications Service (GCS) is committed to embracing responsible use of generative AI technology to deliver effective, engaging and increasingly relevant communications to the public.

Through the responsible use of generative AI and commitment to our public service values, we aim to set the standard for excellence in government communication in the AI age, and inspire trust and confidence. This policy sets out clear principles for all GCS members to follow in their use of AI within their organisations.

Our aim is to seize the benefits of the revolution in generative AI, and ensure all of Government Communications can responsibly harness this exciting new technology for the benefit of the public.


Definition of generative AI technology

Generative AI is a specialised form of AI that can interpret and generate high-quality outputs, including text and images, opening up opportunities for organisations such as delivering efficiency savings or developing new language capabilities.

  • Artificial Intelligence (AI): computer systems able to perform tasks usually requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
  • Generative AI is a type of AI system capable of generating text, images, or other media in response to prompts.

Our principles for responsible adoption of generative AI

Government communications will: 

  • Provide training, through the central GCS team at the Cabinet Office, on the responsible use of generative AI to all government communicators, in particular around ensuring accuracy and inclusivity and mitigating biases. For example, this could be biases against the race, religion, gender or age of an individual or group. 
  • Require that all our contracted and framework suppliers adhere to this GCS policy on the responsible use of generative AI, and have safeguards in place to ensure this. Ultimately, our contracted and framework suppliers remain responsible for their use of the technology. 
  • Uphold factuality in our use of generative AI. This means not creating content that misleads, in addition to not altering or changing existing digital content in a way that removes or replaces its original meaning or messaging.
  • Engage with appropriate government organisations, strategic suppliers, technology providers, and civil society, around significant developments in generative AI, and the implications for its use in government communications.
  • Continue to review reputable research into the public’s attitudes towards generative AI and consider how this policy should evolve in response.

Government Communications may use generative AI where it can drive increasingly effective and engaging government communications, for the public good, in an ethical manner.

For example:

  • Use generative AI to develop communications content. For example:
    • Use generative AI to assess and tailor communications to make them increasingly inclusive, accessible, helpful and relevant. For example, this could include generating automatic subtitles and translations. 
    • Generate first draft text, visuals, or audio, for social media posts or website content, in order to better reach our audiences.
    • Inspire the creative and design process, supporting rapid ideation for designers and content creators. 
    • Adapt existing visual content, such as resizing the aspect ratio of an existing image in order to fit different digital formats or screen sizes.
    • Enhance the quality and fidelity of existing audio or video content.
  • Use generative AI to quickly apply best practice from industry, and GCS standards and frameworks, to our work.
  • Use generative AI to explore problems or topics. For example through:
    • Offering diverse perspectives and opening up thinking on a topic.
    • Providing critical analysis of a topic or proposed approach.
    • Identifying previously unconsidered risks and threats associated with a topic.
    • Supporting qualitative and quantitative research and surveys at greater scale, potentially through the use of conversational AI tools to encourage more detailed survey responses.

Government communications aims to build trust in our approach by acting transparently.

Therefore we will:

  • Secure written consent from a human before their likeness is replicated using AI for the purposes of delivering government communications. In the limited number of cases we currently expect, a record of this will be made available to the interested public via an official channel, for example, listed on the GCS website. This is to ensure that legitimate government communications can be distinguished from deepfakes or other mis/dis-information. Our aim is to mitigate any unintended consequences that may come from greater use of AI avatars in government communications. 
  • Clearly notify the public when they are interacting with a conversational AI service rather than a human. This will include explaining to what extent, and for what purposes, an individual’s interactions may be logged or used, for example, using anonymised data on interactions to improve the quality of the service. 
  • Publish a log of changes to this policy to the GCS website. Generative AI is a fast-developing field, and our approach will evolve and adapt to keep in line with emerging technologies, risks, thought leadership, and official government guidance. 

Government communications will not:

  • Use generative AI to deliver communications to the public without human oversight, which is needed to uphold accuracy and inclusivity and to mitigate biases. Human oversight will form part of either:
    • The production and review stages for content that will remain static. For example this could include press releases, printed posters, and direct mail. In this scenario, human oversight includes the GCS member(s) creating the content, and the GCS lead responsible for the communications activity.
    • The production, testing and evaluation of dynamically generated or interactive communications before they go live. For example this could include chatbots, live conversational services, and services that dynamically generate digital advertising content. In this scenario, human oversight includes the technical team designing and developing the interactive communications, and the GCS lead responsible for the communications activity.
  • Share any private, protected, or sensitive information with third-party AI providers without having appropriate data sharing and security agreements in place.

Annex A

Examples of guidance relevant to the use of generative AI (at the time of writing):

  • Official guidance on using artificial intelligence in the public sector, Generative AI framework for HM Government, and Introduction to AI Assurance. These outline best practice for the public sector to consider in the following core areas:
    • Importance of using AI to meet user needs
    • Ensuring use of AI is compliant with data protection laws
    • Assessing if AI is the right solution
    • Importance of governance for AI projects
    • Planning and preparing for AI systems implementation
    • Understanding AI ethics and safety
  • Official guidance on Ethics, Transparency and Accountability Framework for Automated Decision-Making. This seven-point framework supports safe, sustainable and ethical use of automated or algorithmic decision-making systems through:
    • Testing to avoid any unintended outcomes or consequences
    • Delivering fair services for all of our users and citizens
    • Being clear who is responsible
    • Handling data safely and protecting citizens’ interests
    • Helping users and citizens understand how it impacts them
    • Ensuring that you are compliant with the law
    • Building something that is future proof
  • The Central Digital & Data Office (CDDO) Data Ethics Framework for government. The Framework guides appropriate and responsible data use in government and the wider public sector. It helps public servants understand ethical considerations, address these within their projects, and encourages responsible innovation. It has three overarching principles, and outlines specific actions that should be taken for each:
    • Transparency
    • Accountability
    • Fairness
  • The latest guidance from the Information Commissioner’s Office (ICO) on automated decision-making and profiling. The guidance outlines the considerations of the UK GDPR in this context, including:
    • Emphasising the principle of transparency, encouraging organisations to provide clear and understandable information about automated decision-making processes and the logic behind profiling activities.
    • Underscoring the importance of ensuring fairness in automated decisions and profiling, urging organisations to identify and address any biases or discrimination that may arise. It also emphasises accountability, requiring organisations to take responsibility for the impact of their automated systems.
    • Advising organisations to practise data minimisation, collecting only the necessary information for automated decisions or profiling. Additionally, the guidance suggests conducting Privacy Impact Assessments (PIAs) to evaluate and mitigate potential risks associated with automated processes, ensuring compliance with data protection regulations.
  • The Algorithmic Transparency Recording Standard (ATRS) as laid out by the Algorithmic Transparency Recording Standard Hub
  • The Incorporated Society of British Advertisers (ISBA) Advertising Industry Principles for the use of generative AI in creative advertising.
  • Furthermore, government communications will follow and adhere to developments in international copyright law as they emerge.