
Understanding AI Safety in Government (DDN1-A03)

Description

This article explores artificial intelligence (AI) safety and suggests practical actions that can be taken to ensure the responsible use of AI systems.

Published: November 6, 2024
Type: Article
Contributor: Kasia Polanska



Understanding AI Safety in Government

Artificial intelligence (AI) is becoming integral to many government operations worldwide. In Canada, examples of AI applications in government include process enhancements, such as Immigration, Refugees and Citizenship Canada using advanced analytics to triage routine temporary visa applications, as well as scientific implementations, like the use of AI in weather and climate modelling at Environment and Climate Change Canada. Many AI implementations have been in place for years, but the number of new AI initiatives is growing, driven by recent technological advances and the mainstreaming of AI, which have sharpened the focus on its use in government.

In response, the Government of Canada has proactively issued comprehensive policy guidance to ensure the responsible deployment of AI in its operations. Examples include:

  • Guiding principles for the use of AI in government: A framework outlining ethical and responsible practices for integrating AI technologies into government operations to ensure transparency, accountability, and public trust.
  • Guide on the use of generative artificial intelligence: A comprehensive manual detailing best practices, risks and ethical considerations for using generative AI in government services and decision-making processes.
  • Directive on Automated Decision-Making: A policy that sets standards and requirements for the responsible implementation of automated decision-making systems within government agencies.
  • Algorithmic Impact Assessment Tool: An assessment tool designed to evaluate the risks, benefits and potential impacts of using algorithms in government processes to ensure they align with ethical and legal standards.
  • A section covering automated decision-making in the Guideline on Service and Digital, which provides detailed instructions on implementing, overseeing and managing automated decision-making systems in government services.

While existing policies establish a framework for AI implementation and use, they are written as policy direction for government audiences with varying technical expertise and decision-making authority. These documents can be challenging for AI users who are not responsible for procurement or management but who still want to use AI safely.

This article is intended to give government employees a foundational understanding of AI safety and to offer practical steps for ensuring the safe use of AI systems.

Why AI safety?

AI safety refers to strategies and practices designed to ensure that AI systems perform safely, ethically and as intended. The primary goal is to prevent risks and harm to individuals and society while maintaining transparency and accountability in AI operations.

The risks associated with AI are numerous, but not all are relevant in every context. They range from immediate concerns, such as malfunctions, privacy breaches and discrimination, to long-term existential risks, such as the emergence of artificial general intelligence (AGI) or superintelligent systems that could exceed human control and threaten humanity. Although some commentators emphasize these advanced or existential risks of AI deployment, they are not considered immediate. Predictions of AI evolving to a point where it challenges human existence remain speculative and contested.

Rather than focusing on these distant possibilities, it is more pressing to address the tangible risks and harms emerging from government use of AI today. Responding to these present-day threats will ensure that AI serves the public good and operates within ethical and legal boundaries.

AI risks in government deployments

This section identifies the most relevant risks to safety that should be mitigated in government AI deployments. The examples below are illustrative and do not reflect issues with current services. For a basic explanation of how AI works and the potential risks it can pose, please refer to Demystifying Artificial Intelligence (DDN2-A14).

While AI is not a single, uniform technology, certain common characteristics contribute to its risks. Fundamentally, AI systems process data and learn from it to generate actionable outputs. Problems with data quality, input prompts or improper use can lead to unsafe or unintended outcomes. This contrasts with traditional software, which typically returns an error message when encountering an issue, rather than producing flawed results.

  • Malfunctions: Errors can arise from misuse or technical faults within AI systems. For instance, a weather prediction system could process atmospheric data inaccurately, leading to incorrect storm forecasts and significant public safety risks.
  • Overreliance: A tendency to trust AI outputs excessively without sufficient scrutiny can lead to oversight failures and dependency risks. For example, using an AI system to automate the processing of social security claims without sufficient review could lead to wrongful denials.
  • Lack of alignment: Misalignment between AI outputs and governmental policies or societal values could erode public confidence and lead to ineffective or harmful policy implementations. For example, a system allocating educational grants that is not aligned with government policy could disadvantage certain populations.
  • Lack of transparency: AI systems often rely on complex algorithms that may not be fully understood by the public or even the government officials using them. For example, a system designed to determine eligibility for public benefits may generate decisions without clearly explaining the underlying rationale, leaving both citizens and public servants unable to fully understand or challenge the outcome.
  • Discrimination: AI systems can inadvertently encode and perpetuate existing societal biases, leading to discriminatory outcomes. For instance, an AI recruitment tool may develop biases based on the non-diverse historical data of past recruitments, potentially discriminating against women or minority candidates.
  • Invasions of privacy: AI may process personal data in ways that are not anticipated or desired. For instance, an AI system that tracks public health trends could collect detailed personal data without adequate anonymization, resulting in privacy breaches.
  • Copyright infringement: AI's capacity to use extensive datasets can sometimes include copyrighted material without proper authorization, risking legal issues and ethical breaches. For example, an AI tool that digitizes and archives public records could use copyrighted material without authorization, leading to legal challenges.
  • Disinformation: The potential for AI to generate or spread false information can have severe consequences on public opinion and safety. For instance, an AI-powered platform used by a federal agency to disseminate public safety information could be hacked, spreading false information about an emergency.

Safety risks are not hypothetical; there are real-world examples of each of the above risks in AI systems implemented by both public and private sector organizations. These risks can vary significantly across different AI implementations. Without proper governance and responsible risk mitigation during deployment, these risks can be amplified and lead to unintended consequences.

AI safety is not merely a technical issue

The possibility of AI systems causing harm stems from a combination of technical and non-technical issues, presenting a challenge that cannot be addressed by technical solutions alone. Although safeguards against many potential risks are integrated into AI systems during the design, training and testing phases, users still play a crucial role in ensuring AI safety. Additionally, robust safety protocols and governance are essential to strengthening AI deployment within organizations.

Practical actions for users

As a user of an AI system in government, your direct influence on safety protocols or governance might be limited, but there are several actions that you can take to ensure AI safety:

  • Take available training (check below for a list of CSPS learning opportunities) to understand the appropriate use of AI systems. For instance, large language models, though promoted as a general-purpose technology, are well suited to their core, intended task of producing text-based content, but less so to math. Most systems built for and by the government are designed for specific tasks and can handle only a single or limited type of request. Misusing AI systems can cause malfunctions. For example, a chatbot created to provide information about Agriculture and Agri-Food Canada programs (see AgPal) might produce hallucinations (see Definitions below) when asked about government policies.
  • Use AI tools responsibly, adhering to ethical guidelines and legal standards, especially in higher-risk uses and high-stakes environments. Start by referring to the Guiding principles for the use of AI in government and the Guide on the use of generative AI.
  • Apply system updates promptly. Updates can fix known bugs, improve security and enhance functionality, all of which are critical to maintaining system reliability.
  • Be aware of data privacy implications and understand what data an AI tool collects and how it is used. Do not upload private, sensitive or secret data or information to AI tools, or use them to create such content, unless the AI system is specifically designed to handle that data securely.
  • Review and verify the information output generated by AI for accuracy and consistency.
  • Flag inappropriate or erroneous output. Ask whether your organization has safety protocols for flagging output from internally used AI systems. If not, document the issues and share them with your supervisor.

Definitions

  • Hallucinations: AI hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.

Learn more

If you work in government and are new to AI safety, it is important to begin by educating yourself about both AI's potential and its pitfalls. Engaging with experts in AI safety, participating in training programs, and staying updated on the latest research are all critical steps in developing a sound understanding of AI safety issues.

If you are looking to learn more about AI in the Government of Canada, consider the following learning resources:

