
OpenAI's ChatGPT Explained (DDN2-A12)

Description

This article explores OpenAI's ChatGPT system, its capabilities as an AI language model, and its present-day limitations.

Published: March 15, 2023 (Updated: July 9, 2025)
Type: Article
Contributor: Alyea Cyr Connell


[Image: An AI chatbot sitting on a chair, working and chatting on a laptop.]


This article was originally published in March 2023 shortly after ChatGPT was released to the public and was quickly becoming one of the most-visited websites on the internet. It has since been updated with more recent improvements and use cases of ChatGPT in the Government of Canada.

If you don't already use ChatGPT on a regular basis, chances are you've heard of it. What was once a novel concept has quickly become a personal assistant for many, saving time on mundane tasks like writing emails or summarizing complex documents, and tackling more intricate jobs like coding, analysis, and even graphic design. ChatGPT isn't just another chatbot: it's transforming the way we interact with AI. With continuous updates and its leap to GPT-4, it's clear that this tool is redefining how we think about communication and productivity.

Note that ChatGPT is one of many Large Language Model (LLM)-based Generative AI tools; for a broader look at those, check out Using Large Language Models (like ChatGPT) in the Federal Public Service (DDN2-A25).

The basics

ChatGPT, developed by OpenAI, a California-based AI research organization, is an advanced AI-powered chatbot designed to generate human-like conversations, synthesize information, and produce original content. Initially released to the public on November 30, 2022, it quickly gained attention for its impressive capabilities in natural language understanding and generation.

Since it launched, ChatGPT has seen major improvements with updates that have made it more accurate and flexible. OpenAI has also upgraded its technology, moving from the earlier GPT-3.5 model to the more advanced GPT-4, which better understands context and provides more detailed answers.

In addition to its core conversational abilities, ChatGPT can now be used for a variety of tasks including writing code, generating creative content, assisting with academic research, and even helping with complex decision-making. OpenAI also rolled out features like custom instructions in 2023, which allow users to personalize their interactions, making the chatbot even more tailored to individual preferences.

While ChatGPT is still considered a work in progress, it has evolved significantly and continues to change over time. The model has seen improvements in performance, with faster response times and enhanced capabilities as new versions are developed. As AI technology advances, users can expect ongoing updates and increasingly sophisticated features.

Countless benefits, but red flags remain

The rise of advanced language models like ChatGPT has generated both excitement and concern. These AI tools can automate everything from customer service to content creation, offering immense potential to boost productivity. Yet this power comes with challenges and risks, especially concerns about job displacement through automation, dehumanized content, and the improper use of intellectual property. Since these models are trained on internet data, they can inadvertently perpetuate biases or simply echo existing patterns.

AI's role in education is another hot topic. Students are now able to generate essays with ease, making it harder for educators to distinguish between human and machine-written work. As a result, concerns about academic integrity as well as risks of over-reliance and de-skilling have surfaced.

Additionally, bias in AI is a persistent issue. Despite efforts to mitigate it, language models can still produce biased or harmful content because they reflect the prejudices found in their training data. As the Harvard Business Review notes, "Bias can creep into algorithms in several ways," often reflecting societal inequities.

Misinformation is also a key concern. While ChatGPT improves over time, it still can't verify the accuracy of its responses. As OpenAI admits, "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers," meaning users must be cautious about blindly trusting its output.

Ultimately, these models don't understand the world like humans—they rely on statistical patterns to predict language. This makes it crucial to monitor and regulate their use carefully, ensuring these powerful tools are used ethically and responsibly.
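The idea of "predicting language from statistical patterns" can be illustrated with a toy example. The sketch below builds a tiny bigram model: it counts which word tends to follow which in a small sample text, then "predicts" the most frequent follower. The corpus and function names are invented for illustration; real LLMs use neural networks trained on vastly more data, but the underlying principle of predicting the next token from observed patterns is the same.

```python
from collections import Counter, defaultdict

# A toy "training corpus" -- real models train on trillions of words.
corpus = (
    "the chatbot writes plausible text the chatbot writes code "
    "the model predicts the next word"
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("chatbot"))  # "writes" -- it follows "chatbot" twice
print(predict_next("the"))      # "chatbot" -- the most common follower of "the"
```

Note that the model has no idea what any of these words mean; it only reproduces patterns in its training text. This is why, at much larger scale, LLMs can produce fluent output that is nonetheless wrong.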

What's happening in the Government of Canada?

Since generative AI technology became popular in 2022, public servants have been using AI to help complete tasks, from reviewing emails to generating code. Many departments have implemented Microsoft Copilot to assist with decision-making, streamline workflows, and enhance productivity by leveraging AI's capabilities in real time.

Using Generative AI in your daily work

Attention!

When using AI tools, never enter protected, classified, or personal information into public AI platforms. Use them only for unclassified content and follow your workplace's policies or best practices before applying any AI-generated information. If unsure, refer to the Government of Canada's Directive on Automated Decision-Making to ensure responsible and ethical use.

How AI can be used

  • Drafting presentations, outlines, speaking notes, meeting minutes and other written material
  • Editing documents for plain and inclusive language
  • Preparing draft translations of internal documents
  • Doing initial research and generating a list of sources to consult
  • Brainstorming for creative ideas
  • Providing support for personalized learning
  • Summarizing and analyzing documents, articles and meeting transcripts
  • Writing computer code
  • Creating images for presentations

What it cannot be used for

  • Generating inappropriate, harmful, illegal or unethical information
  • Legal and policy advice
  • Fact‑checking
  • Serving as the only source of information for important business decisions
  • Creating images of people
  • Creating material that will deceive people or spread misinformation
  • Processing client cases on public AI platforms

In Canada, the Directive on Automated Decision-Making, which came into effect on April 1, 2019, continues to guide the government's use of AI technologies. This directive focuses on ensuring transparency, accountability, and fairness in how AI is integrated into administrative decision-making and public service delivery. The Directive applies to automated decision systems used for decisions that impact the legal rights, privileges or interests of individuals or businesses outside of the government, such as eligibility to receive benefits or who will be the subject of an audit (Responsible use of artificial intelligence in government).

In addition to the Directive, the Government of Canada introduced the Guide on the Use of Generative AI in September 2023 to help federal institutions navigate the use of tools like ChatGPT. The guide ensures that these tools are used in line with ethical standards, while addressing concerns around privacy, security, intellectual property, and human rights. It emphasizes the importance of responsible AI development, transparency in usage, and maintaining public trust by ensuring that generative AI tools are deployed fairly and ethically.

Many government departments are now exploring the development of their own internal generative AI tools such as open-source large language models. These smaller models could be hosted securely in government clouds or even on individual desktops, addressing key concerns around privacy and security. By creating internal solutions, federal public servants can work with protected and sensitive information—something not possible with external models like ChatGPT, due to stringent security restrictions. This approach could provide a safer, more controlled environment for leveraging AI while safeguarding critical data.

While the Government of Canada has made strides in guiding AI integration within administrative frameworks, a more complex issue arises when considering the government's role as legislator and regulator. Globally, initiatives like the European Union's Artificial Intelligence Act and other responsible AI frameworks are being developed, reflecting a growing push for comprehensive regulations in AI deployment. As generative AI tools become more widespread, there's mounting pressure on governments to act decisively—striking a balance between fostering innovation and ensuring that these technologies are used ethically and responsibly.

Why should you care?

ChatGPT and similar AI technologies are becoming an integral part of our lives, and they're here to stay. Whether we realize it or not, these tools are already shaping how we work, communicate, and make decisions. As AI continues to advance and its adoption spreads, staying informed and adaptable will be key to thriving in this evolving landscape.

In the Government of Canada, initiatives like Canada's Digital Ambition, the Digital Standards, the Government of Canada Digital Competencies, and the AI Strategy for the Federal Public Service provide valuable frameworks to help people and organizations navigate the digital shift. By embracing continuous learning and developing new skills, we can not only stay ahead of the curve but also improve how we serve others.

While the future is uncertain, one thing is clear: being prepared is essential. Take the time to explore available resources, deepen your understanding of AI, and find innovative, ethical ways to incorporate these tools into your work. Just remember—be mindful of privacy concerns and never share protected or sensitive data in online tools.
