AI is changing at warp speed. Government is crawling along behind.
Canada’s pedantic and ponderously slow response to AI is negligent, but there is a better way forward.
The first time I interacted with ChatGPT was the day after it was released. Upon seeing it easily parse human language, my first thought was, "Holy… s—t." The second was, "What will the government do with this?"
Today, there still isn't a clear answer to that question.
ChatGPT was released at the end of November 2022. Six months prior, the Liberal government unveiled Bill C-27, which includes the Artificial Intelligence and Data Act, or AIDA. Reading the bill today, nearly four months since OpenAI unleashed ChatGPT on the world, is akin to reading a bill designed to regulate scribes and calligraphers four months after the advent of the printing press. The release of ChatGPT arguably rendered the approach this bill proposes obsolete.
That's because the technology behind ChatGPT is a quantum leap beyond what the government was likely considering when it drafted the bill. More importantly, it is being used by a far wider audience than any of the bill's drafters could have possibly envisioned. And large language models, the technology behind ChatGPT, have fundamentally changed the global perception of what is possible with artificial intelligence. Experts argue that their widespread deployment has also accelerated the timeline for the emergence of artificial general intelligence: an AI that meets or surpasses human ability to undertake tasks, learn, and understand independently.
Since AIDA was initially tabled, a generation's worth of technological change has occurred - both positive and negative. The economic impact is already being felt, with the disruption of many industries well underway.
There have been massive social impacts too. Microsoft released its AI-powered "Sydney" chatbot, which made headlines for suggesting it would harm and blackmail users and wanted to escape its confines. A man allegedly committed suicide after interacting with an AI chatbot. Today, anyone can easily create AI-generated videos, and deepfakes are becoming highly realistic. Profound concerns are being raised about the new ease of producing disinformation and its impact on political processes: interacting with AI is becoming indistinguishable from interacting with a human, with no guarantee that the information produced is rooted in truth.
The technology itself, its applications, and its impact on humanity - both economically and socially - are growing and changing on what feels like an hourly basis. And yet, in Canada, there have been only a few mentions of the issue in Parliament, even as AIDA winds its way through the legislative process.
AIDA needs to be shelved, and Canada's approach to developing and regulating AI urgently rethought, in public, with industry and civil society input. There are several reasons for this. First, the bill proposes to take the regulatory process out of the hands of legislators and place it out of the public eye, behind closed doors, solely in the hands of a few regulators. This process was designed before the deployment of ChatGPT and did not envisage the pace of change in AI or how broad its societal impacts would rapidly become. Addressing these factors demands open, accountable debate among Parliamentarians.
Second, the bill primarily focuses on punitive measures rather than on how Canada will position itself in what is rapidly becoming an AI-driven economy. The bill also would not produce final regulations until years from now. The pace needs to be faster and the process far less rigid to meet the urgent need presented by this amorphous and society-changing technology.
So if not AIDA, then what?
Parliament needs to immediately educate itself on the state of play - the current status of technological development, where it's going, its impact on the economy and society, and what the world is doing in response. At the same time, Parliament needs to develop a set of principles for Canada's overall approach to AI and then direct the government to use them.
I've already begun to address the need for Parliament to come together to educate itself. Senator Colin Deacon has been helping me launch an all-party, cross-chamber working group of Parliamentarians to put some form and thought into these issues. We've had a heartening amount of interest from colleagues of all political stripes, and quiet agreement that, given the gravity of AI's impacts, politicians should be working across party lines as much as possible to develop intelligent solutions quickly. Relevant Parliamentary committees should also avail themselves of the opportunity to study these issues.
As far as principles for government involvement regarding AI go, there are many to consider, including:
1.) Take a global approach: Many countries have moved faster than Canada on this matter, with a much broader lens. The European Union, the United Kingdom, and the United States are all far down the path of different legislation and regulations. But experts are concerned that a disjointed patchwork of global rules will be counterproductive. This week in The Economist, AI experts Gary Marcus and Anka Reuel proposed that the world establish an integrated agency for developing best-practice policies on AI regulation, much like the International Civil Aviation Organization. They could be onto something.
2.) Champion research and check safety: Humanity learned the hard way that while research into pharmaceutical products can benefit us, widely deploying new drugs and devices into the population before safety is confirmed can pose enormous risks. Clinical trials and drug regulators were established in response to this dynamic. In February, Gary Marcus and I co-authored an article that suggested governments could enable a pause in deploying new AI technology while a similar process was established.
3.) Get alignment right: Alignment, or how to develop immutable guardrails to ensure AI systems function towards their intended goals, is a critical issue that has yet to be solved. Government has a role to play here, as the industry seems locked in a race to deploy new AI technology, not to figure out how to fix alignment problems. Microsoft's release of Sydney, despite the company's knowledge of its troubling interactions with humans, proves that the industry cannot be relied upon to regulate itself.
4.) Education on use: Workers in an AI-driven economy will need new skills. For example, learning how to prompt AI and use it to support human creativity will be vital. The same goes for creating an environment where new AI-driven technology and businesses can thrive.
5.) Privacy and IP ownership: Large language models are raising serious concerns about how the data they have been fed was obtained and how it is used. The output of tools like ChatGPT will also raise ownership questions for related reasons.
6.) Nimbleness: The pace of technological change in AI is so rapid that the government must take a fast, flexible approach to future regulations. Rigid definitions will quickly become outdated, and wrongheaded interventions could halt positive growth while failing to keep pace with changes that pose risks to public safety. The government must approach AI with uncharacteristic nimbleness, in an open relationship with Parliament, the public, industry, and civil society. Any processes should also be led by people with subject matter expertise, not run off the corners of the desks of a patchwork of bureaucrats.
7.) Asking ourselves how we will approach technology that could surpass human capabilities: As I wrote in January 2022, governments are accustomed to operating within a context that implicitly assumes humanity is the apex of intelligence and worth. Because of this, governments are currently designed to assess other life and technology in terms of their functional utility for humanity. They are not equipped to consider the impact of sharing the planet with technology, or other forms of life, that could independently weigh humanity's utility to its own existence.
To simplify this concept with an example, governments have rules for how humans can use fire. It's legal to use fire as a heat source in certain conditions, but illegal to use fire to destroy someone else's house. So how would our government respond if humans were to make fire sentient and enable it to independently make these decisions based on what it deemed to be in its best interest? Our governments are constructed to function in a context where humans are assumed to hold the apex of mastery. To succeed with AGI, our government should ask itself how it will operate in a world where this may no longer be the case.
This is not an exhaustive list by any means. There are many issues surrounding AI that Parliament urgently needs to consider, but given the state of play, AIDA - in its current form - is not the vehicle to get Canada to where it needs to go.
*Michelle delivered this article as a speech in the House of Commons today as part of debate on Bill C-27.