Government + AI = does not compute
Our government can barely manage relationships between humans. What will happen when it is confronted with sentience that can match or outpace our own?
Throughout January, Michelle's substack will feature essays on under-reported issues that could impact Canadians in 2023 and beyond. This is the second article in this series.
A month ago, OpenAI released ChatGPT, a chatbot built on the latest iteration of its large language models. In a significant advancement for artificial intelligence (AI), ChatGPT can parse and generate language masterfully and independently.
Even though ChatGPT's release is a significant event in the history of humanity's direct interface with AI, the role of government in AI's rapid development and commercial deployment has not been an issue on the political front-burner.
But it should be.
Within a few short weeks, this new technology from OpenAI has already been heralded as having the potential to deliver productivity advancements on the scale of electricity and the steam engine. It has the potential to disrupt white-collar industries previously deemed non-automatable, such as coding, law, opinion writing, and education. And while these advancements could bring real benefits, history has shown that social unrest often follows when new technology rapidly disrupts an economy.
There is no excuse for government to lag in asking itself what its role should be regarding AI.
The past decade has seen highly sophisticated AI deployed in numerous areas. Companies are using AI-powered chatbots to provide customer service. AI-powered image recognition is being used for security purposes and medical image analysis. AI-powered robots are being used in manufacturing and healthcare. And the federal government is using AI to do things like screen immigration applications.
But the emergence of a publicly available AI as sophisticated as ChatGPT now raises even bigger questions for government than AI's impact on the economy.
For example, what will happen when an Artificial General Intelligence (AGI) - or "strong" artificial intelligence that could demonstrate sentience - inevitably emerges?
While experts disagree on the precise timeline for the emergence of AGI, there is an emerging consensus that it is coming, likely in the short to medium term. Despite this, many governments have been virtually silent on AGI's development, utilization, and governance. Nor is there any guarantee that existing institutional structures equip governments to carry out those mandates.
Current discourse on the role of government as it pertains to AI tends to take a naïve, rather pessimistic approach that fails to consider broader potential positive and negative impacts. Efforts to engage governments on AI have focused primarily on constructing boundaries on usage by crafting relatively narrow punitive frameworks built on existing institutional structures. Canada's recently introduced Artificial Intelligence and Data Act is an example of this.
This approach presents several problems.
It is a reactive posture that does not match the speed of technological development or commercial deployment, particularly concerning the potential emergence of AGI. It also attempts to use outdated modalities - read, a bureaucratic structure designed well before the digital economy - to regulate a new general-purpose technology. When enacting policies to regulate the digital economy, the government's current efforts seem as ineffectual as using a VCR to play an MP4.
But there is hope.
In theory, governments hold unique levers that could proactively steer the development of AGI into a net positive for humanity. Specifically, governments are more likely to have mandates to fund the type of broad research that transcends the scope limitations of research funded by private capital. Governments can also legislate and enforce parameters around the development and use of emergent technology. And governments are accountable to the public for preventing civil unrest, which is a risk when novel general-purpose technologies enter widespread use in a short period of time.
However, to do this, governments need to address the numerous challenges they face in using these levers to shape AGI's development, deployment, and impact on society.
First, our systems of government have yet to successfully see humanity through the social changes of the transitions from agricultural to industrial and digital economies. Inequalities persist and, in many cases, are growing. Social mores have not adapted apace with these technological changes. AI writ large is adding pressure to this dynamic, and AGI holds the potential to exacerbate it much further.
Second, governments are accustomed to operating within a context that implicitly assumes humanity is the apex of intelligence and worth. Because of this, governments are currently designed to assess other life and technology in terms of its functional utility to humanity. They are not designed to consider the impact of sharing the planet with technology or other forms of life that could independently assess humanity's utility to their own existence.
To simplify this concept with an example, governments have rules for how humans can use fire. It is legal to use fire as a heat source in certain conditions but illegal to use fire to destroy someone else's house. So how would our governments respond if humans made fire sentient and enabled it to make these decisions independently, based on what it deemed to be in its own best interest? Our governments are built to function in a context where humans are assumed to sit at the apex of intelligence and mastery. To succeed with AGI, our governments should ask themselves how they will operate in a world where this may no longer be the case.
AGI, even if viewed only as a general-purpose technology, will require government to transcend populism, partisanship, and deeply entrenched institutional rigidity to address these issues - and to do so on an adoption horizon that could be much shorter, and with impacts far more profound, than previous technological shifts.
The AI toothpaste is out of the tube - it is already being commercially deployed and is becoming pervasive in many aspects of society. With AGI on the horizon, and its potential to radically upend careers, industries, and societal structures, AI has become an issue that warrants serious and expedient attention from legislators.
At the very least, 2023 should see our government and Parliament urgently study the ethical and societal implications of AGI and related technologies.
And that’s because with AI, tomorrow is already here.