Last week, Snapchat influencer Caryn Marjorie unleashed a digital doppelganger of herself on the world.
Caryn's AI-driven persona, built by a company called Forever Voices AI, now charges $1 per minute for interactions, including explicit sexual conversation. In its first week, it has raked in nearly $72,000.
In this instance, Caryn, the human on whom the AI sex chatbot is based, undertook the endeavor of her own volition. But what happens if a similar AI version of someone is created without their consent?
Unfortunately, there isn't a clear answer to this question. Governments and courts have yet to catch up to the warp-speed pace of AI development and deployment. But the Caryn AI story highlights that the legal ambiguity surrounding AIs built on Large Language Model (LLM) technologies like ChatGPT is a real and present concern.
For example, if you died tomorrow, would you want to be resurrected as an AI capable of conversing as you would, in your voice and image? Would our current laws allow you to express, or withhold, your consent in your will? What would the code of ethics be for resurrecting historical figures as AIs (see this article's title)? Or what about a scenario where your ex-partner, or a stalker, creates an AI chatbot version of you and wanders further into a harmful delusion? How would our legal system react to that challenge? The same goes for a scenario where a bully creates a chatbot that replicates your child and deploys it to harass them at school.
All of these questions involve legal definitions in the brave new world of AI that have yet to be tested. These definitions include consent, ownership of the data that LLMs are trained on, and other aspects of intellectual property. Layer those ambiguities onto our existing justice system, which is often slow to render decisions, tends to favour the side with the most resources to sustain the expense of lawyers, and has a poor track record of rendering justice in instances of criminal harassment, and it's easy to see the enormous impact this new breed of AI is about to have.
To further underscore this point, it's essential to understand how pervasive these issues have the potential to become. And to do that, it's necessary to know how Caryn's AI was created.
Forever Voices used thousands of hours of Caryn's YouTube videos as a training set for the AI. The result is a near-indistinguishable, fully interactive, online AI version of Caryn, capable of parsing language in her tone, with her mannerisms and her knowledge set. Put another way, anyone with a history of text messages, emails, or online content of any sort could now have that digital footprint used to clone them into AI form. Add indistinguishable-from-the-real-human audio and video, plus no rules of engagement, and it's easy to see the societal problems that are about to emerge.
These are just some of the challenges posed by the large-scale, largely rule-free deployment of LLM-based artificial intelligence. Governments have a responsibility to keep pace with the development of these technologies so that their benefits can flourish while the public remains protected. It's for these reasons that Senator Colin Deacon and I have launched the Canadian Parliamentary Caucus on Emerging Technology.
Our goal with this working group is to ensure that Parliamentarians quickly come up to speed on the opportunities and challenges of these new technologies, understand the lobbying positions of innovators and commercial entities, and, most importantly, recognize potential public safety issues. This caucus is a multi-party group designed to be an agile forum for Canadian legislators to do these things and then return to their respective party caucuses armed with solid information from which smart legislative positions can be crafted. And if the stars align, some intelligent, non-partisan consensus could be reached on how best to proceed.
Admittedly, this isn't an entirely altruistic endeavor. When I first read the Caryn AI story, knowing that I have over a decade of speeches, videos, interviews, and written articles in the public domain, a chill went down my spine. Never mind the effects AI-generated deepfakes could have on our democratic system. I've dealt with stalkers and scammers many times, so it's not hard to wonder whether that aspect of my job will get worse given the wild-west legal scenario these new technologies could bring.
I'm also aware that if I'm worried about this from a position of power and privilege, millions of others without those advantages need protection even more. So it seems prudent for legislators to get ahead of the problem rather than react after a catastrophe.
We shouldn't need to resurrect Churchill as a chatbot to drive that point home.