Should moving fast and breaking things include moral rights? #ScarJo
Would you care if you were used to voice an AI without your consent?
This week, a scandal erupted when actress Scarlett Johansson levelled serious allegations against OpenAI CEO Sam Altman on social media.
In a detailed statement, Ms. Johansson alleged that Mr. Altman had contacted her about using her voice for OpenAI's "Sky" chatbot, partly because of her role as the voice of an artificial intelligence character in the 2013 movie "Her." The statement went on to detail that while she had declined the offer, Mr. Altman may have used her voice for Sky without her consent, allegedly going so far as to flaunt the fact with a single (now deleted) tweet that simply said "her."
Mr. Altman denies these allegations.
If you think this issue doesn't affect you, think again. What if an exact, unmistakable digital likeness of your voice, or a deepfake video of you, was used to scam someone else, to promote hate speech, or to take a position on an issue that is counter to your beliefs? Worse still, what if your exact, unmistakable digital likeness and voice were used to depict you being sexually assaulted?
When I wrote a piece about the potential for this exact scenario a year ago, I knew that government would need to catch up quickly with the speed at which technology firms like OpenAI were breaking things. For the uninitiated, "move fast and break things" is an axiom popularized by Meta's Mark Zuckerberg and a widely held ethos among the Silicon Valley set. Roughly translated, it means move faster than your competitors, move faster than society, disrupt things, and your company will make money.
In some scenarios, this approach works. Certain types of outdated systems hold society back; disruption without regulation can bring both profits and social good. But the scenario Ms. Johansson alleges is not one of those instances, for several reasons.
First is the crass but obvious impact on Mr. Altman's company. While some tech investors and founders may argue that this move gave OpenAI a lot of free press, to legislators like me and to other investors it was an own-goal of epic proportions that will likely have long-term negative impacts on the company and potentially on the industry writ large. The development of AI is under scrutiny from regulators at every level of government in every part of the world. Regulations are coming; the only question that remains is whether they will be balanced enough to protect the public while allowing for innovation and investment in technologies with obvious potential for social good.
The answer to that question largely relies upon prominent tech leaders like Mr. Altman proving through their actions that they can exercise enough sound judgment and ethics to be considered good corporate stewards. The allegations from Ms. Johansson against Mr. Altman suggest a level of arrogance and governance ineptitude that makes this impossible in his case. And the actress's allegations come on top of a myriad of other truly bizarre shit coming out of the OpenAI team over the last few months. Even as a free-market-supportive legislator, the Johansson allegations (if the alleged use of near-slave labour, the alleged weird corporate sex parties and the quasi-ouster of Mr. Altman weren't enough) lead me to believe a net should probably be thrown over these guys before they unleash things that negatively impact every person I represent.
That's because the issue at the heart of the alleged Johansson voice theft is one of violation. If her allegations are true, Ms. Johansson could not negotiate compensation for the use of her property. She could not restrict what her voice could or could not be used for. Her voice could be used to say any number of things that are antithetical to her beliefs, without her having any capacity to say no.
At a minimum, Ms. Johansson's allegations could constitute a violation of the actress's moral rights. This legal concept, often associated with copyrighted works, ensures that someone's creative output can't be distorted in a way that harms their reputation. On top of the issue of moral rights is the alleged non-consensual use of copyrighted works, which is a problem for book publishers, authors and writers whose works have allegedly been scraped by companies like OpenAI to train their AI systems (if you don't know anything about scraping, read the article referenced in this link). While lawsuits are pending on both issues (the New York Times is currently suing OpenAI for alleged copyright infringement, and a group of voice actors is suing a different company for alleged voice theft), there are grey areas in the law that need to be clarified before more problems arise.
Tech companies and AI accelerationists will argue that government should stay fully out of the way of the research, deployment, and use of AI systems, given their supposed potential to solve the world's problems. But many other experts, and victims, have argued that innovation can happen while guardrails are simultaneously put in place to protect the public from obvious areas of harm that are already regulated or illegal. I agree.
For example, simple definitional clarifications in criminal codes or copyright law could make clear what does or doesn't constitute fair use of copyrighted works in the development of AI systems, or confirm that an intimate image created with deepfake technology falls under existing laws governing intimate images. Canada's publishing industry has been advocating for these types of changes.
The government should listen, move fast and prevent people from being broken.