ALTMAN ON AI: Regulate Me! Don’t Fall for It | Steve Berman

OpenAI CEO Sam Altman told Congress he’d like to be regulated. I bet he would! Like the Bell System, or Big Pharma, or the American Commercial Barge Line. Protective regulation has the added, and often quite intended, effect of creating a hedge around the targeted industries. It freezes existing market positions in place, ossifying the industry and turning star businesses into cash cows.

“I think people are able to adapt quite quickly when Photoshop came onto the scene a long time ago,” Altman told Sen. Josh Hawley. “People were quite fooled by Photoshopped images and pretty quickly understood that images might be Photoshopped. This is like that but on steroids.”

“The interactivity, the ability to really model and predict humans, as you talked about, I think is going to require a combination of companies doing the right thing, regulation, and public education.”

Let’s eliminate the first item: companies doing the right thing. Despite a call by practically every serious researcher and scientist dealing with modern AI for a “pause” in new deployments, on May 10 Google released its newest updates to Bard, the company’s beleaguered AI product. A key paragraph in the open letter by the Future of Life Institute read:

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

Not every researcher and expert fully agrees with every facet of the letter, or the institute’s other causes, but they all agree on the pause. Congress could legislate such a pause, but first lawmakers have to understand what the technology really is, what it means, and what they’d be pausing. AI is all over the place, handling important things like process control for pharmaceutical manufacturing, behavior-based cybersecurity, and Elon Musk’s self-driving Teslas (Musk signed the letter).

We don’t want to pause all AI, but we might consider pausing publicly accessible, natural-language AI that can spit out volumes of human-friendly text, generate thousands of photorealistic images, and create deepfake videos of everything from a political speech to revenge porn. In fact, all but the frothiest rabid libertarians would agree that the whole deepfake-video and revenge-porn-as-a-service marketplace should be banned from existence for the sake of society.

So it’s not the AI itself that’s an issue. GPT-3 and GPT-4 were in development for years, available to a select group of testers and researchers, before they went “public.” The problem is the “public” part. But how would Congress write a law to force a genie back into the bottle?

Altman’s idea is based on creating a regulatory fence around AI and requiring something like an electronic version of a food nutrition label, called a “model card.”

Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. (Source: “Model Cards for Model Reporting,” FAT* ’19: Conference on Fairness, Accountability, and Transparency.)
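To make that less abstract, here is a minimal sketch in Python of the kind of information a model card carries. Every name and number below is invented for illustration; this is not a real model or a formal schema.

    # A toy, hypothetical model card as a plain data structure.
    # All field names and figures are invented for illustration only.
    model_card = {
        "model": "ExampleVision-1",
        "intended_use": "Flagging user-uploaded photos for human review",
        "out_of_scope_uses": ["Law-enforcement identification"],
        "training_data": "Licensed stock imagery, 2015-2022",
        "evaluation": {
            # Benchmarked results broken out by group, as the paper describes
            "accuracy_by_fitzpatrick_skin_type": {"I-II": 0.94, "V-VI": 0.88},
            "accuracy_by_age_and_sex": {"18-30 / female": 0.93, "60+ / male": 0.90},
        },
        "known_limitations": "Lower accuracy on low-light images",
    }
    print(model_card["evaluation"]["accuracy_by_fitzpatrick_skin_type"])

The point of the card is the breakdown by group: a regulator, or a buyer, can see where the model has been tested and where it falls short before it is deployed.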

OpenAI was supposed to be a transparent force for AI goodness, not money, but now it’s about the money. Altman wants Congress to tap existing AI companies to help lawmakers understand the tech and craft legislation. Knowing Congress, they’ll probably create a government agency or office for the regulatory part. There’ll be some AI Administration that colludes (sorry, “works”) with industry to write volumes of regulations dealing with secrecy (“transparency”), profits (“intellectual property rights”), and anti-competition (“new technology development”). Then we’ll be able to sic the feds on anyone who dares to release AIs into the wild without the imprimatur of the incumbents (excuse me, the regulators).

I disagree with Altman, because this is not Photoshop on steroids. Normal people cannot perceive the difference between deepfakes and videos or images captured from reality when the fakes are done well. (I’m not talking about the images of Trump lifting weights, which are obviously fake.) Even if the person being faked denies the fake, history will have to contend with two versions, and we’ll have to develop technology to figure out which one is real based on digital artifacts and complex algorithms. The problem is that those who create the fakes will get better and better at defeating the detectors, because that’s the nature of those who try to fool us. AI will only accelerate the development of fakes that compete with reality.

“Model cards” are a good idea, but they need to be woven into the technology in a way that makes it a “hard problem” (as they say in cryptography) to fake the real thing. That means every real digital camera, every image processor, scanner, and piece of image-manipulation software would need a unique digital fingerprint: a cryptographic key, like the certificates that protect software, that deepfake tools could not forge in any economically feasible way. That’s an enormous task, and it also raises privacy questions, since everything we create would carry what Star Wars calls a “chain code” authenticating its origin.
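For a sense of what that could look like, here is a minimal sketch, assuming a hypothetical camera that holds its own Ed25519 signing key (written in Python with the cryptography package). The firmware signs a hash of each image at capture, and anyone holding the device’s public key can later check that the bytes were not fabricated or altered. This illustrates the general idea only; it is not any existing camera’s or standard’s actual mechanism.

    # Illustrative sketch only: a per-device Ed25519 key acting as the camera's
    # "digital fingerprint." The names and workflow here are hypothetical.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # At manufacture, each camera would receive its own private key; the matching
    # public key would be published (or chained to a vendor certificate).
    device_key = Ed25519PrivateKey.generate()
    device_public_key = device_key.public_key()

    def sign_image(image_bytes: bytes) -> bytes:
        """Firmware signs a hash of the image as it is captured."""
        return device_key.sign(hashlib.sha256(image_bytes).digest())

    def verify_image(image_bytes: bytes, signature: bytes) -> bool:
        """Anyone with the public key can check the image was not altered."""
        try:
            device_public_key.verify(signature, hashlib.sha256(image_bytes).digest())
            return True
        except InvalidSignature:
            return False

    photo = b"...raw sensor data..."
    sig = sign_image(photo)
    print(verify_image(photo, sig))            # True: provenance intact
    print(verify_image(photo + b"edit", sig))  # False: any change breaks the proof

The hard part is not the cryptography, which is routine; it is putting a key like this into every device and program on Earth and keeping it out of the hands of the fakers.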

It will take a generation to replace every device and every piece of software in a way that AIs cannot defeat, and there’s no guarantee that a quantum-computer-powered AI won’t come along to break those keys anyway. Of course, quantum computing is the domain of governments and rich companies like Goldman Sachs, so a laptop quantum AI is not going to be a thing anytime soon. We can’t pause public LLM AI for a generation, though, so maybe regulation is the only option.

But I think Altman is far too generous in his “right thing” assumption and his “on steroids” characterization. AI technology is more akin to nuclear-bomb technology than to Photoshop or any desktop application running for the benefit of users. Public AI access should more likely be regulated the way the Department of Energy regulates nuclear weapons, as in “you can’t have it.” No public AI should be allowed unless it is completely safe and vetted at least to the standards the Nuclear Regulatory Commission uses to license reactors.

Instead of “regulate me,” maybe we should consider a licensing process so tight it would kill most of the LLM AIs in existence now. Or maybe the threat of that would be enough to induce companies like Google and Microsoft to do “the right thing.”

Altman made a good pitch for regulatory capture, but we should not fall for it.

Follow Steve on Twitter @stevengberman.

The First TV contributor network is a place for vibrant thought and ideas. Opinions expressed here do not necessarily reflect those of The First or The First TV. We want to foster dialogue, create conversation, and debate ideas. See something you like or don’t like? Reach out to the author or to us at ideas@thefirsttv.com