Monday, October 13, 2025

Scott Wiener on his fight to make Big Tech disclose AI's risks

This isn't California state Senator Scott Wiener's first attempt at addressing the risks of AI.

In 2024, Silicon Valley mounted a fierce campaign against his controversial AI safety bill, SB 1047, which would have made tech companies liable for the potential harms of their AI systems. Tech leaders warned it would stifle America's AI boom. Governor Gavin Newsom ultimately vetoed the bill, echoing similar concerns, and a popular AI hacker house promptly threw an "SB 1047 Veto Party." One attendee told me, "Thank god, AI is still legal."

Now Wiener is back with a new AI safety bill, SB 53, which sits on Governor Newsom's desk awaiting his signature or veto sometime in the next few weeks. This time around, the bill is far more popular, or at the very least, Silicon Valley doesn't seem to be at war with it.

Anthropic outright endorsed SB 53 earlier this month. Meta spokesperson Jim Cullinan tells TechCrunch that the company supports AI regulation that balances guardrails with innovation, and says "SB 53 is a step in that direction," though there are areas for improvement.

Former White House AI policy advisor Dean Ball tells TechCrunch that SB 53 is a "victory for reasonable voices," and thinks there's a strong chance Governor Newsom signs it.

If signed, SB 53 would impose some of the nation's first safety reporting requirements on AI giants like OpenAI, Anthropic, xAI, and Google, companies that currently face no obligation to disclose how they test their AI systems. Many AI labs voluntarily publish safety reports explaining how their AI models could be used to create bioweapons and other dangers, but they do this at will, and they're not always consistent.

The bill requires major AI labs, specifically those generating more than $500 million in revenue, to publish safety reports for their most capable AI models. Much like SB 1047, the bill focuses on the worst kinds of AI risks: their ability to contribute to human deaths, cyberattacks, and chemical weapons. Governor Newsom is considering several other bills that address other kinds of AI risks, such as engagement-optimizing techniques in AI companions.


SB 53 also creates protected channels for employees working at AI labs to report safety concerns to government officials, and establishes a state-operated cloud computing cluster, CalCompute, to provide AI research resources beyond the big tech companies.

One reason SB 53 may be more popular than SB 1047 is that it's less severe. SB 1047 would also have made AI companies liable for any harms caused by their AI models, whereas SB 53 focuses more on requiring self-reporting and transparency. SB 53 also applies narrowly to the world's largest tech companies, rather than startups.

But many in the tech industry still believe states should leave AI regulation up to the federal government. In a recent letter to Governor Newsom, OpenAI argued that AI labs should only have to comply with federal standards, which is a funny thing to say to a state governor. The venture firm Andreessen Horowitz wrote a recent blog post vaguely suggesting that some bills in California could violate the Constitution's dormant Commerce Clause, which prohibits states from unfairly restricting interstate commerce.

Senator Wiener addresses these concerns head-on: he lacks faith in the federal government to pass meaningful AI safety regulation, so states need to step up. In fact, Wiener thinks the Trump administration has been captured by the tech industry, and that recent federal efforts to block all state AI laws are a form of Trump "rewarding his funders."

The Trump administration has made a notable shift away from the Biden administration's focus on AI safety, replacing it with an emphasis on growth. Shortly after taking office, Vice President J.D. Vance appeared at an AI conference in Paris and said: "I'm not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I'm here to talk about AI opportunity."

Silicon Valley has applauded this shift, exemplified by Trump's AI Action Plan, which removed barriers to building out the infrastructure needed to train and serve AI models. Today, Big Tech CEOs are regularly seen dining at the White House or announcing hundred-billion-dollar data centers alongside President Trump.

Senator Wiener thinks it's essential for California to lead the nation on AI safety, but without choking off innovation.

I recently interviewed Senator Wiener to discuss his years at the negotiating table with Silicon Valley and why he's so focused on AI safety bills. Our conversation has been lightly edited for clarity and brevity. My questions are in bold, and his answers are not.

Maxwell Zeff: Senator Wiener, I interviewed you when SB 1047 was sitting on Governor Newsom's desk. Talk to me about the journey you've been on to regulate AI safety over the past few years.

Scott Wiener: It's been a roller coaster, an incredible learning experience, and just really rewarding. We've been able to help elevate this issue (of AI safety), not just in California, but in the national and international discourse.

We have this incredibly powerful new technology that's changing the world. How do we make sure it benefits humanity in a way that reduces the risk? How do we promote innovation while also being very mindful of public health and public safety? It's an important, and in some ways existential, conversation about the future. SB 1047, and now SB 53, have helped to foster that conversation about safe innovation.

In the last 20 years of technology, what have you learned about the importance of laws that can hold Silicon Valley to account?

I'm the guy who represents San Francisco, the beating heart of AI innovation. I'm directly north of Silicon Valley itself, so we're right here in the middle of it all. But we've also seen how the large tech companies, some of the wealthiest companies in world history, have been able to stop federal regulation.

Every time I see tech CEOs having dinner at the White House with the aspiring fascist dictator, I have to take a deep breath. These are all really smart people who have generated enormous wealth. Many of the people I represent work for them. It really pains me when I see the deals being struck with Saudi Arabia and the United Arab Emirates, and how that money gets funneled into Trump's meme coin. It causes me deep concern.

I'm not someone who's anti-tech. I want tech innovation to happen. It's incredibly important. But this is an industry that we should not trust to regulate itself or make voluntary commitments. And that's not casting aspersions on anyone. This is capitalism, and it can create enormous prosperity but also cause harm if there aren't sensible regulations to protect the public interest. When it comes to AI safety, we're trying to thread that needle.

SB 53 is focused on the worst harms AI could conceivably cause: death, massive cyberattacks, and the creation of bioweapons. Why focus there?

The risks of AI are varied. There's algorithmic discrimination, job loss, deepfakes, and scams. There have been various bills in California and elsewhere to address those risks. SB 53 was never meant to cover the field and address every risk created by AI. We're focused on one specific category of risk: catastrophic risk.

That issue came to me organically from folks in the AI space in San Francisco: startup founders, frontline AI technologists, and people who are building these models. They came to me and said, 'This is an issue that needs to be addressed in a thoughtful way.'

Do you feel that AI systems are inherently unsafe, or have the potential to cause death and massive cyberattacks?

I don't think they're inherently safe. I know there are a lot of people working in these labs who care very deeply about trying to mitigate risk. And again, it's not about eliminating risk. Life is about risk. Unless you're going to live in your basement and never leave, you're going to have risk in your life. Even in your basement, the ceiling might fall down.

Is there a risk that some AI models could be used to do significant harm to society? Yes, and we know there are people who would love to do that. We should try to make it harder for bad actors to cause those severe harms, and so should the people creating these models.

Anthropic has voiced its support for SB 53. What are your conversations like with other industry players?

We've talked to everyone: large companies, small startups, investors, and academics. Anthropic has been really constructive. Last year, they never formally supported (SB 1047) but they had positive things to say about parts of the bill. I don't think (Anthropic) loves every aspect of SB 53, but I think they concluded that on balance the bill was worth supporting.

I've had conversations with large AI labs that aren't supporting the bill, but aren't at war with it the way they were with SB 1047. That's not surprising. SB 1047 was more of a liability bill; SB 53 is more of a transparency bill. Startups have been less engaged this year because the bill really focuses on the largest companies.

Do you feel pressure from the large AI PACs that have formed in recent months?

This is another symptom of Citizens United. The wealthiest companies in the world can just pour endless resources into these PACs to try to intimidate elected officials. Under the rules we have, they have every right to do that. It's never really impacted how I approach policy. There have been groups trying to destroy me for as long as I've been in elected office. Various groups have spent millions trying to blow me up, and here I am. I'm in this to do right by my constituents and try to make my community, San Francisco, and the world a better place.

What's your message to Governor Newsom as he's deciding whether to sign or veto this bill?

My message is that we heard you. You vetoed SB 1047 and provided a very comprehensive and thoughtful veto message. You wisely convened a working group that produced a very strong report, and we really looked to that report in crafting this bill. The governor laid out a path, and we followed that path in order to come to an agreement, and I hope we got there.
