California governor vetoes AI safety bill

A California bill seeks to regulate the development of AI models, though critics say the measure could threaten innovation in the nascent field. Photo: Kirill KUDRYAVTSEV / AFP/File
Source: AFP


California Governor Gavin Newsom has vetoed a bill aimed at regulating powerful artificial intelligence models following pushback from tech giants and critics who argued the law went too far.

The bill had faced a barrage of criticism, including from members of the US Congress in Newsom's Democratic Party, who argued that the threat of punitive measures against developers in a nascent field would throttle innovation.

In a statement on Sunday, Newsom acknowledged that SB-1047 was "well-intentioned" but expressed concern that the bill was too "stringent" and unfairly focused on "the most expensive and large-scale models."

"The bill applies stringent standards to even the most basic functions -- so long as a large system deploys it," the governor noted.

He added, "smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 -- at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good."

The bill's sponsor, Democratic state Senator Scott Wiener of San Francisco, lamented the "setback," saying it left AI safety in the hands of the tech giants racing to release the technology.

Wiener had hoped the bill would set rules for AI giants in Silicon Valley's home state, filling a void left by Washington, where a politically divided Congress struggles to pass legislation.

"This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way," Wiener wrote on X.

The California state bill would have required developers of large "frontier" AI models to take precautions such as pre-deployment testing, simulating hacker attacks, installing cybersecurity safeguards, and providing protection for whistleblowers.

To secure the legislation's passage, lawmakers made several changes, including replacing criminal penalties for violations with civil penalties such as fines.

However, opposition remained, including from influential figures like Democratic Congresswoman Nancy Pelosi.

OpenAI, the creator of ChatGPT, also opposed the bill, preferring national rules instead of a patchwork of AI regulations across the 50 US states.

At least 40 states have introduced bills this year to regulate AI, and a half dozen have adopted resolutions or enacted legislation aimed at the technology, according to the National Conference of State Legislatures.

The bill had gained reluctant support from Elon Musk, who argued that AI's risks to the public justify regulation, as well as from leading AI researchers Geoffrey Hinton and Yoshua Bengio.

Dan Hendrycks, director of the Center for AI Safety, said that although the veto was "disappointing," the debate around the bill "has begun moving the conversation about AI safety into the mainstream, where it belongs."

He added on X that the bill has "revealed that some industry calls for responsible AI are nothing more than PR aircover for their business and investment strategies."

