California Governor Gavin Newsom vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) today.

In his veto message, Governor Newsom cited multiple factors in his decision, including the burden the bill would have placed on AI companies, California’s lead in the space, and a critique that the bill may be too broad.
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Newsom writes that the bill could “give the public a false sense of security about controlling this fast-moving technology.”

“Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
The Governor says he agrees that there should be safety protocols and guardrails in place, as well as “clear and enforceable” consequences for bad actors. However, he states that he doesn’t believe the state should “settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities.”
Here is the full veto message:
In a post on X, Senator Scott Wiener, the bill’s main author, called the veto “a setback for everyone who believes in oversight of massive corporations that are making critical decisions” affecting public safety and welfare and “the future of the planet.”

“This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way.”
In late August, SB 1047 arrived on Gov. Newsom’s desk, poised to become the strictest legal framework around AI in the US, with a deadline to either sign or veto it by September 30th.
It would have applied to covered AI companies doing business in California with a model that costs over $100 million to train or over $10 million to fine-tune, adding requirements that developers implement safeguards like a “kill switch” and lay out protocols for testing to reduce the chance of disastrous events like a cyberattack or a pandemic.
The text would also have established protections for whistleblowers to report violations and enabled the attorney general to sue for damages caused by safety incidents.
Changes since its introduction included removing proposals for a new regulatory agency and for giving the state attorney general the power to sue developers over potential incidents before they occur.
Most companies covered by the law pushed back against the legislation, though some muted their criticism after those amendments.
In a letter to bill author Senator Wiener, OpenAI chief strategy officer Jason Kwon said SB 1047 would slow progress and that the federal government should handle AI regulation instead.

Meanwhile, Anthropic CEO Dario Amodei wrote to the governor after the bill was amended, listing his perceived pros and cons and saying, “...the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs.”
The Chamber of Progress, a coalition that represents Amazon, Meta, and Google, similarly warned the law would “hamstring innovation.”

Meta public affairs manager Jamie Radice emailed Meta’s statement on the veto to The Verge: “We are pleased that Governor Newsom vetoed SB1047. This bill would have stifled AI innovation, hurt business growth and job creation, and broken the state’s long tradition of fostering open-source development. We support responsible AI regulations and remain committed to partnering with lawmakers to promote better approaches.”
The bill’s opponents have included former House Speaker Nancy Pelosi, San Francisco Mayor London Breed, and eight congressional Democrats from California. On the other side, vocal supporters have included Elon Musk, prominent Hollywood names like Mark Hamill, Alyssa Milano, Shonda Rhimes, and J.J. Abrams, and unions including SAG-AFTRA and SEIU.
The federal government is also looking into ways it could regulate AI. In May, the Senate proposed a $32 billion roadmap covering several areas lawmakers should look into, including the impact of AI on elections, national security, copyrighted content, and more.
Original author: Emma Roth