Miles Brundage, OpenAI’s senior adviser for the readiness of AGI (aka human-level artificial intelligence), delivered a stark warning as he announced his departure on Wednesday: no one is prepared for artificial general intelligence, including OpenAI itself.
“Neither OpenAI nor any other frontier lab is ready [for AGI], and the world is also not ready,” wrote Brundage, who spent six years helping to shape the company’s AI safety initiatives. “To be clear, I don’t think this is a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time.”
His exit marks the latest in a series of high-profile departures from OpenAI’s safety teams. Jan Leike, a prominent researcher, left after claiming that “safety culture and processes have taken a backseat to shiny products.” Cofounder Ilya Sutskever also departed to launch his own AI startup focused on safe AGI development.
The dissolution of Brundage’s “AGI Readiness” team, coming just months after the company disbanded its “Superalignment” team dedicated to long-term AI risk mitigation, highlights mounting tensions between OpenAI’s original mission and its commercial ambitions. The company reportedly faces pressure to transition from a nonprofit to a for-profit public benefit corporation within two years, or risk returning funds from its recent $6.6 billion investment round.

This shift toward commercialization has long concerned Brundage, who expressed reservations back in 2019 when OpenAI first established its for-profit division.
In explaining his departure, Brundage cited increasing constraints on his research and publication freedom at the high-profile company. He emphasized the need for independent voices in AI policy discussions, free from industry biases and conflicts of interest. Having advised OpenAI’s leadership on internal preparedness, he believes he can now make a greater impact on global AI governance from outside the organization.
This departure may also reflect a deeper cultural divide within OpenAI. Many researchers joined to advance AI research and now find themselves in an increasingly product-driven environment. Internal resource allocation has become a flashpoint: reports indicate that Leike’s team was denied computing power for safety research before its eventual dissolution.
Despite these frictions, Brundage noted that OpenAI has offered to support his future work with funding, API credits, and early model access, with no strings attached.
(Originally posted by Kylie Robison)