People have been trying to talk to computers for almost as long as they’ve been building computers. For decades, many in tech have been convinced that this was the trick: if we could figure out a way to talk to our computers the way we talk to other people, and for computers to talk back the same way, it would make those computers easier to understand and operate, more accessible to everyone, and just more fun to use.
ChatGPT and the current revolution in AI chatbots are really only the latest version of this trend, which extends all the way back to the 1960s. That’s when Joseph Weizenbaum, a professor at MIT, built a chatbot named Eliza. Weizenbaum wrote in an academic journal in 1966 that Eliza “makes certain kinds of natural language conversation between man and computer possible.” He set up the bot to act as a therapist, a vessel into which people could pour their problems and thoughts.
The tech behind Eliza was incredibly primitive: users typed into a text field, and the bot selected from a bunch of predefined responses based on the keywords in their question. If it didn’t know what to say, it would just repeat your words back — you’d say “My father is the problem” and it would respond “Your father is the problem.”
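That keyword-and-echo mechanism is simple enough to sketch in a few lines of Python. The rules and word reflections below are hypothetical stand-ins; Weizenbaum’s original script was far more elaborate.

```python
import re

# Hypothetical keyword rules: if a pattern appears in the input,
# return its canned response. (Not Weizenbaum's actual script.)
RULES = [
    (r"\bmother\b", "Tell me more about your family."),
    (r"\bdream\b", "What does that dream suggest to you?"),
    (r"\balways\b", "Can you think of a specific example?"),
]

# Swap first- and second-person words so echoed statements read naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reply(statement: str) -> str:
    text = statement.lower().rstrip(".!?")
    # 1. Scan for a keyword and return its predefined response.
    for pattern, response in RULES:
        if re.search(pattern, text):
            return response
    # 2. No keyword matched: echo the statement back, pronouns swapped.
    words = [REFLECTIONS.get(w, w) for w in text.split()]
    return " ".join(words).capitalize() + "."
```

With this sketch, `reply("My father is the problem")` falls through to the echo branch and produces “Your father is the problem.”, while an input mentioning a dream triggers the canned response instead.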
But it worked! Weizenbaum wrote in another paper a year later that it had been hard to convince people that there wasn’t a human on the other side of their conversation.
What Eliza showed, and what other developers and engineers have spent the six decades since working on, is that we treat our devices differently when we think of them as animate, human-like objects. And we are remarkably willing to treat our devices that way. (Have you ever felt bad for your robot vacuum as it bonks its way around your living room, or thanked Alexa for doing something for you?) It’s human nature to anthropomorphize objects, to imbue them with human qualities even when they don’t have any. And when we do that, we’re kinder to those objects; we’re more patient and collaborative with them; we enjoy using them more.
Examples of what this could look like are everywhere in pop culture. The Star Trek computer is a classic inspiration for Silicon Valley types — “Tea. Earl Grey. Hot.” — as is Scarlett Johansson’s ambient AI in Her. HAL 9000 in 2001: A Space Odyssey is both an inspiration and a cautionary tale, as is WOPR from WarGames. These are computers that think, that talk, that understand. The only problem with all these human-like computers? They are remarkably hard to pull off.
A bot like Eliza could generate somewhat convincing conversation, but that only gets you so far. Most of the time, your computer’s job is to do stuff, and these chatbots have never been very good at doing stuff. There have been people working on that, too: a group at Xerox PARC in the 1970s built a chatbot you could use to book plane tickets, but it was finicky and slow and wildly expensive to run. There have been countless attempts since to do the same thing.
Over the years, there have been many versions of these tools. There were other early chatbots, like Dr. Sbaitso, Parry, and Alice. In the early aughts, there was SmarterChild, the irreverent AIM bot that introduced so many teens and tweens to the idea that a computer could talk back. There was the voice-assistant era, in which everybody thought Siri, Alexa, Cortana, Bixby, and countless other tools would change the way we used our devices and got things done. With every generation, we got a little closer to a computer that could both talk the talk and walk the walk. But nothing ever got there. When was the last time you asked Google Assistant to book you a flight?
Now, we’re at the beginning of a new era in chatbots, one that many in the industry think might actually get the job done. Tools like ChatGPT and Google Gemini, and the underlying language models that power them, are far more capable of both understanding you and getting stuff done on your behalf. Microsoft is betting that Copilot will be your AI companion all day every day at work; Google’s putting Gemini in the same position. These tools aren’t perfect, or even close — they make things up, they misunderstand, they crash, they occasionally go completely haywire — but they’re the closest thing we’ve seen yet to a conversational computer. You talk to it like you’d talk to a person, and it talks back.
The rise of these powerful bots raises lots of questions. Is it cool to think that a computer can fit into your life the way an assistant or friend might, or is it horrifying? Is there something fundamentally wrong with the idea of having an AI companion like the ones from Meta, Replika, or Character.AI, or is there something beautiful about enabling that kind of relationship? How much better do these bots need to get before we can really, truly rely on them? Are they ever going to get that good?
But most of all, we get to finally answer the question we’ve been asking since the ‘60s: is this the way computers should work? Many people have believed so for years; many others have said they’re wrong, and that training computers to work like humans would make them less efficient and more annoying. But we never got to find out, because the bots were never good enough to really pit against all the other ways we interact with our devices, our information, and each other.
Now they’re close, or at least much closer. And so we get to find out for real whether Joseph Weizenbaum was right all those years ago — that conversation is the future of computation. The chatbot has been the future of computers for almost as long as there have been computers, and now its time has come.
Original author: David Pierce