DoNotPay, a company that claimed to offer the “world’s first robot lawyer,” has agreed to a $193,000 settlement with the Federal Trade Commission, the agency announced on Tuesday. The move is part of Operation AI Comply, a new law enforcement effort from the FTC to crack down on companies that use AI services to deceive or defraud customers.
According to the FTC complaint, DoNotPay said it would “replace the $200-billion-dollar legal industry with artificial intelligence” and that its “robot lawyers” could substitute for the expertise and output of a human lawyer in generating legal documents. However, the FTC says the company made the claim without any testing to back it up.
In fact, the complaint says:

None of the Service’s technologies has been trained on a comprehensive and current corpus of federal and state laws, regulations, and judicial decisions or on the application of those laws to fact patterns. DoNotPay employees have not tested the quality and accuracy of the legal documents and advice generated by most of the Service’s law-related features. DoNotPay has not employed attorneys and has not retained attorneys, let alone attorneys with the relevant legal expertise, to test the quality and accuracy of the Service’s law-related features.
The complaint also alleges the company even told consumers they could use its AI service to sue for assault without hiring a human, and that it could check small business websites for legal violations based on a consumer’s email address alone. DoNotPay claimed this would save businesses $125,000 in legal fees, but the FTC says the service was not effective.
The FTC says that DoNotPay has agreed to pay $193,000 to settle the charges against it and to warn consumers who subscribed between 2021 and 2023 about the limitations of its law-related offerings. DoNotPay will also not be allowed to claim it can replace any professional service without providing evidence.
The FTC also announced actions against other companies that have used AI services to mislead customers. That includes AI “writing assistant” service Rytr, a company the FTC says provides subscribers with tools to create AI-generated fake reviews.
The move against Rytr comes a little over a month after the FTC announced a final rule banning companies from creating or selling fake reviews, including AI-generated ones. The rule will soon go into effect, at which point the FTC can seek up to $51,744 per violation against companies.
The FTC also filed a lawsuit against Ascend Ecom, which allegedly defrauded consumers of at least $25 million. Ascend promised customers that by using its AI-powered tools, they could start online stores on e-commerce platforms like Amazon that would produce a five-figure monthly income.
“Using AI tools to trick, mislead, or defraud people is illegal,” said FTC Chair Lina M. Khan. “The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”
(Originally posted by Sheena Vasani)