Microsoft is launching a new feature called “correction” that builds on the company’s efforts to combat AI inaccuracies. Customers using Microsoft Azure to power their AI systems can now use the capability to automatically detect and rewrite incorrect content in AI outputs.
The correction feature is available in preview as part of the Azure AI Studio — a suite of safety tools designed to detect vulnerabilities, find “hallucinations,” and block malicious prompts.
Once enabled, the correction system will scan and identify inaccuracies in AI output by comparing it with a customer’s source material. From there, it will highlight the mistake, provide information about why it’s incorrect, and rewrite the content in question — all “before the user is able to see” the inaccuracy.
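The detect-explain-rewrite flow described above can be illustrated with a toy sketch. To be clear, this is not Microsoft's actual system or API; every function name here is hypothetical, and the "grounding check" is a crude word-overlap heuristic standing in for the small and large language models the real feature reportedly uses.

```python
# Toy illustration of a "detect, explain, rewrite" grounding pipeline.
# All names are hypothetical; this is NOT Microsoft's correction API.
import re

def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter, sufficient for the demo."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def is_grounded(sentence: str, source: str, threshold: float = 0.5) -> bool:
    """Crude stand-in for a grounding check: the fraction of the
    sentence's content words that also appear in the source material."""
    words = {w.lower() for w in re.findall(r"[A-Za-z]{4,}", sentence)}
    if not words:
        return True
    src = source.lower()
    hits = sum(1 for w in words if w in src)
    return hits / len(words) >= threshold

def correct(output: str, source: str) -> list[dict]:
    """Flag each ungrounded sentence, explain why, and mark it for
    rewriting before the user sees it."""
    findings = []
    for sent in split_sentences(output):
        if not is_grounded(sent, source):
            findings.append({
                "sentence": sent,
                "reason": "not supported by the grounding document",
                "action": "rewrite before showing to the user",
            })
    return findings

source_doc = "The report covers fiscal year 2023 revenue of $4.2 billion."
ai_output = ("Revenue in fiscal year 2023 was $4.2 billion. "
             "Profits tripled overnight.")
print(correct(ai_output, source_doc))
```

Here the fabricated claim (“Profits tripled overnight.”) shares no content words with the source document, so it is flagged; the supported sentence passes. A production system would replace the word-overlap heuristic with model-based entailment checking, which is exactly where the residual errors Microsoft acknowledges can creep in.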
While this seems like a helpful way to address the nonsense often espoused by AI models, it might not be a fully reliable solution.
Microsoft isn’t the only company working on the problem: Vertex AI, Google’s cloud platform for companies developing AI systems, has a feature that “grounds” AI models by checking outputs against Google Search, a company’s own data, and (soon) third-party datasets.
In a statement to TechCrunch, a Microsoft spokesperson said the “correction” system uses “small language models and large language models to align outputs with grounding documents,” which means it isn’t immune to making errors, either.
“It is important to note that groundedness detection does not solve for ‘accuracy,’ but helps to align generative AI outputs with grounding documents,” Microsoft told TechCrunch.
(Originally posted by Emma Roth)