UK data regulator issues warning over generative AI data protection concerns

The UK’s data regulator has issued a warning to tech companies about protecting personal information when developing and deploying large language and generative AI models.

Less than a week after Italy’s data privacy regulator banned ChatGPT over alleged privacy violations, the Information Commissioner’s Office (ICO) published a blog post reminding organizations that data protection laws still apply when the personal information being processed comes from publicly accessible sources.

“Organisations developing or using generative AI should be considering their data protection obligations from the outset, taking a data protection by design and by default approach,” said Stephen Almond, the ICO’s director of technology and innovation, in the post.

Almond also said that, for organizations processing personal data for the purpose of developing generative AI, there are various questions they should ask themselves, centering on: what their lawful basis for processing personal data is; how they can mitigate security risks; and how they will respond to individual rights requests.

“There really can be no excuse for getting the privacy implications of generative AI wrong,” Almond said, adding that ChatGPT itself recently told him that “generative AI, like any other technology, has the potential to pose risks to data privacy if not used responsibly.”

“We’ll be working hard to make sure that organisations get it right,” Almond said.

The ICO and the Italian data regulator are not the only ones to have recently raised concerns about the potential risk to the public that could be caused by generative AI.

Last month, Apple co-founder Steve Wozniak, Twitter owner Elon Musk, and a group of 1,100 technology leaders and scientists called for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4.

In an open letter, the signatories depicted a dystopian future and questioned whether advanced AI could lead to a “loss of control of our civilization,” while also warning of the potential threat to democracy if chatbots pretending to be humans could flood social media platforms with propaganda and “fake news.”

The group also voiced a concern that AI could “automate away all the jobs, including the fulfilling ones.”

Why AI regulation is a challenge

When it comes to regulating AI, the biggest challenge is that innovation is moving so fast that regulations have a hard time keeping up, said Frank Buytendijk, an analyst at Gartner, noting that if regulations are too specific, they lose effectiveness the moment technology moves on.

“If they are too high level, then they have a hard time being effective as they are not clear,” he said.

However, Buytendijk added that it’s not regulation that could ultimately stifle AI innovation but, instead, a loss of trust and social acceptance because of too many costly mistakes.

“AI regulation, demanding models to be checked for bias, and demanding algorithms to be more transparent, triggers a lot of innovation too, in making sure bias can be detected and transparency and explainability can be achieved,” Buytendijk said.
