Tech big wigs: Hit the brakes on AI rollouts

More
than
1,100
technology
luminaries,
leaders
and
scientists
have
issued
a
warning
against
labs
performing
large-scale
experiments
with
artificial
intelligence
(AI)
more
powerful
than
ChatGPT,
saying
the
technology
poses
a
grave
threat
to
humanity.

[…]

Tech big wigs: Hit the brakes on AI rollouts

More
than
1,100
technology
luminaries,
leaders
and
scientists
have
issued
a
warning
against
labs
performing
large-scale
experiments
with
artificial
intelligence
(AI)
more
powerful
than
ChatGPT,
saying
the
technology
poses
a
grave
threat
to
humanity.

In
an

open
letter

published
by

The Future of Life Institute
,
a
nonprofit organization that
aims
is
to
reduce
global
catastrophic
and
existential
risks
to
humanity,
Apple
co-founder
Steve
Wozniak,
SpaceX
and
Tesla
CEO
Elon
Musk,
and
MIT
Future
of
Life
Institute
President
Max
Tegmark
joined
other
signatories
in
saying AI
poses
“profound
risks
to
society
and
humanity,
as
shown
by
extensive
research and
acknowledged
by
top
AI
labs.”

The
signatories
called
for
a
six-month
pause
on
the
rollout
of
the
training
of
AI
systems
more
powerful
than

GPT-4
,
which
is
the
large
language
model
(LLM)
powering
the
popular

ChatGPT

natural
language
processing
chatbot.
The
letter,
in
part,
depicted
a
dystopian
future
reminiscent
of
those
created
by
artificial
neural
networks
in
science
fiction
movies,
such
as

The
Terminator

and

The
Matrix
.
The
letter
pointedly
questions
whether
advanced
AI
could
lead
to
a
“loss
of
control
of
our
civilization.”

The
missive
also
warns
of
political
disruptions
“especially
to
democracy”
from
AI:
chatbots
acting
as
humans
could
flood
social
media
and
other
networks
with
propaganda
and
untruths.
And
it
warned
that
AI
could
“automate
away
all
the
jobs,
including
the
fulfilling
ones.”

The
group
called
on
civic
leaders

not
the
technology
community

to
take
charge
of
decisions
around
the
breadth
of
AI
deployments.

Policymakers
should
work
with
the
AI
community
to
dramatically
accelerate
development
of
robust
AI
governance
systems
that,
at
a
minimum,
include
new
AI
regulatory
authorities,
oversight,
and
tracking
of
highly
capable
AI
systems
and
large
pools
of
computational
capability.
The
letter
also
suggested
provenance
and
watermarking
systems
be
used
to
help
distinguish
real
from
synthetic
content
and
to
track
model
leaks,
along
with
a
robust
auditing
and
certification
ecosystem.

“Contemporary
AI
systems
are
now
becoming
human-competitive
at
general
tasks,”
the
letter
said.
“Should we
develop
nonhuman
minds
that
might
eventually
outnumber,
outsmart, obsolete
and
replace us? Should we
risk
loss
of
control
of
our
civilization?
Such
decisions
must
not
be
delegated
to
unelected
tech
leaders.”

(The
UK
government

today
published
a white
paper
 outlining
plans
to
regulate
general-purpose
AI,
saying
it
would
“avoid
heavy-handed
legislation
which
could
stifle
innovation,”
and
instead
rely
on
existing
laws.)

Avivah
Litan,
a
vice
president
and
distinguished
analyst
at
Gartner
Resaerch
said
The
Future
of
Life
Institute’s
letter
is
spot
on.

The
Future
of
Life
Institute
argued
that
AI
labs
are
locked
in
an
out-of-control
race
to
develop
and
deploy
“ever
more
powerful
digital
minds
that
no
one

not
even
their
creators

can
understand,
predict,
or
reliably
control.”

Signatories
included
scientists
at

DeepMind
Technologies
,
a
British
AI
research
lab
and
a
subsidiary
Google
parent
firm
Alphabet.
Google
recently
announced
Bard,
an
AI-based
conversational
chatbot
it
developed
using
the
LaMDA
family
of
LLMs.

LLMs
are
deep
learning
algorithms

computer
programs
for
natural language processing
— that
can
produce
human-like
responses
to
queries
.
The
generative
AI
technology
can
also
produce
computer
code,
images,
video
and
sound.

Microsoft,
which
has
invested

more
than
$10
billion in
ChatGPT
and
GPT-4
creator
OpenAI
,
said
it
had
no
comment
at
this
time.
OpenAI
and
Google
also
did
not
immediately
respond
to
a
request
for
comment.

Andrzej
Arendt,
CEO
of
IT
consultancy
Cyber
Geeks,
said
while
generative
AI
tools
are
not
yet
able
to
deliver
the
highest
quality
software
as
a
final
product
on
their
own,
“their
assistance
in
generating
pieces
of
code,
system
configurations
or
unit
tests
can
significantly
speed
up
the
programmer’s
work.

“Will
it
make
the
developers
redundant?
Not
necessarily

partly
because
the
results
served
by
such
tools
cannot
be
used
without
question;
programmer
verification
is
necessary,”
Arendt
continued.
“In
fact,
changes
in
working
methods
have
accompanied
programmers
since
the
beginning
of
the
profession.
Developers’
work
will
simply
shift
to
interacting
with
AI
systems
to
some
extent.”

The
biggest
changes
will
come
with
the
introduction
of
full-scale
AI
systems,
Arendt
said,
which
can
be
compared
to
the
industrial
revolution
in
the
1800s
that
replaced
an
economy
based
on
crafts,
agriculture,
and
manufacturing.

“With
AI,
the
technological
leap
could
be
just
as
great,
if
not
greater.
At
present,
we
cannot
predict
all
the
consequences,”
he
said.

Vlad
Tushkanov,
lead
data
scientist
at
Moscow-based
cybersecurity
firm
Kaspersky,
said
integrating
LLM
algorithms
into
more
services
can
bring
new
threats.
In
fact,
LLM
technologists,
are
already
investigating
attacks,
such
as

prompt
injection
, that
can
be
used
against
LLMs
and
the
services
they
power.

“As
the
situation
changes
rapidly,
it
is
hard
to
estimate
what
will
happen
next
and
whether
these
LLM
peculiarities
turn
out
to
be
the
side
effect
of
their
immaturity
or
if
they
are
their
inherent
vulnerability,”
Tushkanov
said.
“However,
businesses
might
want
to
include
them
into
their
threat
models
when
planning
to
integrate
LLMs
into
consumer-facing
applications.”

That
said,
LLMs
and
AI
technologies
are
useful
and
already
automating
an
enormous
amounts
of
“grunt
work”
that
is
needed
but
neither
enjoyable
nor
interesting
for
people
to
do.
Chatbots,
for
example,
can
sift
through
millions
of
alerts,
emails,
probable
phishing
web
pages
and
potentially
malicious
executables
daily.

“This
volume
of
work
would
be
impossible
to
do
without
automation,”
Tushkanov
said.
“…Despite
all
the
advances
and
cutting-edge
technologies,
there
is
still
an
acute
shortage
of
cybersecurity
talent.
According
to
estimates,
the
industry
needs
millions
more
professionals,
and
in
this
very
creative
field,
we
cannot
waste
the
people
we
have
on
monotonous,
repetitive
tasks.”

Generative
AI
and
machine
learning
won’t
replace
all
IT
jobs,
including
people
who
fight
cybersecurity
threats,
Tushkanov
said.
Solutions
for
those
threats
are
being
developed
in
an
adversarial
environment,
where
cybercriminals
work
against
organizations
to
evade
detection.

“This
makes
it
very
difficult
to
automate
them,
because
cybercriminals
adapt
to
every
new
tool
and
approach,”
Tushkanov
said.
“Also,
with
cybersecurity
precision
and
quality
are
very
important,
and
right
now
large
language
models
are,
for
example,
prone
to
hallucinations
(as
our
tests
show,
cybersecurity
tasks
are
no
exception).” 

The
Future
of
Life
Institute
said
in
its
letter
that
with
guardrails,
humanity
can
enjoy
a
flourishing
future
with
AI. 

“Engineer
these
systems
for
the
clear
benefit
of
all,
and
give
society
a
chance
to
adapt,”
the
letter
said.
“Society
has
hit
pause
on
other
technologies
with
potentially
catastrophic
effects
on
society. We
can
do
so
here. Let’s
enjoy
a
long
AI
summer,
not
rush
unprepared
into
a
fall.”

About Author

Subscribe To InfoSec Today News

You have successfully subscribed to the newsletter

There was an error while trying to send your request. Please try again.

World Wide Crypto will use the information you provide on this form to be in touch with you and to provide updates and marketing.