Will ChatGPT start writing killer malware? | WeLiveSecurity

AI-pocalypse soon? As stunning as ChatGPT's output can be, should we also expect the chatbot to spit out sophisticated malware?

ChatGPT didn't write this article. I did.

Nor did I ask it to answer the question from the title; I will. But I guess that's just what ChatGPT might say. Luckily, there are some grammar errors left to prove I'm not a robot. Then again, that's just the kind of thing ChatGPT might do to seem real.

This current robot hipster tech is a fancy autoresponder that is good enough to produce homework answers, research papers, legal responses, medical diagnoses, and a host of other things that have passed the "smell test" when treated as if they are the work of human actors. But will it add meaningfully to the hundreds of thousands of malware samples we see and process daily, or be an easily spotted fake?

In a machine-on-machine duel that the technorati have been lusting after for years, ChatGPT appears a little "too good" not to be seen as a serious contender that might jam up the opposing machinery. With both the attacker and the defender using the latest machine learning (ML) models, this had to happen.

Except that, to build good antimalware machinery, it's not just robot-on-robot. Some human intervention has always been required: we determined this many years ago, to the chagrin of the ML-only purveyors who enter the marketing fray all while insisting on muddying the waters by referring to their ML-only products as using "AI".

While ML models have been used for everything from coarse triage front ends through to more complex analysis, they fall short of being a big red "kill malware" button. Malware just isn't that simple.
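As an illustrative sketch only (not any vendor's real pipeline; the function name, thresholds, and sample scores here are all invented), a coarse triage front end routes a model's confidence score and leaves the gray zone to a human analyst:

```python
# Hypothetical coarse ML triage front end. A maliciousness score yields an
# automatic verdict only at the confident extremes; everything in between
# is routed to a human analyst. All thresholds are illustrative.

def triage(score: float, auto_block: float = 0.95, auto_allow: float = 0.05) -> str:
    """Route a maliciousness score in [0.0, 1.0] to a verdict or a human."""
    if score >= auto_block:
        return "block"         # high-confidence detection
    if score <= auto_allow:
        return "allow"         # high-confidence clean
    return "human_review"      # the gray zone still needs an analyst

# Invented sample scores, just to exercise the routing.
samples = {"dropper.exe": 0.98, "setup.msi": 0.02, "macro.docm": 0.60}
verdicts = {name: triage(s) for name, s in samples.items()}
```

The point of the sketch is the gray zone: no pair of thresholds turns the model into that big red "kill malware" button, which is why the human stays in the loop.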

But to be sure, I've tapped some of ESET's own ML gurus and asked:


Q. How good will ChatGPT-generated malware be, or is that even possible?

A. We are not really close to "full AI-generated malware", though ChatGPT is quite good at code suggestion, generating code examples and snippets, debugging, and optimizing code, and even automating documentation.


Q. What about more advanced features?

A. We don't know how good it is at obfuscation. Some of the examples relate to scripting languages like Python. But we saw ChatGPT "reversing" the meaning of disassembled code connected to IDA Pro, which is interesting. All in all, it's probably a handy tool for assisting a programmer, and maybe that's a first step toward building more full-featured malware, but not yet.


Q. How good is it right now?

A. ChatGPT is very impressive, considering that it's a Large Language Model, and its capabilities surprise even the creators of such models. However, it's currently very shallow: it makes errors, creates answers that are closer to hallucinations (i.e., fabricated answers), and isn't really reliable for anything serious. But it seems to be gaining ground quickly, judging by the swarm of techies poking their toes in the water.


Q. What can it do right now? What is the "low-hanging fruit" for the platform?

A. For now, we see three likely areas of malicious adoption and use:


  • Out-phishing the phishers

If you think phishing looked convincing in the past, just wait. ChatGPT can probe more data sources and mash them up seamlessly to churn out specifically crafted emails that will be very difficult to detect based on their content alone, and whose success rates promise to be better at getting clicks. And you won't be able to hastily cull them for sloppy language mistakes; their command of your native language is probably better than yours. Since a wide swath of the nastiest attacks start with someone clicking on a link, expect the related impact to supersize.


  • Ransom negotiation automation

Smooth-talking ransomware operators are probably rare, but adding a little ChatGPT shine to their communications could lower the workload of attackers trying to seem legit during negotiations. It will also mean fewer mistakes that might allow defenders to home in on the true identities and locations of the operators.


  • Phone scams get better

With natural language generation getting more, well, natural, nasty scammers will sound like they are from your area and have your best interests in mind. This is one of the first onboarding steps in a confidence scam: sounding more confident by sounding like they're one of your people.
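Returning to the phishing point above: heuristics that cull mail for sloppy language stop working once the text is LLM-polished. This toy filter (the vocabulary and scoring are invented purely for illustration) flags a misspelled phish but waves the polished version straight through:

```python
# Toy illustration: a naive filter that scores emails by out-of-vocabulary
# words (a stand-in for "sloppy language" heuristics). The word list and
# scoring are invented for this example, not taken from any real product.

KNOWN_WORDS = {"your", "account", "has", "been", "suspended",
               "please", "verify", "details", "now"}

def misspelling_score(text: str) -> float:
    """Fraction of words not found in the (toy) vocabulary."""
    words = [w.strip(".,!").lower() for w in text.split()]
    unknown = [w for w in words if w not in KNOWN_WORDS]
    return len(unknown) / max(len(words), 1)

sloppy = "Youre acount has ben suspnded please verifye details now"
polished = "Your account has been suspended. Please verify details now."

# The LLM-style polished phish sails under any misspelling threshold.
assert misspelling_score(polished) < misspelling_score(sloppy)
```

Content-based detection then has to lean on signals other than language quality, such as sender reputation and link analysis.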

If all this sounds like it may be way off in the future, don't bet on it. It won't all happen at once, but criminals are about to get a lot better. We'll see if the defense is up to the challenge.
