Humans are still better at creating phishing emails than AI — for now

AI-generated phishing emails, including ones created by ChatGPT, present a potential new threat for security professionals, says Hoxhunt.


An AI-generated phishing email. Image: Gstudio/Adobe Stock

Amid all of the buzz around ChatGPT and other artificial intelligence apps, cybercriminals have already started using AI to generate phishing emails. For now, human cybercriminals are still more accomplished at devising successful phishing attacks, but the gap is closing, according to security trainer Hoxhunt’s new report released Wednesday.

Phishing campaigns created by ChatGPT vs. humans

Hoxhunt compared phishing campaigns generated by ChatGPT versus those created by human beings to determine which stood a better chance of hoodwinking an unsuspecting victim.

To conduct this experiment, the company sent 53,127 users across 100 countries phishing simulations designed either by human social engineers or by ChatGPT. The users received the phishing simulation in their inboxes as they’d receive any type of email. The test was set up to trigger three possible responses:


  1. Success: The user successfully reports the phishing simulation as malicious via the Hoxhunt threat reporting button.

  2. Miss: The user doesn’t interact with the phishing simulation.

  3. Failure: The user takes the bait and clicks on the malicious link in the email.
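
The three outcomes above amount to a simple decision rule. As a rough illustration only (the `classify` helper and its parameters are hypothetical, not part of Hoxhunt's tooling), the logic might look like:

```python
from enum import Enum

class Outcome(Enum):
    SUCCESS = "success"  # user reported the simulation as malicious
    MISS = "miss"        # user didn't interact with the email at all
    FAILURE = "failure"  # user clicked the malicious link

def classify(reported: bool, clicked: bool) -> Outcome:
    """Map a user's actions on a phishing simulation to one of the
    three outcomes described in the experiment. Reporting takes
    precedence even if the user also clicked."""
    if reported:
        return Outcome.SUCCESS
    if clicked:
        return Outcome.FAILURE
    return Outcome.MISS
```

The failure rate in the results below is then simply the share of users whose actions classify as `FAILURE`.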

The results of the phishing simulation led by Hoxhunt

In the end, human-generated phishing emails caught more victims than did those created by ChatGPT. Specifically, the rate at which users fell for the human-generated messages was 4.2%, while the rate for the AI-generated ones was 2.9%. Put differently, the failure rate for the AI-generated emails was only around 69% of that for the human social engineers.

One positive outcome from the study is that security training can prove effective at thwarting phishing attacks. Users with a greater awareness of security were far more likely to resist the temptation of engaging with phishing emails, whether they were generated by humans or by AI. The percentage of people who clicked on a malicious link in a message dropped from more than 14% among less-trained users to between 2% and 4% among those with greater training.
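
As a quick back-of-the-envelope check on that training effect, the drop from a 14% failure rate to 2-4% can be expressed as a relative reduction. The figures come from the report; the helper function itself is purely illustrative:

```python
def relative_reduction(before: float, after: float) -> float:
    """Percentage drop in phishing failure rate after training."""
    return (before - after) / before * 100

# Failure rate fell from over 14% (less-trained users) to 2-4%
# (well-trained users):
print(round(relative_reduction(14.0, 4.0)))  # 71 -> roughly a 71% reduction
print(round(relative_reduction(14.0, 2.0)))  # 86 -> roughly an 86% reduction
```

In other words, training cut the click-through rate by roughly 71% to 86% in this study.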


SEE: Security awareness and training policy (TechRepublic Premium)

The results also varied by country:


  • U.S.: 5.9% of surveyed users were fooled by human-generated emails, while 4.5% were fooled by AI-generated messages.

  • Germany: 2.3% were tricked by humans, while 1.9% were tricked by AI.

  • Sweden: 6.1% were deceived by humans, with 4.1% deceived by AI.

Current cybersecurity defenses can still counter AI phishing attacks

Though phishing emails created by humans were more convincing than those from AI, this outcome is fluid, especially as ChatGPT and other AI models improve. The test itself was performed before the release of GPT-4, which promises to be savvier than its predecessor. AI tools will certainly evolve, posing a greater threat to organizations as cybercriminals use them for their own malicious purposes.

On the plus side, protecting your organization from phishing emails and other threats requires the same defenses and coordination whether the attacks are created by humans or by AI.

“ChatGPT allows criminals to launch perfectly worded phishing campaigns at scale, and while that removes a key indicator of a phishing attack (bad grammar), other indicators are readily observable to the trained eye,” said Hoxhunt CEO and co-founder Mika Aalto. “Within your holistic cybersecurity strategy, be sure to focus on your people and their email behavior, because that is what our adversaries are doing with their new AI tools.

“Embed security as a shared responsibility throughout the organization with ongoing training that enables users to spot suspicious messages and rewards them for reporting threats until human threat detection becomes a habit.”

Security tips for IT and users

Toward that end, Aalto offers the following tips.

For IT and security

  • Require two-factor authentication or multi-factor authentication for all employees who access sensitive data.
  • Give all employees the skills and confidence to report a suspicious email; such a process should be seamless.
  • Provide security teams with the resources needed to analyze and address threat reports from employees.

For users

  • Hover over any link in an email before clicking on it. If the link appears out of place or irrelevant to the message, report the email as suspicious to your IT support or help desk team.
  • Scrutinize the sender field to make sure the email address contains a legitimate business domain. If the address points to Gmail, Hotmail or another free service, the message is likely a phishing email.
  • Confirm a suspicious email with the sender before acting on it. Use a method other than email to contact the sender about the message.
  • Think before you click. Socially engineered phishing attacks try to create a false sense of urgency, prompting the recipient to click on a link or engage with the message as quickly as possible.
  • Pay attention to the tone and voice of an email. For now, phishing emails generated by AI are written in a formal and stilted manner.
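
The sender-domain check in the second tip above can be sketched as a simple filter. This is a minimal illustration, not a production-ready control: the provider list is deliberately tiny, and the function name and parameters are hypothetical.

```python
# Domains of a few common free email providers; a real filter would
# rely on a much larger, actively maintained list.
FREE_PROVIDERS = {"gmail.com", "hotmail.com", "outlook.com", "yahoo.com"}

def is_suspicious_sender(address: str, expected_domain: str) -> bool:
    """Flag a sender address that uses a free email provider or that
    doesn't match the business domain it claims to represent."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in FREE_PROVIDERS or domain != expected_domain.lower()

print(is_suspicious_sender("billing@gmail.com", "acme.com"))  # True
print(is_suspicious_sender("billing@acme.com", "acme.com"))   # False
```

A check like this only catches the crudest spoofs; lookalike domains (e.g. a swapped letter) still require the human scrutiny the tips describe.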


Read next: As a cybersecurity blade, ChatGPT can cut both ways (TechRepublic)
