When will AI be fully integrated into cyber security?

ChatGPT, a machine learning (ML)-powered chatbot, is rapidly growing across all sectors. The app’s developer, OpenAI, reported that it gained one million users in just five days, and the app has now been visited over two billion times, according to research by Similarweb. That said, concerns have been raised about the use of the chatbot, with Italy’s data privacy agency even going so far as to temporarily ban the use of the app in the country over concerns that it violates GDPR law.

Due to the app’s impact on the sector, Cyber Security Hub’s Advisory Board members discussed ChatGPT’s impact on the industry at its March meeting.


Cyber Security Hub’s Advisory Board is a group of experts in IT security and technology that meets every two months to discuss the most important issues and developments in cyber security.


How AI enhances cyber security strategy

With research by Capgemini finding three years ago that the majority (69 percent) of enterprise executives said artificial intelligence (AI) was necessary to respond to cyber security threats, the complete integration of AI within cyber security strategy seems inevitable.

When discussing whether they use AI in their cyber security strategy, however, one member was quick to point out that common misconceptions about what AI is can muddy the waters.

“AI is a huge buzzword at the moment, but what people are talking about is not true AI as such,” they explained. “We use a lot of ML as we need to understand all user behavior analytics from ingress points through to instruction. AI is instruction-based and ML is behavior-based.”

Another member agreed that they do not use much AI, as the technology is still in its infancy; however, they do use machine learning, as it can use data to make predictions in a fraction of the time it would take a human. They also noted that true AI may open up more risks to a company’s network. This potential risk has led the member to “follow trends more on the conservative side, leveraging people and using technology as a blend to get the best results”.
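The behavior-based prediction the member describes can be illustrated with a deliberately minimal sketch. Real deployments use trained ML models over many behavioral signals; the example below is only a toy stand-in, flagging outliers in a hypothetical per-user login history with a simple z-score rule.

```python
import statistics

def flag_anomalies(counts, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean of the series.

    A toy stand-in for behavior-based detection: given a history of
    daily login counts for one user, flag the days that look unusual.
    A real system would learn its thresholds from labeled data.
    """
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # no variation, nothing can stand out
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# A week of normal logins, then a sudden burst (e.g. credential stuffing).
logins = [5, 6, 4, 5, 7, 5, 6, 120]
print(flag_anomalies(logins))  # [7]
```

The point of the sketch matches the member’s comment: even a crude statistical rule evaluates an entire history instantly, where a human analyst would need to read through the logs.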


The future of artificial intelligence and machine learning

Cyber Security Hub research has found that almost one in five (19 percent) cyber security professionals say they are investing in cyber security controls with integrated AI and automation.

When considering how and when AI and ML will be integrated into most, if not all, cyber security solutions, one member said this will happen once those in the cyber security industry realize that they cannot change human behavior.

“You can positively or negatively reinforce behaviors. It is great to automate and great to use AI, but it also needs the human; we should not forget that we cannot have a tool for everything,” they shared.

Another member agreed, saying that AI and ML will continue to progress in the workforce, as cyber security itself lacks people who wish to get involved and gain experience, leading companies to rely on technology instead.



“Innovation is becoming so critical in all areas that we need to keep pushing the needle forward. It is exciting but scary, because you can have the machine do things that usually need multiple people with the click of a mouse. What you can get from ChatGPT used to take hours or days, but people always have to be part of the process as long as there are people to do it. I don’t know if there are enough people coming in [to the cyber security space].”

Based on this, members agreed that behavioral scientists especially will be involved in the expansion of machine learning, as they will be able to drive machine learning algorithms, allowing them to anticipate decision trees so that they can quickly make a decision or offer several avenues toward one.

With this being said, one member clarified that AI and machine learning will never truly overtake humans, even if they do manage to catch up with the speed of human thought: “AI and ML will supersede, but as soon as processing power catches up to brain power it will take over. It still needs the human, however. Social media and cyber warfare will drive the AI and ML evolution forward”.


Why cyber security professionals are concerned about ChatGPT

Research by Blackberry Security has found that cyber security leaders are concerned about ChatGPT’s use by malicious actors, with 73 percent either ‘fairly’ or ‘very’ concerned about the AI tool’s potential to be used as a threat to cyber security.

When discussing this concern at the meeting, one Advisory Board member said that they had already noticed ChatGPT being used to make cyber attacks against their company more sophisticated.

They explained that they see about 37,000 phishing campaigns weekly and have recently noticed that malicious actors have gone from using broken or misspelled English to “prim and proper” language. The member suspected that they have started using ChatGPT to craft a style that helps them with their English.
 

The member also noted that ChatGPT is helping malicious actors to understand the psychology of the recipient and better put them under duress, increasing the effectiveness of their phishing attempts. To combat this, people have been developing anti-GPT solutions, including one that can tell whether content has been written by a human or generated by a machine.
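The article does not name the detection tool, but one signal such detectors commonly discuss is “burstiness”: human prose tends to mix short and long sentences, while generated prose is often more uniform. The sketch below is a toy illustration of that one heuristic, not the solution mentioned above; production detectors rely on language-model statistics such as perplexity.

```python
import re
import statistics

def burstiness(text):
    """Population standard deviation of sentence lengths (in words).

    A crude proxy for how 'human' a passage reads: highly varied
    sentence lengths score higher. This is one weak signal only and
    is easy to fool; real detectors combine many model-based features.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths)

human = ("No. I checked the logs twice. Nothing unusual showed up until "
         "the night shift handed over, and then everything went sideways.")
uniform = "The system is working well. The logs are clean today. The team is happy now."
print(burstiness(human) > burstiness(uniform))  # True: varied prose scores higher
```

The example also hints at why the arms race described above is hard to win: an attacker can simply prompt the model to vary its sentence rhythm.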



Another member dubbed ChatGPT “cool but scary” because of its potential to be used by bad actors.

“Phishing is the number one attack surface and [malicious actors] will use it to make scams more realistic. It will be the voice of spear phishing, and targeted spear phishing will be enhanced due to ChatGPT. It is just another way to increase their success with their attacks.

“When you talk about malware or ransomware, bad actors [may use] third parties as ransomware, [but] now we may see them using ChatGPT and eliminating the third party. There is lots of good but there is always something bad to do with it,” they explained.

Later in the discussion, a member noted that ChatGPT may also cause an issue within cyber security teams: if a development team uses ChatGPT to generate code and then ships it within their platform, the code may be unsafe and open their network up to a number of threats. They said this problem may be exacerbated if companies are constantly hiring new developers, as new hires may feel reliant on ChatGPT to complete their work quickly.
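The safeguard the member is implicitly calling for is a review gate before generated code is merged. As a minimal sketch of the idea only, the hypothetical helper below greps candidate Python source for a few obviously risky patterns; a real team would pair human review with a proper static analysis tool rather than a string match.

```python
# Hypothetical pre-merge check for generated code. The pattern list is
# illustrative, not exhaustive; this is a sketch of the review step,
# not a substitute for real static analysis and human code review.
RISKY_PATTERNS = {
    "eval(": "arbitrary code execution",
    "exec(": "arbitrary code execution",
    "shell=True": "shell injection risk",
    "verify=False": "TLS certificate verification disabled",
    "pickle.loads": "unsafe deserialization",
}

def scan_generated_code(source):
    """Return (line_number, pattern, reason) for each risky match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in RISKY_PATTERNS.items():
            if pattern in line:
                findings.append((lineno, pattern, reason))
    return findings

snippet = (
    "import requests\n"
    "resp = requests.get(url, verify=False)\n"
    "eval(resp.text)\n"
)
for lineno, pattern, reason in scan_generated_code(snippet):
    print(f"line {lineno}: {pattern!r} -> {reason}")
```

Even a gate this crude would catch the two classic mistakes in the sample snippet (disabled TLS verification, then `eval` on a network response) before they reach production.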

The member explained that employees may take code from ChatGPT without reviewing it first, as it is human nature to trust sources, even when those sources come via the internet. They noted that those in cyber security must move quickly to stay on top of technological changes like the development of AI, as well as to mitigate the aspects of human behavior and psychology that are a threat to cyber security.
