Darktrace introduces new models to address AI concerns

Darktrace introduces new risk and compliance models to help CISOs address the increasing risk of IP loss and data leakage.

In response to the growing use of generative AI tools, Darktrace has announced the launch of new risk and compliance models to help its 8,400 customers worldwide address the increasing risk of IP loss and data leakage.

These new risk and compliance models for Darktrace DETECT and RESPOND make it easier for customers to put guardrails in place to monitor and, when necessary, respond to activity and connections to generative AI and large language model (LLM) tools.

This comes as Darktrace’s AI has observed that 74% of active customer deployments have employees using generative AI tools in the workplace.

In one instance, in May 2023, Darktrace detected and prevented an upload of over 1GB of data to a generative AI tool at one of its customers.

Darktrace believes that CISOs must balance the desire to embrace these innovations to boost productivity with the need to manage risk.

Government agencies, including the UK’s National Cyber Security Centre, have already issued guidance about managing risk when using generative AI tools and other LLMs in the workplace.

In addition, regulators in various jurisdictions (including the UK, EU, and US) and multiple sectors are expected to guide companies in making the most of AI without exacerbating its potential dangers.

Allan Jacobson, Vice President and Head of Information Technology, Orion Office REIT, says: “Since generative AI tools like ChatGPT have gone mainstream, our company is increasingly aware of how companies are being impacted.”

“First and foremost, we are focused on the attack vector and how well prepared we are to respond to potential threats. Equally as important is data privacy, and we are hearing stories in the news about potential data protection issues and data loss.”

“Businesses need a combination of technology and clear guardrails to take advantage of the benefits while managing the potential risk,” says Jacobson.

At London Tech Week, Darktrace’s Chief Executive Officer, Poppy Gustafsson, will be interviewed by Guy Podjarny, CEO of Snyk, in a discussion on ‘Securing Our Future by Uplifting the Human.’

They will discuss how organisations can be future-proofed against cyber compromise and how teams can be prepared to fend off unpredictable threats.

Commenting ahead of London Tech Week, Poppy Gustafsson says: “CISOs across the world are trying to understand how they should manage the risks and opportunities presented by publicly available AI tools in a world where public sentiment flits from euphoria to terror.”

“Sentiment aside, the AI genie is not going back in the bottle, and AI tools are rapidly becoming part of our day-to-day lives, much in the same way as the internet or social media.”

“Each enterprise will determine its own appetite for the opportunities versus the risk. Darktrace is in the business of providing security personalised to an organisation, and it is no surprise we are already seeing the early signs of CISOs leveraging our technology to enforce their specific compliance policies.”

“At Darktrace, we have long believed that AI is one of the most exciting technological opportunities of our time. With today’s announcement, we are providing our customers with the ability to quickly understand and control the use of these AI tools within their organisations.”

“But it is not just the good guys watching these innovations with interest: AI is also a powerful tool to create even more nuanced and effective cyber-attacks. Society should be able to take advantage of these incredible new tools for good, but also be equipped to stay one step ahead of attackers in the emerging age of defensive AI tools versus offensive AI attacks,” says Gustafsson.

To complement its core Self-Learning AI for attack prevention, threat detection, autonomous response, and policy enforcement, the Darktrace Cyber AI Research Center continually develops new AI models, including its proprietary large language models, to help customers prepare for and fight against increasingly sophisticated threats. These models are used across the products in Darktrace’s Cyber AI Loop.

Jack Stockdale, Chief Technology Officer, Darktrace, also comments: “Recent advances in generative AI and LLMs are an important addition to the growing arsenal of AI techniques that will transform cyber security.”

“But they are not one-size-fits-all and must be applied with guardrails to the right use cases and challenges.”

“Over the last decade, the Darktrace Cyber AI Research Center has championed the responsible development and deployment of a variety of different AI techniques, including our unique Self-Learning AI and proprietary large language models.”

“We’re excited to continue putting the latest innovations in the hands of our customers globally so that they can protect themselves against the cyber disruptions that continue to create chaos around the world,” says Stockdale.
