As all things (wrongly called) AI take the world’s biggest security event by storm, we round up some of their most-touted use cases and applications
Okay, so there’s this ChatGPT thing layered on top of AI – well, not really; it seems even the practitioners responsible for some of the most impressive machine learning (ML)-based products don’t always stick to the basic terminology of their fields of expertise…
At RSAC, the niceties of fundamental academic distinctions tend to give way to marketing and economic considerations, of course, and the rest of the supporting ecosystem is being built to secure, implement, and manage AI/ML – no small task.
To be able to answer questions like “what is love?”, GPT-like systems gather disparate data points from a large number of sources and combine them into something roughly usable.
Here are a few of the applications that AI/ML folks here at RSAC seek to help with:
- Is a job candidate legitimate, and telling the truth? Sorting through the mess that is social media and reconstructing a dossier that checks a candidate’s glowing self-review against reality is just not an option for time-strapped HR departments struggling to vet the droves of resumes hitting their inboxes. Shuffling that pile off to some ML thing can sort the wheat from the chaff and get a meaningfully vetted short list to a manager.
Of course, we still have to wonder about the danger of bias in an ML model that was fed biased input data to learn from, but this could be a useful, if imperfect, tool that’s still better than human-initiated text searches.

- Is your company’s development environment being infiltrated by bad actors through one of your third parties?
There’s no practical way to keep a real-time watch on all of your development toolchains for the one that gets hacked, potentially exposing you to all sorts of code issues, but maybe an ML reputation doo-dad can do that for you?

- Are deepfakes detectable, and how will you know if you’re seeing one?
One of the startups pitching at RSAC began its presentation with a video of its CEO saying the company was terrible. The real CEO then asked the audience if they could tell the difference; the answer was “barely, if at all”. So if the “CEO” asked someone for a wire transfer, even if you see the video and hear the audio, can it be trusted? ML hopes to help find out.
But since CEOs tend to have a public presence, it’s easy to train deepfakes on real audio and video clips of them, making the fakes all the more convincing.

- What happens to privacy in an AI world?
Italy has recently cracked down on ChatGPT use due to privacy issues. One of the startups here at RSAC offered a way to make data to and from ML models private by using some interesting coding techniques.
That’s just one attempt at a much larger set of challenges inherent in using a large language model as the foundation for well-trained ML models that are meaningful enough to be useful.

- Are you building insecure code, within the context of an ever-changing threat landscape?
Even if your toolchain isn’t compromised, there is still a host of novel coding techniques that are proven insecure, especially as they relate to integrating with the mashups of cloud properties you may have floating around. Fixing code as you go, guided by such ML-driven insights, might be critical to not deploying code with insecurity baked in.
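None of the vendors on the show floor detailed how their shortlisting models actually work, but the “sort the wheat from the chaff” idea in the first bullet can be illustrated with a toy sketch: rank resumes by bag-of-words cosine similarity to a job description. Everything here – the function names, the sample candidates, the scoring approach – is a hypothetical stand-in, not any product’s method; real systems would use far more sophisticated models.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts over a lowercase word tokenization."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[term] * b[term] for term in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def shortlist(job_description, resumes, top_n=2):
    """Rank resumes by textual similarity to the job description."""
    jd = vectorize(job_description)
    ranked = sorted(resumes.items(),
                    key=lambda kv: cosine(jd, vectorize(kv[1])),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

# Hypothetical job posting and candidate pool for illustration only.
job = "Seeking a security engineer with Python, threat detection and cloud experience"
resumes = {
    "alice": "Security engineer, five years of Python and cloud threat detection work",
    "bob": "Pastry chef specializing in croissants and sourdough",
    "carol": "Cloud administrator with some Python scripting experience",
}

print(shortlist(job, resumes))  # → ['alice', 'carol']
```

Even this toy version inherits the caveat above: the ranking is only as good – and only as unbiased – as the text it is fed.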
In an environment where GPT consoles have been unceremoniously sprayed out to the masses with little oversight, and people see the power of the early models, it’s easy to imagine the fright and uncertainty over how creepy they can be.
There is sure to be a backlash seeking to rein in the tech before it can do too much damage, but what exactly does that mean?
Powerful tools require powerful guards against going rogue, but that doesn’t necessarily mean they can’t be useful. There’s a moral imperative baked into technology somewhere, and it remains to be sorted out in this context.
Meanwhile, I’ll head over to one of the consoles and ask “What is love?”