Meta says expanding E2EE won’t stop child abuse material detection

Meta has said its plans to extend default end-to-end encryption (E2EE) to all its messaging services during 2023 won’t make child abuse material harder to detect, because of the company’s use of metadata analysis and proactive account blocking.


The Office of the eSafety Commissioner’s acting chief operating officer Toby Dagg told a parliamentary inquiry that once Facebook’s messaging platform adopts E2EE, the US-based National Center for Missing and Exploited Children (NCMEC) “has said that it goes dark for their purposes.”

The “overwhelming majority” of the 29.3 million reports of abhorrent material NCMEC received in 2021 “were made via the Facebook messenger platform,” Dagg told the inquiry into law enforcement capabilities in relation to child exploitation.

Australian Border Force assistant commissioner James Watson also told the inquiry that “encryption technology” had frustrated the agency’s device searches for child abuse material at international airports.

“Our officers at the border are trained to use certain electronic investigation tools. Those tools enable us to hold an electronic device until such time as we’re satisfied as to the contents of that,” Watson said.

“There’s nothing that we can particularly do to compel the production of passwords.

“That means our officers can be delayed whilst we look at bypass technologies that may be able to get around the requirement of that password.”

However, Meta’s head of Australian public policy Josh Machin rejected the notion that E2EE would prevent harmful content detection.

“Even if we are not able to see the content, there’s some pretty effective work that we’ve been able to do to analyse the metadata based on the behaviour of the individuals involved,” Machin told the inquiry.

“A user who is part of a sophisticated criminal enterprise has very different behaviour on encrypted services than an ordinary user.”

Using intelligence from “metadata or unencrypted surfaces”, Machin said Meta focused on the “removal of fake accounts and blocking people from being able to [make] contact to begin with.”

Meta’s regional policy director Mia Garlick added that Meta’s development of alternatives to breaking encryption had meant it was detecting more child abuse material than other encrypted services.

“Obviously iMessage is very widely used in Australia, and it’s been encrypted for a while now. And Apple, in 2021, according to NCMEC’s latest report, only made 160 referrals… whereas if you compare that with, say, WhatsApp, which is already encrypted; in the 2021 reporting timeframe, [it] made 1.37 million [referrals] and that’s been steadily rising,” Garlick said.

Garlick said Meta “invested so significantly” in “other techniques and measures” to protect children online because consumer feedback suggested that breaking encryption was “an unwarranted intrusion into their privacy.”

“The popularity of it is because it provides safety and security for people’s messages. And in the context of the many data breaches that we’ve seen, we can really see the value of encryption,” Garlick said.

“The UN Commission on Human Rights has come out very publicly criticising client-side scanning as just an extension of surveillance.”

Meta’s defence comes a week after eSafety Commissioner Julie Inman Grant said that platforms were “doing shockingly little” to detect child abuse material.

Grant has not revealed all of her reservations with a set of unpublished industry draft codes to deal with harmful content that she rejected earlier this month, but has said a range of technologies for detecting such material have not been deployed by the major platforms.

During Senate estimates last week, she referenced a report by her office evaluating Microsoft, Skype, Apple, Meta, WhatsApp, Snap and Omegle’s technologies and policies for blocking child abuse material.

According to the report [pdf], Meta has used both in-house and off-the-shelf tools to detect the content.

Solutions like PhotoDNA are used by Meta for identifying previously confirmed child abuse images, and Meta is the only platform using Google’s Content Safety API and internally built classifiers to identify new material, but only on messaging without E2EE.
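
Hash-matching tools like PhotoDNA work by comparing a compact fingerprint of each uploaded image against a database of fingerprints of previously confirmed material. A minimal sketch of that lookup pattern, using Python’s standard `hashlib` as a stand-in (PhotoDNA itself is a proprietary perceptual hash that also tolerates resizing and re-encoding, which an exact cryptographic hash does not):

```python
import hashlib


def fingerprint(image_bytes: bytes) -> str:
    # Exact cryptographic fingerprint; a simplified stand-in for a
    # perceptual hash such as PhotoDNA.
    return hashlib.sha256(image_bytes).hexdigest()


def matches_known_material(image_bytes: bytes, known_hashes: set) -> bool:
    # Check an upload against a database of hashes of previously
    # confirmed images (the database contents here are hypothetical).
    return fingerprint(image_bytes) in known_hashes


known = {fingerprint(b"previously-confirmed-image")}

print(matches_known_material(b"previously-confirmed-image", known))  # True
print(matches_known_material(b"never-seen-image", known))            # False
```

Because only hashes are stored and compared, this kind of matching can run server-side without retaining the offending images themselves; it cannot, however, inspect content inside an E2EE conversation, which is why Meta pairs it with metadata-based signals.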


Instagram and Facebook messaging can be made E2EE, and WhatsApp is E2EE by default.
