Fake news – why do people believe it?

In the age of the perpetual news cycle and digital media, the risks that stem from the fake news problem are all too real

Every day brings a deluge of news content that competes for our attention and spans everything from politics, health, sports and climate change to the war in Ukraine. The endless amount and breadth of information – instantly available as news articles, video clips, photos or other media on news websites, social media platforms, television, radio and other sources – can, and often does, feel overwhelming. Is it any wonder that so many of us struggle to cope with information overload, and even to discern fact from fiction online?

Recently, much of the global news cycle has rightly focused on the conflict in Ukraine. It started with satellite images of army movements warning of the risk of a possible Russian invasion. Then, in the small hours of February 24th, grisly footage began to pour in from Ukraine as citizens took to social media to post videos and photos of tanks rolling into streets and rockets falling from the skies, leaving destruction in their wake.

Ever since, we’ve all been able to watch the war play out on our phones in previously unseen detail; it’s not for nothing that this has been nicknamed “the first TikTok war”. The people of Ukraine can use the reach of platforms like TikTok, Twitter and Instagram to show the world what they’re going through. Indeed, almost overnight, some of these apps went from featuring dancing videos to showing war scenes and appeals for humanitarian support, attracting countless views and shares in the process. But both sides of the war have access to these platforms, which have become a digital battleground for influencing millions of people worldwide.

But do we always know what we are really looking at?

Back in 2008, after its successful coverage of the 2006 FIFA World Cup that included videos and photos taken by football fans, CNN launched iReport, a “citizen journalist” website. Anyone could now upload their own content online for a wide audience. At the time, Susan Grant, executive VP of CNN News Service, guaranteed that from that moment, “the community will decide what the news is”, clarifying that the publications would be “completely unvetted”.

CNN’s belief was based on the idea that citizen journalism is “emotional and real”. By 2012, 100,000 stories had been published and 10,789 had been “vetted for CNN, which means they were fact-checked and approved to be broadcast”. But does that mean the other 89,211 were real? CNN iReport was closed in 2015. Fast forward to 2022, and misinformation is one of the biggest problems facing society worldwide.

What we believe is not necessarily real

According to MIT research published in 2018 that analyzed news shared on Twitter, “falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth”, even after bots are removed and only real human interactions are considered. The results are striking: the study concluded that “falsehoods were 70% more likely to be retweeted than the truth”.

A handful of reasons help explain this complex social reality, but at the end of the day the underlying problem may be something we are all victims of: cognitive bias. While bias can be useful in our daily lives, if only by allowing us to remember previously learned processes and recognize familiar situations, it also leaves us susceptible to mental shortcuts and blind spots. A conversation between two people on opposite sides of the war in Ukraine is a clear example: both believe they’re acting rationally and accuse each other of being biased and of failing to grasp the complexities of reality. From that point on, each will be more open to consuming news that confirms their perspective – even if that news is fake.

While we generally surround ourselves with people who share our world views, on social media this tendency is even more pronounced and makes us far more likely to take part in a discussion. Online we are presented with a filtered reality, built by an algorithm that shapes our virtual surroundings and feeds us validation for whatever ideas we hold. On social media, we are inside our own bubble – the place where we are always right. Facebook whistleblower Frances Haugen told the British Parliament that “anger and hate is the easiest way to grow on Facebook”.
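
To make the mechanism concrete, here is a minimal, purely hypothetical sketch of an engagement-driven feed ranker. It is not any real platform’s algorithm – the post attributes, weights and scores are all invented – but it illustrates how optimizing only for predicted engagement, when engagement correlates with agreement and outrage, quietly builds the bubble described above:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    stance: float   # -1.0 .. 1.0: position on some divisive topic
    outrage: float  # 0.0 .. 1.0: how inflammatory the wording is

def predicted_engagement(post: Post, user_stance: float) -> float:
    """Toy model: users engage more with posts that confirm their views
    (confirmation bias) and with emotionally charged content."""
    agreement = 1.0 - abs(post.stance - user_stance) / 2.0  # 1.0 = full agreement
    return 0.7 * agreement + 0.3 * post.outrage             # invented weights

def rank_feed(posts: list, user_stance: float) -> list:
    # Rank purely by predicted engagement: nothing here "decides" to filter,
    # yet agreeable, outraged content rises and balanced content sinks.
    return sorted(posts, key=lambda p: predicted_engagement(p, user_stance),
                  reverse=True)

feed = [
    Post("Measured analysis weighing both sides", stance=0.0, outrage=0.1),
    Post("Your side is 100% right, share this!", stance=0.9, outrage=0.8),
    Post("The other side is lying to you", stance=0.8, outrage=0.9),
]

for post in rank_feed(feed, user_stance=0.9):
    print(f"{predicted_engagement(post, 0.9):.2f}  {post.text}")
```

For a strongly partisan user (stance 0.9), the two one-sided, inflammatory posts score around 0.94 while the balanced analysis scores about 0.42 and lands at the bottom of the feed – in this toy model, the bubble emerges from the optimization target alone.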

The enormous amount of misinformation, however, is no 21st-century trend. Propaganda, misinformation and fake news have polarized public opinion throughout history. Nowadays, however, it is instant and easily shareable.

A recent article in Nature reflected on the experience of the 1918 pandemic and the risks a future outbreak could pose. The author, Heidi Larson, a professor of anthropology at the London School of Hygiene and Tropical Medicine, predicted that “the next major outbreak will not be due to a lack of preventive technologies”, but to “the deluge of conflicting information, misinformation and manipulated information on social media”.

Trolls and bots lead the way

When Larson wrote about the spread of misinformation in 2018, she used a term we’ve all become acquainted with recently: super-spreaders, just like with viruses. It’s an apt image for how internet trolls “stir up havoc by deliberately posting controversial and inflammatory comments”.

But while some of them are just bored individuals donning the invisibility cloak of the internet, others do this as a job, inflaming public opinion and disrupting social and political processes. This was also one of the conclusions of two Oxford researchers who documented several examples of how both governments and private companies manage “organized cyber troops”. These battalions of trolls and bots use social media to shape people’s minds and amplify “marginal voices and ideas by inflating the number of likes, shares, and retweets”.
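
A rough back-of-the-envelope sketch shows why this inflation works. The figures below are invented for illustration – the researchers give no such numbers – but they show how a modest coordinated operation can dwarf a post’s organic reach:

```python
# Hypothetical numbers, invented for illustration only.
ORGANIC_SHARES = 40    # genuine interest in a marginal post
BOT_ACCOUNTS = 300     # coordinated "cyber troop" accounts
SHARES_PER_BOT = 3     # each account shares/retweets a few times

total = ORGANIC_SHARES + BOT_ACCOUNTS * SHARES_PER_BOT
print(f"Apparent engagement: {total} shares, "
      f"of which only {ORGANIC_SHARES / total:.0%} is organic")
# Apparent engagement: 940 shares, of which only 4% is organic
```

To other users – and to ranking systems that treat share counts as a popularity signal – the post now looks more than 20 times more popular than it organically is.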

So how does social media deal with this?

Harder than identifying the people behind fake news is understanding what can be done to manage the content published on online platforms. For the past decade, The New Yorker wrote in 2019, Facebook had rejected the notion that it was responsible for filtering content, instead treating the site as a blank space where people can share information. Since then, fake news has not only impacted election results, but has also brought harm to people in real life.

Twitter, Telegram and YouTube have also been heavily criticized for their approach to misleading content, with some governments demanding more accountability and even considering regulation of these services over the spread of banned content or false and extremist ideas.

In January 2022, fact-checking websites from all over the world addressed YouTube in an open letter, alerting the world’s biggest video platform to the need to take decisive action, mainly by “providing context and offering debunks” rather than just deleting video content. The letter also addressed the need for “acting against repeated offenders” and applying these efforts “in languages different from English”.

What can be done?

Larson says “no single strategy works”, suggesting a mix of educational campaigns and dialogue. And while some countries do well on digital literacy and education, others don’t. The disparity is big, but we all converge on the same shared virtual space where no one really wants to dialogue, listen or engage.

But while digitally literate people are “more likely to successfully tell the difference between true and false news”, they are just as likely as anyone else to share fake news, simply because of how easy and immediate “a click” is. This was the conclusion of another recent MIT study, which makes the case for other types of tools.

This is where fact-checking platforms come in, researching and evaluating the quality of the information in a news piece or a viral social media post. However, even these resources have their limitations. Since reality is not always straightforward, most of these websites use a barometer-like pointer ranging from “false” and “mostly false” to “mostly true” and “true”. Likewise, the validity of this research can be discredited by those who don’t see their ideas confirmed, giving fakes an almost endless lifespan.

But we also have a role to play when it comes to discerning the real from the fake, and in the context of a war, this ‘individual work’ takes on even greater importance. Watch the video by ESET Chief Security Evangelist Tony Anscombe to learn a few tips for telling fact from fiction.
