
Did the pope really support Trump?

In a 2015 article in the Australian Educational Computing journal, I wrote that the increasing use of satirical and misleading websites was catching out lots of people, especially those in a “Facebook bubble”, a variation on the filter bubble first described by Eli Pariser in his New York Times bestseller, The Filter Bubble: What the Internet Is Hiding From You.

I used the metaphor of a gatekeeper: the librarian’s purchases, perhaps, or a known news outlet, or a book spine that proudly proclaimed “Oxford University Press”. These hallmarks were meant to protect us from being misled, and even though there were papers such as the Truth and the National Enquirer, we pretty much knew they were nonsense.

It got a little more complicated when News of the World, the Daily Mail, and the Mirror raised their ugly mastheads, pushing a mix of racism and soft porn, only occasionally entering into social or political commentary.

Web 2.0, and social media in particular, made publishing your opinion easy, and over the last decade we’ve seen those opinions slide into gilt-edged rumour, rarely backed by any fact. So how do we know whether the pope really did support Trump for president (he didn’t), or whether Hillary really did sell weapons to ISIS (she didn’t)?

The US comedian and talk-show host Stephen Colbert coined the term “truthiness”, which describes a ‘“truth” that a person making an argument or assertion claims to know intuitively “from the gut” or because it “feels right” without regard to evidence, logic, intellectual examination, or facts.’ [https://en.wikipedia.org/wiki/Truthiness]

At Victoria University of Wellington, New Zealand, Eryn Newman’s 2012 research showed that the inclusion of a photograph sways the reader’s opinion as to the truth of the text. She is quoted as saying: ‘the research has important implications for situations in which people encounter decorative photos, such as in the media or in education. “Decorative photos grab people’s attention,” Newman said. “Our research suggests that these photos might have unintended consequences, leading people to accept information because of their feelings rather than the facts.”’

A perfect recipe for Facebook posts, and haven’t we seen a lot of it in the lead-up to the Trump victory in the US?

This has been exacerbated by the seemingly widespread adoption of false equivalency, equating a false news service with mainstream media in the name of freedom of speech.

But it was not only the so-called ‘alt-right’ US-based sites that were given this equivalency; BuzzFeed reported that ‘Over the past year, the Macedonian town of Veles (population 45,000) has experienced a digital gold rush as locals launched at least 140 US politics websites. These sites have American-sounding domain names such as WorldPoliticus.com, TrumpVision365.com, USConservativeToday.com, DonaldTrumpNews.co, and USADailyPolitics.com. They almost all publish aggressively pro-Trump content aimed at conservatives and Trump supporters in the US.’

Melissa Zimdars, an assistant professor of communication at Merrimack College in Massachusetts, put together a list of false and misleading news sites. A good start, but because she included The Onion and other known satirical sites, her list was called into question.

Since then, many others have built their own lists. There’s even a website of lists, fakenewswatch.com, and it’s hard to keep up not only with outright fake sites but with those that are merely misleading [breitbart.com] or designed purely as clickbait. That last category is the reason most fake news sites are built: to amass income from Google’s AdSense advertising network.

Does it matter? Can’t we just laugh off those who were duped and refer to snopes.com every time something comes up that makes us question what we’re reading?

Voltaire, in his 1765 essay Questions sur les miracles, wrote: “Those who can make you believe absurdities can make you commit atrocities.”

I don’t want atrocities committed, so I’ll raise a couple of pertinent points.

Pew Research Center data shows that nearly half of voting-age US citizens get their news from Facebook, up from just over 30% when I wrote the original article referenced above. Combine that with BuzzFeed’s analysis that ‘In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlets such as the New York Times, Washington Post, Huffington Post, NBC News, and others’, and it’s reasonable to ask whether these sites had an impact on the recent US election.

So Google kicked off the action last Monday afternoon, saying it would ‘ban websites that peddle fake news from using its online advertising service. Hours later, Facebook, the social network, updated the language in its Facebook Audience Network policy, which already says it will not display ads in sites that show misleading or illegal content, to include fake news sites.’ [New York Times]

But do we want the providers to be our gatekeepers? This is a real cleft stick: do we want Google or Facebook to be the arbiter of what is and isn’t right? Or what is or isn’t satire?

Maybe we could try a different approach—let’s use the technology.

There’s a Fake site alert Chrome extension based on the work of Melissa Zimdars, the Merrimack College communication professor I referred to earlier.

Installing the extension produces an alert like this on reaching a known fake site:

[Screenshot: the extension’s warning banner displayed on a known fake site]

A good start, but this is a blunt instrument.
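
To see why it’s blunt, consider how such an extension has to work. Here is a minimal sketch, not the real extension’s code: a hard-coded blocklist and a content script that injects a warning banner when the current hostname matches. The example domains are the Veles ones from the BuzzFeed quote above, standing in for a curated list like Zimdars’.

```typescript
// content-script.ts: a minimal sketch of a blocklist-style "fake site alert".
// Not the actual extension's code; the list entries are illustrative.

const FAKE_DOMAINS: string[] = [
  "worldpoliticus.com",
  "trumpvision365.com",
  "usconservativetoday.com",
  "usadailypolitics.com",
];

// Normalise the hostname and check it (and any subdomain of it) against the list.
function isFlagged(hostname: string): boolean {
  const host = hostname.toLowerCase().replace(/^www\./, "");
  return FAKE_DOMAINS.some(
    (domain) => host === domain || host.endsWith("." + domain)
  );
}

// If the current site is on the list, inject a warning banner at the top of the page.
if (isFlagged(window.location.hostname)) {
  const banner = document.createElement("div");
  banner.textContent =
    "Warning: this site appears on a list of fake or misleading news sources.";
  banner.style.cssText =
    "position:fixed;top:0;left:0;right:0;z-index:99999;" +
    "background:#b30000;color:#fff;padding:12px;text-align:center;font-weight:bold;";
  document.body.prepend(banner);
}
```

A site missing from the list sails through unchallenged, while a satirical site that is on the list gets the same red banner as a deliberate hoax, which is exactly the criticism levelled at Zimdars’ original list.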

During a recent hackathon at Princeton University, four college students created a Chrome browser extension in just 36 hours. They named their project FiB: Stop living a lie. FiB uses artificial intelligence to classify whether a source is valid, whether a tweet’s image can actually be attributed to its claimed author, and so on.
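
The students’ actual model isn’t described, but the general shape of such a checker is easy to sketch: several weak signals, each imperfect, folded into a single verdict the reader is asked to trust. Everything below, the signals, the weights and the cut-off, is hypothetical and purely illustrative, not FiB’s method.

```typescript
// credibility.ts: an illustrative sketch of combining weak signals into one verdict.
// Not FiB's code; the signals, weights, and threshold are all hypothetical.

interface Signal {
  name: string;     // what was checked
  weight: number;   // how much the check contributes to the verdict
  passed: boolean;  // whether the check succeeded
}

// Fold the individual checks into a single score between 0 and 1.
function credibilityScore(signals: Signal[]): number {
  const total = signals.reduce((sum, s) => sum + s.weight, 0);
  const earned = signals.reduce((sum, s) => sum + (s.passed ? s.weight : 0), 0);
  return total === 0 ? 0 : earned / total;
}

// Example: a post whose image checks out but whose claim nobody else corroborates.
const score = credibilityScore([
  { name: "source not on a known-fake list", weight: 0.4, passed: true },
  { name: "image attributable to the claimed author", weight: 0.3, passed: true },
  { name: "claim corroborated by other outlets", weight: 0.3, passed: false },
]);

// With a hypothetical cut-off of 0.75, a score of 0.7 is reported as unverified.
console.log(score >= 0.75 ? "verified" : "not verified"); // prints "not verified"
```

The verdict looks authoritative, but it is only as good as the signals and the cut-off someone else chose, which leads to my next two objections.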

Irrespective of their intentions, I can see a game of Whack-a-Mole happening here, where one site is killed and another pops up to replace it.

As well, we’re outsourcing our BS detection and dumbing down our approach to news, which is already hamstrung by blaring headlines and little in-depth analysis.

So I ask: aren’t these tech solutions to what is essentially a sociological problem?

As educators, aren’t you seeing the same approach here as we see with sites deemed bad by schools? Banning YouTube or Facebook in a school provides no peg on which to hang valuable discussions. Effectively outsourcing our native BS detectors will remove that same peg. We’ll be left with education washing its hands, Pilate-like, and claiming it has done its duty by banning access.

More, not less, exposure and discussion is needed here.
