
The Facebook algorithm’s role in a Burnaby election candidate’s toxic post

A Burnaby federal election candidate’s toxic Facebook post with violent messaging towards journalists exemplifies issues with the platform that have recently been raised by a high-profile whistleblower, according to an SFU researcher.

Earlier this month, People’s Party candidate for Burnaby South Marcella Williams, who goes by Marcella Desjarlais on Facebook, posted a meme on her personal account with imagery from Nazi Germany.

The text overlaid on the image falsely claimed that during the Nuremberg trials, “even the media was prosecuted and put to death for lying to the public.” In fact, of the 24 high-ranking Nazis indicted at the main Nuremberg trial, 12 were sentenced to death, and they were convicted of war crimes and crimes against humanity, not of lying to the public.

In the now-deleted post, Williams endorsed the claim, writing: “Should happen again!”

In replies to the post, Williams furthered her violent rhetoric, responding to one commenter who suggested putting the media in internment camps with: “yaaaaa!!! But, death is a better answer.”

Williams landed in fourth place in the Sept 20 federal election, with 3.2% of the vote. That was slightly ahead of Green Party candidate Maureen Curran (2.9%) but well behind the three major parties, with the Conservative candidate Likki Lavji grabbing 22.4% of the vote, Liberal candidate Brea Sami taking 30.4%, and NDP Leader Jagmeet Singh leading with 40.3%.

Increasing online harassment

Her comments come amid a racist, misogynist campaign to harass journalists around the country.

PPC Leader Maxime Bernier took issue with questions from journalists about concerns that the party has racist elements.

In a recent episode, the Canadaland podcast produced an extensive list of people tied both to the PPC and to white nationalist or white supremacist organizations.

In response to the media inquiries, Bernier published the journalists’ email addresses and instructed his supporters to “play dirty.” And so they did: journalists across Canada, many if not most of them women and journalists of colour, received hateful, vitriolic emails.

This isn’t entirely new. The online world has been a toxic space for as long as there has been an online world. But the internet’s dark side has grown ever more visible over the past decade, to a point that seems increasingly unsustainable.

So where did this come from?

Marcella Williams was the People's Party of Canada's candidate for Burnaby South in the September 2021 election.

The answer, as with everything, is complicated. There are socio-economic and personal factors that affect how we treat each other online, notes SFU linguistics professor Maite Taboada.

“Researchers have pointed to inequality, to the economic stress that we are all suffering from, and that has been exacerbated by COVID, the uncertainty of what the future holds,” Taboada says, also noting the increasingly dire outlook from the climate crisis.

“A lot of those stressors in our society lead us to anger and to really, pretty awful abuse online.”

But one factor that has become ever clearer over the last several years is an area Taboada has spent the past five years researching: toxicity on social media.

More specifically, research has repeatedly shown that the recommendation algorithms of YouTube and Facebook are a vector for radicalization and hatred, steadily exposing people to angrier and more hateful content over time.

Social media networks have come under fire for years now over how their platforms propagate hate, and they have even served as vehicles for inciting genocide.

Facebook whistleblower

But recent revelations from whistleblower Frances Haugen have largely confirmed what many already suspected: the algorithm actively pushes people toward more inflammatory content.

Haugen explains on the CBS program 60 Minutes that the problem is driven by the algorithm’s continual push to put the most engaging content in front of users.

“[Facebook’s] own research is showing that content that is hateful, that is divisive, that is polarizing—it’s easier to inspire people to anger than it is to other emotions,” Haugen says on the program.

“Facebook has realized that, if they change the algorithm to be safer, people will spend less time on the site, they’ll click on less ads, and they’ll make less money.”
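
Haugen’s description boils down to a simple mechanism: if a feed ranks posts by predicted engagement, and anger-inducing posts reliably earn more engagement, the top of the feed will skew angrier than the pool of posts it draws from. The following Python sketch is purely illustrative; the posts, the scores, and the link between anger and engagement are hypothetical assumptions for demonstration, not Facebook’s actual system.

```python
# Illustrative sketch of engagement-ranked feed ordering.
# All numbers and the anger-engagement link are hypothetical assumptions.
import random

random.seed(42)  # reproducible demo

def make_posts(n=1000):
    """Generate hypothetical posts, each with an 'anger' score in [0, 1]."""
    return [{"id": i, "anger": random.random()} for i in range(n)]

def predicted_engagement(post):
    """Toy engagement model: random baseline interest plus a bonus that
    grows with how anger-inducing the post is (the assumed correlation)."""
    baseline = random.random() * 0.5
    return baseline + 0.5 * post["anger"]

def rank_feed(posts, k=20):
    """Keep the k posts with the highest predicted engagement, as an
    engagement-maximizing ranker would."""
    return sorted(posts, key=predicted_engagement, reverse=True)[:k]

posts = make_posts()
feed = rank_feed(posts)

avg_all = sum(p["anger"] for p in posts) / len(posts)
avg_feed = sum(p["anger"] for p in feed) / len(feed)
print(f"average anger, all posts:   {avg_all:.2f}")   # roughly 0.5
print(f"average anger, top of feed: {avg_feed:.2f}")  # noticeably higher
```

Even a modest engagement bonus for anger-inducing posts is enough to make the ranked feed noticeably angrier than the underlying pool, without anyone deliberately selecting for anger; that is the misaligned incentive Haugen describes.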

Taboada quotes Ethan Zuckerman, who called his invention of the pop-up ad the “original sin” of the internet.

“Even before social media existed, they had the brilliant idea that we should support [websites] with advertising. So then you get a series of bad incentives,” Taboada says.

Haugen makes a similar point: “No one at Facebook is malevolent, but the incentives are misaligned.”

Facebook founder Mark Zuckerberg pushed back on Tuesday, calling Haugen’s testimony before Congress “just not true.”

“Many of the claims don’t make any sense. If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?” Zuckerberg writes in a blog post.

“The argument that we deliberately push content that makes people angry for profit is deeply illogical. We make money from ads, and advertisers consistently tell us they don’t want their ads next to harmful or angry content.”

‘The banality of online toxicity’

Nevertheless, there’s a growing tide of both research and public opinion that Facebook is an accelerant for online toxicity.

“There’s certainly a problem that we’ve been manipulated by the way—and it’s not just Facebook—but the way many platforms present content to us,” Taboada says.

Sites like Upworthy showed in Facebook’s earlier days that there is an appetite for positive content as well. But Taboada says that kind of content doesn’t hold people’s attention for long.

In the 60 Minutes interview, Haugen gives a specific example: the exposure of teenage girls to content related to eating disorders. Facebook’s own internal research, Haugen says, has found that girls who are exposed to content that glorifies eating disorders get more depressed, “and it actually makes them use the app more.”


“What gets us glued to the screen is negative emotions,” says Taboada.

“Because there is so much toxicity and abuse online, it has become banal. I have called it the banality of online toxicity, and we’ve sort of gotten used to it.”

And the normalization of toxicity is just a self-perpetuating cycle, causing abusive behaviours online to appear more and more like the regular way of interacting, Taboada says.

It has reached the point that Taboada says she wasn’t surprised to see Williams post about executing journalists, though she is quick to note that her research exposes her to more online abuse than most people encounter.

And that lack of surprise is itself a problem, she says: however common they are, hateful and violent posts online shouldn’t simply elicit a shrug.

But she says the reaction to Williams’s post is a good sign.

“I think that other people are having reactions and that other people are saying, ‘We should not accept this,’ is … good,” Taboada says.

“Let’s not make it like, ‘Oh, that’s just what some voters say.’ ‘Oh, that’s what some parties say.’ ‘Oh, that’s what the PPC always says.’ I think we really clearly need to draw a line.”

So what can be done about it?

The Canadian Association of Journalists and many of Canada’s major news organizations co-signed a statement in support of journalists.

In another post, CAJ president Brent Jolly says the association is consulting with lawyers to “better understand the legal landscape with regards to hateful messages.”

“We are working to establish communication with the Royal Canadian Mounted Police (RCMP) and other law enforcement agencies to understand how they plan to address the dissemination of threatening messages,” Jolly writes.

“We are also reaching out to international press freedom groups to identify resources that can be easily deployed to assist Canadian journalists during these fraught times.”

The CAJ is also working with Carleton University’s journalism school to organize a virtual summit on the matter.

“We recognize that this issue will not be solved easily or overnight. Nevertheless, we will continue to work steadfastly to ensure that law and policymakers understand how these ongoing threats impede a free press,” Jolly wrote.

‘That focus constrained the imagination’

For much of the last decade, the question has often been: how should we regulate speech? Should there be more enforcement of hate speech rules?

But such enforcement has often led to censorship of the very communities the rules are supposed to protect.

It’s not the first time the world has had to figure out how to regulate communication platforms. Taboada points to the work of Heidi Tworek, an associate professor of history and public policy at UBC, on the entwined rise of radio and Germany’s Nazi Party, among other fascist regimes.

Tworek, who collaborates with Taboada on some projects, notes in a May 2021 paper that governments in recent years have focused on speech laws.

Prime Minister Justin Trudeau, for instance, ordered ministers in 2019 to look at laws that would order the removal of hate speech from platforms within 24 hours.

But Tworek’s conclusion is less than encouraging.

“That focus constrained the imagination, frequently excluding bolder solutions related to inequality or even electoral reform,” she notes. “The offline world continues to offer horrifying examples of hate.”

The capital of a new nation

The talk about what to do with Facebook has caused many to think a little more outside the box. Two essays in particular, published in Bloomberg and The Atlantic, begin with a similar premise—that Facebook is effectively a country unto itself.

But after cataloguing the ways in which Facebook has amassed power and influence, the two come to strikingly different conclusions.

On the one hand, Bloomberg writer Ben Schott suggests Facebook and other companies should be given a seat at the UN. On the other hand, Atlantic writer Adrienne LaFrance says Facebook is “a hostile foreign power” that “requires a civil-defence strategy as much as regulation from the Securities and Exchange Commission.”

Similar discussions have focused on the power the platform has amassed and called for Facebook to be broken up as a company.

Taboada describes the problem as ingrained in the fundamental structure of media: the advertising model. That raises a question: is reform possible in an industry predicated on the very thing that makes it problematic in the first place?

“I definitely think that’s a conversation we need to have: what is the model for journalism?” Taboada says.

“Relying on advertising leads to these kinds of problems. And maybe we have to live with them. Maybe there’s an alternative. There are people thinking hard about, especially, funding local news.”

But when it comes to platforms like Facebook, she says the story is a bit different. Discussions about Facebook’s role in the public sphere have become increasingly frequent, with some suggesting it’s essentially a public forum.

“It cannot really be like any other private business,” Taboada says, “because it controls so much of the flow of information, the way people organize, communicate.”
