Years ago I worked at a company that made a tool called PostRank, which used “social engagement” to determine what content online was worth reading. 

People’s interests were bigger and broader than their free time – think overflowing RSS readers. So this ranking functionality helped determine what was worth time, attention and amplification.

At that time, what we called “social engagement” meant people’s reactions to things they saw or read: likes, shares, comments, retweets, etc. These actions were also weighted for scoring based on the amount of effort involved. 

Ultimately, each piece of content got a score out of 10. You could filter out anything under 7, for example, and not even see it.
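The scoring described above can be sketched in a few lines. To be clear, this is a hypothetical reconstruction: the action names, weights, and saturation cap below are made up for illustration and are not PostRank's actual formula.

```python
# Hypothetical engagement-weighted scoring, loosely inspired by the idea
# described above -- not PostRank's real algorithm or real weights.

# Higher-effort actions get higher weights (assumed values).
ACTION_WEIGHTS = {
    "view": 0.1,
    "like": 1.0,
    "share": 2.0,
    "retweet": 2.0,
    "comment": 3.0,
}

def engagement_score(actions, cap=100.0):
    """Combine weighted action counts into a score out of 10.

    `actions` maps an action name to its count. `cap` is an assumed
    saturation point: anything at or above it earns a full 10.
    """
    raw = sum(ACTION_WEIGHTS.get(name, 0.0) * count
              for name, count in actions.items())
    return round(min(raw, cap) / cap * 10, 1)

def passes_filter(actions, threshold=7.0):
    """Filter out anything scoring under the threshold, e.g. 7 out of 10."""
    return engagement_score(actions) >= threshold
```

Under these made-up weights, a post with 20 comments and 30 likes scores 9.0 and survives a threshold of 7, while a post with only a handful of likes gets filtered out before you ever see it.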

These days this system seems primitive, but a decade or more ago it was at the cutting edge of trying to measure, organize and make sense of the Wild West of the exploding social web. 

While it wasn’t an explicit intention at the time, the analysis did lean toward the positive. There weren’t specific value judgements or considerations of writing quality, humour, political bias, social sensitivity, etc. But the general assumption was that you usually engaged with or shared the stuff you liked.

I mean, even back then reading the comments was a bad idea, but perhaps we hadn’t yet perfected the hate-read/hate-follow, and you don’t click “like” on something you think is awful. 

But it makes you wonder: if such a tool were being created today, how would it be different? I highly doubt that it would be based on filtering for the “good” stuff. It would much more likely be focused on enabling better management of the “bad” stuff. 

So with that in mind… a couple of weeks ago I’m at an 85 Queen event at the Kitchener Public Library: “Hidden Networks: From Trump to Harry Potter to Bitcoin.” It was presented by Ryerson professor and University of Waterloo alumnus, Dr. Anthony Bonato.

I no longer recall, exactly, what info nugget brought this idea to mind. There was some analysis of Trump’s tweets, so it was likely that. (For reasons that should be clear shortly.)

My brain started to chew on the notion of PostRank for today. And from there, the idea of social engagement analysis explicitly for... negative engagement. 

The first question that came to mind regarding this imaginary tool was whether it would make more sense for it to analyze how negatively skewed the content itself was, or negative engagement with content of any kind. I suspect the latter would be easier than the former.

Social platforms still offer comparatively little in the way of “dislike” functionality. You might share something because it pissed you off or because it made you laugh – the action looks exactly the same.

But nowadays we have Big Data and AI, so presumably algorithms could be trained to take a pretty good crack at analyzing engagement language and format, previous social activities, etc. in order to tell how you’re engaging with content.
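As a toy illustration of that idea: a real system would use trained sentiment models plus a user's history, but even a crude keyword heuristic shows the shape of the problem. Everything below – the marker lists, the function name, the three-way labels – is invented for this sketch.

```python
# Toy heuristic for labelling the tone of one engagement (a comment,
# quote-share, etc.) as negative, positive, or unclear. A real system
# would use a trained model and behavioural history; these hand-picked
# keyword sets are purely illustrative.

NEGATIVE_MARKERS = {"awful", "hate", "terrible", "disgusting", "wrong", "shame"}
POSITIVE_MARKERS = {"love", "great", "brilliant", "hilarious", "agree"}

def engagement_tone(text):
    """Return 'negative', 'positive', or 'unclear' for one engagement."""
    words = set(text.lower().split())
    neg_hits = len(words & NEGATIVE_MARKERS)
    pos_hits = len(words & POSITIVE_MARKERS)
    if neg_hits > pos_hits:
        return "negative"
    if pos_hits > neg_hits:
        return "positive"
    return "unclear"
```

Notice how often the honest answer is "unclear" – a bare retweet with no comment carries almost no tone signal at all, which is exactly why classifying negative *engagement* is harder than it first sounds.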

On the surface, a tool that could identify and filter out content negativity would be really nice some days. Like… during a federal election campaign. Kitten videos for all!

Thing is, once people filtered out the bad stuff… would they ever turn the filters off again? The world has plenty of ugly and contentious stuff in it, but we still have to live here. 

Permanently ignoring and blocking it does nothing to work toward improving that. And even being able to try to ignore or block it is an act of staggering privilege.

Of course, one could also filter for the negative stuff. Just gorge your brain on a diet of hate, dog whistles, conspiracy theories, bad reviews and calls to hurtful action. Find more people who believe what you believe and turn an angry individual into an angrier community. Cuz we need more of that these days...

Backing up a bit, this raises the question of what constitutes negativity. What are the weighting criteria? Does just disagreeing with something make it “bad”? If there’s a moral judgement to be made, who gets to make it? “Hate speech” or “free speech”? “Lunatic rantings” or “government coverup”? 

Should we really be filtering out reporting on atrocities against civilian populations, or a high-profile sexual assault scandal, or a friend’s cancer treatment journey because it’s too sad? Uhh… it depends…?

Realistically, though, we already filter. Everything. On and offline we tend to surround ourselves with similar views and opinions. Social algorithms advance that, often without us even being aware of it. Let the algorithms run the show long enough and you can easily end up down a dark and ugly rabbit hole.

Let us also consider how PostRank met its end. (Well, acquisition, but really its end.) It was acquired by Google to beef up their Analytics tools. Back then, Google’s tagline was still “Don’t be evil.”

Nowadays, though, far fewer people trust the tech giants. But they are the ones most likely to have the resources to access All The Data and to build and run large-scale analytics that shape our online experiences. 

Originally, a tool like PostRank seemed helpful and wasn’t particularly controversial. But today? I could pretty much guarantee any *Rank would be. No matter who created it, but especially if it were in the hands of any of the tech giants. Or a government. Or anyone with an agenda. (Who doesn’t have an agenda these days?)

A decade ago, PostRank was a tool with limited scope and application for a much smaller internet. But even then, it simply wasn’t possible to analyze, rank, and filter even a decent portion of it. People had to individually define and attack their problem.

Nowadays the power, resources and expertise to attack The Problem are much greater, but any entity that could tackle it would end up with skewed results, built-in biases and hidden backdoors. We’ve seen too much evidence of that already.

Perhaps we go back to the beginning. We can’t process the entire internet, but when we’re online we – ourselves – can consider what’s worth our time, attention and amplification. We can reset our filters and admit that not everything we’re uneducated about or disagree with is automatically “bad.”

We can say no to being spoon-fed by algorithms. We can keep questioning the agendas of content creators and platforms. No one asked for permission or consent to make the internet what it is today. So we don’t have to ask to take it back, either.