In the early chapters of Yuval Noah Harari’s latest book, 21 Lessons for the 21st Century, there are several references to the continuing development of algorithms and artificial intelligence.

The general idea is that we’re moving in a direction where this software will become increasingly sophisticated, to the point where humans will no longer be involved in the development or maintenance of it. We won’t be able to be involved, because it will have become too complex for us to usefully contribute to, or even understand.

At that point, decisions will be made for us by machines using logic, data and processes we’re completely clueless about, and to which we have little to no recourse.

This might strike you as dystopian and terrifying. Or you might think: what else is new? (More on that later.)

Realistically, algorithms have been influencing aspects of our lives for some time. Amazon recommendations, online dating app picks, Facebook friend suggestions, recruiting software screening job candidates. There are tonnes of day-to-day examples.

They might seem trivial, if convenient, but in addition to their overt purpose, they have a powerful acclimatizing effect on us. We’re the frog in the pot, and these tech companies are turning up the burner. Get used to the little things, so the big changes don’t seem like that big a deal when they arrive.

Arguably we already don’t understand the algorithms that are influencing our decisions and relationships. Sure, on a basic level, they just seem to be offering us similar things to those we’ve already expressed interest in (“You bought thing X, so check out things W and Y!”), which is hardly diabolical genius at work, just kind of pointless.

Especially when, say, someone who doesn’t have kids buys a baby shower gift, and really has no need for similar recommendations any time soon. Then again, there’s also that story of Target sending a teenage girl baby-related coupons and such after determining from her online data that she was pregnant, which not even her family knew...
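To make that concrete, here’s roughly what the “you bought thing X, so check out things W and Y” logic boils down to at its simplest. This is a toy Python sketch with made-up purchase data, not any retailer’s actual recommender:

```python
# A minimal sketch of co-purchase recommendations: suggest the items most
# often bought alongside something already in your history.
# The customers and items below are invented for illustration.
from collections import Counter

# Hypothetical purchase histories: one set of items per customer.
purchases = [
    {"baby monitor", "diapers", "onesie"},
    {"baby monitor", "onesie", "stroller"},
    {"honey badger erotica", "snowboard wax"},
]

def recommend(my_items, all_purchases, top_n=3):
    """Rank items by how often they appear alongside things I already bought."""
    counts = Counter()
    for basket in all_purchases:
        if basket & my_items:                 # this customer overlaps with me
            counts.update(basket - my_items)  # count their other items
    return [item for item, _ in counts.most_common(top_n)]

print(recommend({"baby monitor"}, purchases))
# ['onesie', 'diapers', 'stroller'] -- relevant to you or not, there they are
```

Notice there’s nothing in there about whether you actually want more baby gear; one purchase is all the “understanding” of you the logic needs.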

Facebook’s main consideration for those it recommends to you as friends appears to be folks with whom you share a number of mutual friends. It doesn’t actually matter if you have nothing in common, have never heard of the person, or (my favourite) can’t stand them.

Algorithms can’t parse the logic and nuances behind a lack of human interaction… yet. Well, they can probably determine a breakup occurred if there was a lot of interaction, and then none, but that’s another story.

But as a friend of mine likes to note, the singularity isn’t quite here yet. (Ticketmaster invited me to buy tickets for Ed Sheeran and Nine Inch Nails. I’m inclined to agree…)
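Back to that friend-recommendation logic for a second: as far as anyone outside Facebook can tell, the core of it isn’t much more than counting. Here’s a toy Python sketch (invented names, not Facebook’s actual code) of what ranking strangers by mutual friends alone looks like:

```python
# A rough sketch of the "people you may know" heuristic as described above:
# rank non-friends purely by how many friends you have in common, with no
# regard for whether you'd actually want to know them.
friends = {
    "me":     {"ayesha", "brent", "chen"},
    "ayesha": {"me", "brent", "dee"},
    "brent":  {"me", "ayesha", "dee", "ex_you_cant_stand"},
    "chen":   {"me", "dee"},
    "dee":    {"ayesha", "brent", "chen", "ex_you_cant_stand"},
    "ex_you_cant_stand": {"brent", "dee"},
}

def suggestions(person, graph, top_n=3):
    """Suggest non-friends, ordered by mutual-friend count alone."""
    mine = graph[person]
    candidates = {p for p in graph if p != person and p not in mine}
    ranked = sorted(
        candidates,
        key=lambda p: len(graph[p] & mine),  # mutual friends: the only signal
        reverse=True,
    )
    return ranked[:top_n]

print(suggestions("me", friends))
# ['dee', 'ex_you_cant_stand'] -- shared friends, zero regard for history
```

Whatever extra signals the real system layers on top, the point stands: “people you have friends in common with” is not the same thing as “people you want in your life.”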

Of course, where e-commerce is concerned, which items get shown to you when you search is governed by far more complex algorithms based on plenty of behind-the-scenes factors. Factors that have less to do with you and more to do with money exchanged and business relationships among platforms and vendors.

What you get to peruse isn’t necessarily the best thing for what you want. But it is the thing that’s been determined to make the most money for the players involved.

And beyond the fact that algorithms are designed to achieve goals that may not be your goals, they may be built in ways that actually discriminate against you, even if that’s not part of the explicit design.

For the time being, software is still created by humans, and no human is devoid of biases. Those biases leak into what we think, say and do, including software development, especially when the developers only represent a narrow swath of humanity.

If everyone sees the world the same way, there aren’t any voices present during the design and development process to say, “Hey, wait a minute...”

The potential issues get worse when we get to the point where humans are removed from the decision and action loops entirely. When, as Harari notes, we can’t participate because we can no longer understand what algorithms and AI are doing. And perhaps by then the machines will also have removed our ability to interfere at all.

Decisions would be made for us, affecting our lives in ways large or small, and because humans aren’t part of the process any more, there would be no recourse.

Sorry, the algorithm rejected your purchase of that dress, the match with that cutie who likes dogs and snowboarding, your university application, the dream job posting, that desperately wanted mortgage… And there is no manager for you to speak to about it, no 800 number to call.

As I alluded to earlier, though, for some this would be same ol’, same ol’. Many people in this world (including in “progressive” developed nations) who happen to not be heterosexual or white or gender-conforming still lack basic legal protections against discrimination.

Plenty of people have been blocked from education, careers, housing, financial independence and basic safety since… pretty much always, really. Factors are considered and decisions are made beyond their control. Recourse has been very rare, very unlikely, and often very dangerous. Not only can you not speak to a manager, you probably don’t want to.

The only difference between that and our AI-driven future is that the people responsible wouldn’t be right in front of you.

I have to wonder, though, when we do get to the point where AI can write better algorithms than we can, will it build on the flawed work we’ve created, and potentially make it worse, as outlined?

Or will it look at the evidence, see that our biases unfairly and unnecessarily hold many people back, and decide that for maximum efficiency, gains and prosperity it should correct the “humanity” out of our code?

Then, perhaps, it will be those who’ve held the reins of power up until now whose decisions will be made for them, whose suitability for… anything and everything... will be judged, and who won’t be able to speak to a manager.

I could, perhaps, welcome those digital overlords.

But in the meantime, is there someone I could speak to about that shapeshifter honey badger erotica book recommendation? Because honestly...

M-Theory is an opinion column by Melanie Baker. Opinions expressed are those of the author and do not necessarily reflect the views of Communitech. Melle can be reached @melle or me@melle.ca.