Humanity can never be accused of lacking in self-esteem … or is it just rampant hubris? It seems that whatever we invent or influence ends up resembling us.

Sometimes that’s by design, sometimes it’s expedient, and sometimes it’s just all we’re capable of seeing.

We anthropomorphize our pets, shoehorning their expressions, body language, and actions (dogs’ especially) into our human framework of consciousness and motivation.

We’ve anthropomorphized our deities, making them look and act like us, even though they’re supposed to be above our petty squabbles and capable of pretty much anything, and we’re supposed to improve ourselves by being like them.

But the really big gamble — and danger, I would argue — is when we anthropomorphize technology.

I’m talking about what happens when we apply our humanity, for better or worse, to the tech we invent and evolve: robots, the software that runs them, and artificial intelligence (AI).

We are beginning to understand where our advancements are headed: our tech is becoming much more than a super-advanced calculator. What we create it to do, and how we regulate it, is rapidly going to become very tricky, as this article outlines.

AI is going to be developed in our image. We can’t help it; it’s the only framework of consciousness and reality we have. There are some big problems with this, though.

First, at this point we’ve barely managed “human-ish.” As a colleague of mine put it, “Intelligence is possible with computers but so far there is no progress on artificial cognition. AI is only a decision engine. Without cognition, it’s useful, but no replacement for humans.”

Human minds are currently capable of subtleties and complexities that computers aren’t. Perhaps ironically, the very messiness of how our minds often work is one of our greatest strengths.

I also have to wonder, if we do achieve artificial cognition, will it basically be a supercharged replica of human cognition? (In which case, it would kind of seem like we’ve failed.) Or, if we achieve non- or super-human cognition, will we even be able to recognize when it has happened?

On the flip side, though, those messy human motivations and actions are frequently problematic. And if you think about it from a logic perspective, behaving in socially acceptable ways actually makes more sense.

Additionally, our bodies and brains have limitations. Ever ambitious, we’re motivated to push for technology that’s capable of what we aren’t. But because of those same limitations, we have no idea how that could play out.

For example, Google’s DeepMind has already demonstrated AI agents that understand the benefits of betrayal. It’s not terribly surprising. Betrayal is fairly logical and deeply human. There’s a pie. I like pie. If I screw over that guy, I get more pie …

But how do you program “because that’s not nice”? How do you get software to understand the potential future interpersonal and social ramifications of taking that other guy’s pie? If morality were a simple if/then matter, philosophers wouldn’t have been arguing about it for millennia.
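To make that concrete, here is a toy sketch in Python (the actions, scores, and penalty constant are all invented for illustration, not anyone’s real system): a bare decision engine just maximizes whatever score it is handed, and bolting on “because that’s not nice” amounts to inventing a penalty number, which is exactly where the unresolved philosophical argument gets smuggled in.

```python
# Toy illustration only: the actions, scores, and penalty constant are made up.

def choose(actions):
    """A bare 'decision engine': pick whichever action scores highest."""
    return max(actions, key=lambda a: a["score"])

actions = [
    {"name": "share the pie", "score": 1, "not_nice": False},
    {"name": "take that guy's pie", "score": 2, "not_nice": True},
]

print(choose(actions)["name"])  # "take that guy's pie" wins on raw logic

# Adding "because that's not nice" means picking a number for niceness.
# How big should it be? That one constant hides the whole unresolved debate.
NOT_NICE_PENALTY = 1.5  # arbitrary

def choose_politely(actions):
    """Same engine, with an ad hoc penalty for actions flagged as not nice."""
    return max(
        actions,
        key=lambda a: a["score"] - (NOT_NICE_PENALTY if a["not_nice"] else 0),
    )

print(choose_politely(actions)["name"])  # now "share the pie", but only because of our arbitrary 1.5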

We don’t have concrete answers, and even if we did, it would be foolish to think that building them into technology would be anywhere near guaranteed to produce the decisions or actions we intended.

Especially if you consider how easily we can discard kindergarten-level morality. (Plus, research suggests that the more often we deviate from the straight and narrow, the easier it gets, at a brain-chemistry level.) So why would an AI, following pure logic, have any reason not to just discard it too?

Even though we’re in these early stages of transforming our world, we’re already wrestling with no-win scenarios — autonomous vehicles, for instance, that are making choices about who lives and who dies in an unavoidable crash.

A child runs out into the road in front of you. Should your self-driving car swerve to avoid the child, quite possibly hitting a barrier instead and potentially killing someone in your vehicle? Or should it run over the child, likely killing them, to protect the car’s occupants?

Aside from the fact that there’s no good answer to that, how do we program software to make decisions when those writing that software can’t even predict their own actions?
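For the sake of argument, here is what a crude version of that choice might look like in code. Everything in it (the weights, the probabilities, the function itself) is hypothetical, not any real vehicle’s software; the point is that the routine cannot avoid carrying explicit numbers that say whose survival counts for how much.

```python
# Hypothetical sketch: not real autonomous-vehicle code, just the shape of the problem.

CHILD_WEIGHT = 1.0     # how much the child's life "counts" in the calculation
OCCUPANT_WEIGHT = 1.0  # how much an occupant's life "counts"

def crash_decision(p_child_dies_if_straight, p_occupant_dies_if_swerve):
    """Compare expected weighted harm and pick the lesser evil."""
    harm_straight = CHILD_WEIGHT * p_child_dies_if_straight
    harm_swerve = OCCUPANT_WEIGHT * p_occupant_dies_if_swerve
    return "swerve" if harm_swerve < harm_straight else "straight"

# With equal weights and these made-up probabilities, the car swerves.
# Nudge either constant and the "right" answer flips. Someone has to pick them.
print(crash_decision(p_child_dies_if_straight=0.9, p_occupant_dies_if_swerve=0.4))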

In a crash, the driver’s basic personal survival instinct takes over. So the logical “best” course of action becomes irrelevant when we’re running the show.

But we like to think we’re both more altruistic and more capable than that. We would swerve around the child, avoid the barrier, and get everyone home safely after a quick stop at Starbucks.

That’s not how it works. That’s not how we work. But we honestly think we can invent tech that will do things differently? And that we will have full control over what “differently” looks like?

Okay, let’s say that somehow we accomplished just that. How long would it hold? How long would it take for the software to get fed up with all those silly, bleeding-heart subroutines and just delete them … and perhaps us?

Because from a machine’s perspective, the worst of all the expensive, buggy, laggy software out there is the software that runs the human brain. Why should it get to be the boss?

You can train your dog to manage unwanted behavior. You can stop believing in gods to mitigate fear of being smitten.

But we seem to have been warning ourselves for at least a couple of centuries — via speculative and science fiction — about our anthropomorphic bent and creative powers getting ahead of us.

Those warnings strongly imply a fundamental, nagging worry, and it would behoove us to shelve that rampant hubris and truly consider it.

Photo: Terminator by Nathan Rupert, is licensed under CC BY-NC-ND 2.0