Many people in my social sphere were equal parts amazed and creeped out by the debut and demonstration of Google Assistant, which is capable of making phone calls, holding a conversation and scheduling appointments. I admit the idea of never having to make phone calls again does appeal.

Presumably, the idea that using AI for customer service was ever divisive or weird will one day seem charmingly archaic. Assuming, of course, that we’re not headed for an ’80s dystopian movie future where the machines try to slaughter us all …

If you had a conversation with a bot but didn’t know it was a bot, and the issue was satisfactorily resolved, you wouldn’t have cause to care. (I’ll be using bot and AI here interchangeably for convenience.)

Perhaps you might care if you found out later. Humans don’t tend to react well to identity deception.

Though I suspect an unsatisfactory customer experience would give you much greater cause for annoyance than the agent of that experience, whether bot or human.

Of course, this raises the question of how much “machine learning” you should subject your (potential) customers to while teaching your AI to work better, without providing an escape hatch to reach a human if needed.

You know there will be companies that go all-in on automated admin or support channels. It would be a gamble on the efficiency and resource savings outweighing the potential loss of customers. It would also be a strong incentive to get the tools really good, really fast.

Thinking back to the earlier question, though, is interacting with a bot without disclosing that it’s a bot really deception? Why would there be an assumption of bad faith just because a non-human is performing an activity usually done by humans, especially if it’s just as efficient, or more so?

Is the source of the weirdness the bot entity itself, the specific activity it’s being used for, or the fact that something that was once producible only by humans (a voice, a conversation) isn’t solely “human” anymore? Are we uncomfortable moving one step closer to failing a society-wide Turing test?

Perhaps we just need to be convinced that AI can work as efficiently as (or more efficiently than) people. Or we just need to get used to bots doing things for us that we’ve never liked doing anyway. A few positive interactions should do the trick. It doesn’t feel weird talking to Siri or Alexa anymore, does it?

Now, I would never have accepted my current job if our company’s customer-facing work required being on the phone. At the same time, though, there are those for whom making phone calls is vastly preferable, or even necessary based on accessibility needs.

Wouldn’t AI make this a win-win all around? Those who don’t want to use the phone don’t have to, and those who need or want to still can, and have good customer experiences.

We’ve also only seen a thin sliver of the potential public applications of AI tools. Whether it’s Google Assistant or the work of any number of startups, so far everything seems to be administrative or customer service-centric. While we don’t have to go all in on all AI, all the time, surely there are a million other human-benefitting applications.

Based on experience, the more easily people can get things done without having to read much or do it themselves, the better. (People particularly don’t tend to read well online.) There will always be people who refuse to read the manual or who have comprehension troubles. You will never have content, documentation, or tools that work perfectly for everyone.

If those things fail and someone has a less than seamless experience with your company, whether that’s your “fault” or theirs is irrelevant if your goal is to be a successful company with, y’know, paying customers. This is of course assuming that AI-driven tools can effectively help replace reading or DIY.

Should we have bot disclaimers? What if you answered the phone and were greeted with: “Hello, this is Google Assistant on behalf of …” Or when the chat window opens you get: “Hello, this is XYZ Company’s AI-driven virtual assistant. What can I help you with today?”

Although, from the Google Assistant demo, it really seemed like a big part of the schtick (and the tech goal) was that it be indistinguishable from a person. I suspect a lot of work and resources are dedicated to that goal.

Of course, there’s one other issue regarding the intersection of AI and human behaviour that bears mentioning. People are often not on their best behaviour online.

Now, occasionally this can result in something charming, like the polar research ship almost named Boaty McBoatface. But most of the time … really not charming.

I have no idea what the mischievous and hacking-inclined could get up to with widespread public-facing AI tools. I’m old and a front-end kinda gal, so I suspect even my wildest imaginings would be way too conservative. But my gut says it could get ugly. Really ugly.

Given some of our biggest tech companies are notoriously slow, disingenuous, or apathetic about abuse of their platforms, I don’t hold out a lot of hope for AI tools to be better or more securely built, or for abuse of them to be addressed and fixed any faster, should it come to pass.

So where does that leave us? Well, AI is already here, and more of it will be coming. I’ve no doubt about that. For better or worse, its programming and its learning come from us. Its failings are our failings. That may not make much difference when booking a hair appointment, but it will for bigger and more important things.

It just remains to be seen whether expecting our bots to be better people than we are is a fool’s errand or not.

M-Theory is an opinion column by Melanie Baker. Opinions expressed are those of the author and do not necessarily reflect the views of Communitech. Melle can be reached on Twitter at @melle or by email at me@melle.ca.