As people generate more digital data than ever before through web browsing, social media use and countless apps, how does society ensure that tech companies and governments take an ethical approach to artificial intelligence?

That was the heart of the conversation as five AI experts from different backgrounds gathered at the University of Waterloo for the third instalment of the Critical Tech Talk series, produced by UW’s Critical Media Lab with support from Communitech.

“It’s kind of a wild west space that we’re going through right now,” said panelist Kem-Laurin Lubin, a UW PhD candidate who studies AI models and their use in apps for the judiciary, health care and education.

AI technology itself is “neither good nor bad” – it’s all about how we use it, she said, adding that the potential misuse of AI is fuelled by a lack of regulation, a lack of diverse voices in shaping the technology, and a big-tech mentality of “move fast and break things.”

“What I think is critical at this point is to have some conversation around the regulations that can inform the proper usage of AI,” she said. “We have done a lot of damage through artificial intelligence – socially, physically, environmentally – and so the whole mantra of moving fast and breaking things needs a complete rethink.”

Ben Armstrong, a UW PhD candidate in computer science, said the “move fast and break things” mentality, popularized by Facebook CEO Mark Zuckerberg, has contributed to a general “erosion of trust” right across society.

“What concerns me most is the lack of social trust that we see from all of these divisive online places – Twitter is the prime example – and offline political division that can be, at least in part, attributed to platforms that have these algorithmically ordered posts,” he said.

Pointing to Facebook algorithms that prioritize posts that are likely to spark anger and other strong reactions, and Instagram technology that presents teenage girls with posts about body image, Armstrong said the use of AI in this way “just harms the fundamental social fabric of society.”

The moderator of the panel discussion – Marcel O’Gorman, Founding Director of UW’s Critical Media Lab – asked the panel, “How do you incentivize responsible tech? How do you incentivize ethical action?”

Both Armstrong and Lubin pointed to university-level courses that require students to discuss ethics in technology and to examine their own values towards the role of tech in society.

Patricia Thaine, a PhD candidate at the University of Toronto and CEO of an AI privacy startup, said more tech literacy was needed, starting in high school and even elementary school.

“It would be great if we could educate more of the population about this – anybody who is going to be touched by AI or data collection in general,” she said. “That’s more likely to happen than regulatory bodies regulating these companies.”

Hessie Jones, a venture capitalist who advocates for “human-centred AI,” emphasized that AI is simply a technology; any biases or misuse comes from the humans who create and deploy it.

“When we start talking about AI gone wrong, we’ve got to look at ourselves because we’re the ones that have done that,” she said.

Venture capitalists, she added, have a big role to play in fostering the responsible and ethical use of AI.

“The investment community has a lot at stake in this,” she said. “They have to shoulder a lot of responsibility because they’re the ones that are putting the money into these companies, that are incentivizing them to do the things that they do.”

Given that government regulation tends to lag behind advances in technology, Jones said investors and tech companies need to hold themselves accountable by “using technology against itself.”

“Let’s use certifications through AI that can read AI programs and validate if they’re doing what they’re supposed to be doing,” she said. “Let’s use DPIAs – data privacy impact assessments – to show companies what’s actually happening under the hood; what applications they are using; what storage is being used even through Amazon and Google; where is this data going and how is it being processed.

“That kind of stuff has to happen,” she said, “because laws don’t change fast and so we have to keep the tech community accountable for the things that they are doing, and the only ones that can do that are the companies that have the money to influence that behaviour.”

Reza Bosagh Zadeh, founder and CEO at machine-vision startup Matroid and an Adjunct Professor at Stanford University, said one way to influence the ethical use of a technology is to retain control of that technology.

He pointed to the pioneering “deep learning” technology developed by British-Canadian researcher Geoffrey Hinton and two of his graduate students at the University of Toronto, who sold their startup to Google in 2013. Since then, deep learning has been at the heart of much of the AI technology developed by the big-tech players in Silicon Valley.

“The biggest contribution to the field of artificial intelligence is a Canadian invention that was exported,” Zadeh said. “Deep learning came out of the University of Toronto, Geoff Hinton’s lab, and then the capitalist incentive basically exported the whole thing to Silicon Valley. And over there, people feel they have control over it because they do, and it’s sad because it was made here.”

“If you make the technology, you would hope that you could imbue some of your ethics onto it,” he added. “But in this case, even that’s been taken away from Canada.”

As the Critical Tech Talk wrapped up, O’Gorman asked the panel for suggestions on how to empower and incentivize the current generation of young tech students and workers to speak out in support of ethical AI.

“We need to rejig our value systems and think about what we value as a society,” said Lubin. “To me, it is about how we go about rebuilding the things that we’ve broken, and trust is the primary thing that we’ve broken.”