There may be no greater understatement than saying AI applications are broad in scope, complex in their implications, and significant in their potential impact. Dr. Brian McQuinn, an associate professor in the Department of Politics and International Studies at the University of Regina, co-director of the Centre for Artificial Intelligence, Data, and Conflict (CAIDAC), and CEO of CIPHER AI, says artificial intelligence (AI) is both absurd and profound in its application.
“It is absurd in that most of its applications involve producing silly videos with your friends doing silly things that you can send around on Facebook and TikTok,” said McQuinn. “The profound applications are autonomous vehicles that are able to target and destroy people and military assets with precision and no human contact, no human involvement, no human in the loop.”
In our research, 83% of disinformation is passed on by average Canadians. So, we bear the greatest responsibility for the quality of our information ecosystem in Canada — Dr. Brian McQuinn, Associate Professor, Department of Politics and International Studies; Co-Director, CAIDAC, University of Regina, CEO of CIPHER AI
Welcome to Part 2 of a three-part feature series covering the AI Futures conference, held in March and organized by the Dr. John Archer Library at the U of R. While Part 1 explored the literacy needed to understand AI, this instalment turns to the immediate AI-related threats that demand attention.
“We didn’t let commercial entities develop nuclear weapons, but we are letting them develop AI.” U of R researcher Dr. Brian McQuinn and his team are fighting the global surge in disinformation with CIPHER AI, an AI-powered system built to protect Canada’s information ecosystem. Photo Credit: University Communications and Marketing
Fighting disinformation
According to McQuinn, foreign actors are already targeting Canada with disinformation. His research team’s report, “The Enemy of My Enemy,” examined how these operations are designed to polarize Canadians against one another.
These tactics include fabricated news stories, manipulated videos, and networks of fake accounts that flood social feeds with content designed to inflame divisions on issues like pipelines, immigration, and federal-provincial tensions. Over time, this erodes trust in institutions and makes ordinary disagreements harder to resolve. And now, those actors are using generative AI to scale up the quality, quantity, and variety of their output.
“You simply can’t deal with that level of volume without yourself having AI systems or automated systems,” McQuinn said.
That is why McQuinn, who also serves as CEO of CIPHER AI, built the system with his team at CAIDAC. Think of it as a research assistant for analysts who track disinformation. It does not flag fake content on its own. It reads thousands of articles a day, organizes what it finds, and hands the results to humans who decide what matters.
The numbers show why that helps. A human analyst can read 30 or 40 articles a day. CIPHER AI can process three to four thousand. A report that once took three or four days now takes 30 to 40 minutes. That is the difference between catching a foreign disinformation campaign while it is still spreading and reading about its damage after the fact.
But institutional tools like CIPHER AI are only one side of the equation. McQuinn’s research shows the greatest vulnerability in Canada’s information ecosystem is not foreign actors or algorithms. It is us.
“In our research, 83% of disinformation is passed on by average Canadians,” McQuinn said. “So, we bear the greatest responsibility for the quality of our information ecosystem in Canada.”
When he asks his first-year students where they get most of their news, 50 per cent say TikTok. His advice to them: pay attention to what you want to believe.
“If it’s something that you want to be true, and a story you want to be true, ‘Aha! See, I’ve always known this,’ those are the stories you should really be careful of.”
While CIPHER AI fights disinformation at the institutional level, the same crisis plays out daily at the personal level—and the people least equipped to handle it are children.
Deepfakes, children, and the loss of trust
Seeing is no longer believing. After being a target of identity theft for over a decade, U of R ed-tech professor Dr. Alec Couros warns that with hyper-realistic AI deepfakes flooding our feeds, we can no longer rely on tech companies to protect our digital reality. Photo Credit: University Communications and Marketing
Halfway through a Zoom conversation about artificial intelligence and education, Dr. Alec Couros, a professor of educational technology and media in the U of R’s Faculty of Education, stopped speaking and, as part of a live demonstration, applied a filter that digitally superimposed someone else’s face onto his own. Within about three seconds, the switch was complete. The AI-generated face was convincing, the voice was still his, and the person on the other end of the call had no idea until Couros told him.
“You could be having a conversation with someone else entirely and not even know it,” said Couros.
Couros has been a target of identity theft since 2007. Thousands of women have been defrauded by scammers using his photos. Now, he says, scammers no longer need real photos at all. Anyone can create a virtual influencer from scratch with a single click.
During his conversation for this series, Couros walked through the numbers that support his concern: a UNICEF paper found that 1.2 million children across 11 countries reported having their images manipulated into deepfakes. Furthermore, a New York Times report found that more than 40 per cent of YouTube Shorts recommended to children are AI-generated, aligning with a Europol estimate that 90 per cent of all online content may be synthetically generated by the end of 2026.
Poisoned data: the hidden threat to public policy
Deepfakes fool people, but poisoned data fools machines. U of R public policy expert Dr. Justin Longo warns that the illusion of AI objectivity is a hidden threat to our democratic institutions and civic decision-making. Photo Credit: Dr. Justin Longo
The threat is not only what people see. It is also what machines read.
Dr. Justin Longo, an associate professor at the U of R’s Johnson Shoyama Graduate School of Public Policy, told the story of a journalist who wrote a single fictitious blog post claiming to be the world’s best at eating hot dogs. A week later, generative AI tools were citing it as fact.
“We might call that poisoning the data set,” Longo said. “Advocacy organizations are going to get good at this.”
If an advocacy group wants to shape how AI-assisted policy analysis frames an issue, Longo said, they just need to write 10 blog posts and distribute them across different platforms. A policy analyst using generative AI to draft a briefing note could draw from that poisoned information without ever knowing it was planted.
“It has the illusion, because it came from a computer, of being objective,” Longo said. “But these systems will only be as objective as the information that they’re drawing from.”
Predicting outbreaks: using AI to protect public health
While AI poses unprecedented threats, it also offers unprecedented protection. U of R data scientist Dr. Affan Shoukat uses artificial intelligence to track global disease outbreaks before they strike, buying public health officials the one thing they need most: time. Photo Credit: University Communications and Marketing
It would be easy to conclude that AI is only making things worse, but that is only half the story. Dr. Affan Shoukat, an assistant professor of data science in the Department of Mathematics and Statistics at the U of R, focuses his research on computational epidemiology—studying what happens when the same capacity to monitor massive volumes of data is pointed at a different problem: disease outbreaks.
Shoukat pointed to the early days of the global pandemic as a prime example, noting how early-warning AI models successfully flagged the emergence of COVID-19 in late December 2019—more than a week before the World Health Organization issued its official warning.
“AI’s greatest value in public health is that it buys us time,” Shoukat said. “It doesn’t replace traditional public health expertise. Instead, it monitors the ‘background noise’ that is indicative of an emerging threat, after which public health officials can step in.”
The tools to fight back
These four researchers, from political science, education, public policy, and public health, are all connected to the U of R. Each is working on a different facet of the same problem: the information that shapes our decisions is under threat, and the tools to fight back are already being built right here.
Coming up next, Part 3 of this series asks the question: how do communities build forward with agency, identity, and a long view of what technology should serve?
Explore programs in political science, public policy, education, and the sciences at the U of R.
Banner image: AI is rewiring our reality, but who controls the network? Part 2 of the U of R’s AI Futures series confronts the immediate threats of deepfakes, poisoned data, and global disinformation—revealing why the ultimate defence isn’t better technology. It’s us. Photo Credit: Adobe Stock Image
About the University of Regina
At the University of Regina, we believe the best way to learn is through access to world-class professors, research, and experiential learning. We are committed to the health and well-being of our more than 16,600 students and support a dynamic research community focused on evidence-based solutions to today’s most pressing challenges. Located on Treaties 4 and 6—the territories of the nêhiyawak, Anihšināpēk, Dakota, Lakota, and Nakoda peoples, and the homeland of the Michif/Métis nation—we honour our ongoing relationships with Indigenous communities and remain committed to the path of reconciliation. Our vibrant alumni community is more than 95,000 strong, enriching communities in Saskatchewan and around the globe.
Let’s go far, together.