Geopolitics and Gen AI: Is a Cybersecurity Storm Brewing?

BMO

Between the rise of artificial intelligence (AI) and geopolitical tensions, could a cybersecurity storm be brewing? The better business leaders understand the forces driving an increasingly sophisticated cyber environment, the better companies can protect themselves. These were just some of the topics discussed at the “Weathering the Looming Cyber Storm” session I participated in at the 18th annual Toronto Global Forum organized by the International Economic Forum of the Americas (IEFA).


The panel, which touched on pressing cybersecurity issues facing companies and governments, included:


  • Mark Fernandes, Global Chief Information Security Officer at CAE

  • Gillian Kerr, Partner, Litigation, McCarthy Tétrault

  • April Fong, Executive Editor at The Logic (Moderator)


A growing risk to consider


The greatest risk comes from nation-states, where military groups and intelligence organizations work alongside cybercriminals to deploy malicious software for political and financial gains. Ports, water facilities, energy companies and even internet service providers could be at risk.


Cybersecurity will likely become a more integral geopolitical tool. In warfare, adversaries could use cyber to impact a conflict, and, indeed, we’ve already heard rumblings of nations disabling government sites and even nuclear facilities.


AI’s soon-to-be-felt impact


Generative AI, which is a type of AI that creates new content by learning from existing data patterns, will complicate the cybersecurity sector and how businesses protect themselves. Mark Fernandes from CAE noted that one of the key threats is employees putting company information into external AI programs and bad actors somehow getting that data to use for nefarious purposes. Even with enterprise versions of these solutions, someone could, quite easily, accidentally access the public version, he said. CAE has embarked on a training initiative to ensure its people are using generative AI properly – until then, it’s restricting external usage of AI sites.


A larger threat looms, however – what Fernandes calls the “weaponization of AI.” This is when bad actors use generative AI to mimic real people, such as creating deepfake videos or replicating people’s voices. While some of this is happening today, it’s not yet occurring in a significant way, he said. That could change over the next few years as this technology evolves.


Right now, threat actors prefer the path of least resistance to commit crimes, and AI is still too complicated for them. That will likely change, though.


On the defensive side, we’re finding that AI can be an effective tool to prevent fraud. For instance, in one situation, a person called our call center saying she was a new customer and the ATM she was using was not working. An AI model we built assigns each call a risk score from zero to 100, where 100 indicates fraud. In this case, the model suspected fraud, so we told this person to visit her nearest branch. A short time later, the AI model detected the voice of the same caller, except this time she claimed her cheques were locked up. We may not have caught this person without the AI.
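The workflow described above, score each incoming call, remember voices from suspicious calls, and match repeat callers against them, can be sketched roughly as follows. The threshold, function names, and voice-matching logic here are illustrative assumptions, not a description of BMO's actual model:

```python
# Minimal sketch of a call-risk scoring loop. Assumes an upstream
# model has already produced a 0-100 fraud score for the call; the
# threshold and names below are hypothetical.

FRAUD_THRESHOLD = 80  # assumed cutoff; 100 means near-certain fraud

def score_call(voiceprint, model_score, flagged_voices):
    """Return a risk score for a call, escalating repeat callers.

    If the caller's voice matches one from a previously flagged
    call, treat it as fraud regardless of the new model score.
    """
    if voiceprint in flagged_voices:
        return 100  # same voice as an earlier suspicious call
    if model_score >= FRAUD_THRESHOLD:
        flagged_voices.add(voiceprint)  # remember this voice
    return model_score

flagged = set()

# First call ("ATM not working"): model rates it high, voice is stored.
first = score_call("voice-A", 85, flagged)   # returns 85

# Second call, same voice, different story ("cheques locked up"):
# matched immediately and escalated to the maximum score.
second = score_call("voice-A", 40, flagged)  # returns 100
```

The key design point in the anecdote is the cross-call memory: a modest model score on the second call is not enough to clear a caller whose voice was already tied to a flagged interaction.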


Prepare for a potential attack


Whatever form the threat takes, now is the time to prepare your business for a potential attack. As we say in the cybersecurity world, it’s not a matter of if an attack will occur, but when. Gillian Kerr from McCarthy Tétrault’s Litigation Group pointed out that she is often brought in only after an issue arises. Instead, she advised that companies talk to lawyers as they create a security plan. She also suggested that companies identify the internal and external providers they may need to deal with a cybersecurity issue, including public relations, technical experts and others. Incidents can be messy and move quickly, she said, so put retainers with professionals in place now.


From my perspective, a lot of companies focus on developing strategies for how the technical side of a breach should be handled. While that is important, a quality team will help manage that for you. Based on the many incidents I’ve seen, one of the biggest risks from a breach is reputational – not just what could happen today from an attack, but what a hack means to your business a month or even a year from now. What sort of counterparty risk does your company have? How might employees react to a breach?


Communication is also critical, and I’ve found it’s the hardest part. Executives must have a plan for what they’re going to tell their staff about a breach and how it happened. Many employees may wonder if they still have a job or if they should come to work the next day. What if they don’t have remote access anymore? I also tell CEOs to be prepared to call their top 10 customers after a breach. And, you have to consider ahead of time how you’re going to communicate with regulators and the news media. You have to anticipate all of this before something happens – think ahead and practice.



Larry Zelvin