Artificial intelligence is reshaping the cybersecurity landscape, delivering powerful defensive tools while paving the way for increasingly sophisticated cyberattacks, warns Anna Collard, Senior Vice President of Content Strategy and Evangelist at KnowBe4 Africa.
Artificial intelligence (AI) is transforming daily life, but it’s also redefining the realms of cybersecurity and cybercrime. As businesses harness AI to bolster their defences, cybercriminals are leveraging the same technology to refine and automate their attacks.
“The broader the scope of AI through integrations and automation, the greater the potential risk of it turning malicious,” says Anna Collard, Senior Vice President of Content Strategy and Evangelist at KnowBe4 Africa.
AI agents—autonomous systems capable of executing complex tasks with minimal human input—are revolutionising cyberattacks. “These advancements don’t just enhance cybercriminals’ tactics; they could fundamentally alter the cybersecurity battlefield,” Collard explains.
Unlike basic chatbots, these agents can plan, execute, and adapt strategies in real time. Their capabilities enable cybercriminals to scale up the scope and efficiency of their attacks, posing an unprecedented challenge to defenders.
Cybercriminals are exploiting AI in multiple ways: perfecting phishing attacks, enabling identity theft through deepfakes, and orchestrating cognitive manipulation campaigns.
With generative AI models, phishing emails are now more convincing, tailored to victims, and free of grammatical errors, allowing scams to adapt automatically to targets’ online behaviour.
Deepfakes and synthetic audio, meanwhile, make it easier to impersonate corporate leaders, leading to major frauds—like the 2024 Arup case, where an employee was deceived into transferring $25 million.
AI has also become a formidable tool for cognitive manipulation, generating sophisticated disinformation campaigns that sway public opinion and threaten democratic stability.
A necessary response: AI in cybersecurity
In response, cybersecurity experts are developing AI-driven tools to detect and thwart these threats. Real-time threat detection systems analyse suspicious network behaviour and flag anomalies, alerting security teams promptly.
AI-powered platforms can process vast datasets to identify subtle attack patterns. Efforts to combat deepfakes rely on biometric technologies to spot falsified audio and video, though real-time detection remains a challenge as the technology evolves rapidly.
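The behavioural baselining described above can be sketched in miniature. The sketch below is a toy illustration, not any vendor's actual detection logic: it flags minutes whose request volume deviates sharply from the statistical baseline, using an illustrative traffic series and a z-score threshold that are assumptions chosen for the example.

```python
from statistics import mean, stdev

def flag_anomalies(requests_per_minute, threshold=2.5):
    """Return indices of minutes whose request volume is anomalous.

    A z-score above `threshold` marks the minute as suspicious -- a
    simplified stand-in for the network-behaviour baselining that
    real-time detection systems perform at far larger scale.
    """
    mu = mean(requests_per_minute)
    sigma = stdev(requests_per_minute)
    if sigma == 0:  # perfectly flat traffic: nothing to flag
        return []
    return [i for i, r in enumerate(requests_per_minute)
            if abs(r - mu) / sigma > threshold]

# Illustrative traffic: a steady baseline with one sudden spike.
traffic = [120, 118, 125, 122, 119, 900, 121, 123, 117, 120]
print(flag_anomalies(traffic))  # → [5]
```

Production systems replace the single z-score with learned models over many signals (logins, data volumes, process activity), but the principle is the same: establish what normal looks like, then surface deviations for the security team.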
User training is another critical pillar: AI-based attack simulators help employees recognise phishing attempts and digital manipulations, bolstering corporate defences against these emerging risks.
While AI offers opportunities to enhance cybersecurity, it also introduces new challenges. Poor integration or a lack of ethical governance could leave these tools vulnerable to malicious exploitation. “The more AI tools we deploy, the more crucial it becomes to enforce rigorous oversight and robust governance mechanisms,” Collard cautions.
In this race between cybercriminals and security experts, businesses must not only adopt AI for protection but also anticipate the next wave of AI-driven threats.
ARD/te/lb/as/APA