Monday, October 13, 2025
No menu items!
Solve Education Annual Report 2024
Hi,
HomeInnovationDigital Leadership 2025This CEO Warns: GenAI is Killing Cyber Talent Before It Can Grow

This CEO Warns: GenAI is Killing Cyber Talent Before It Can Grow

“Attackers don’t need 99.9999% reliability — but defenders do.” — Ethan Seow

GenAI has quickly become the most discussed innovation in enterprise technology, but its integration into cybersecurity is already proving complex and uneven. In a keynote session at the Digital Leadership Webinar Summit 2025, Ethan Seow, CEO of Practical Cyber and Cyber Intel Training, argued that for every powerful use case of GenAI, there is an equal or greater threat being exploited by attackers.

Ethan, who was named APAC Rising Star in Cybersecurity by ISC2 in 2023, brings a distinctive interdisciplinary lens to the topic. Trained in psychiatry before transitioning into cybersecurity education and entrepreneurship, he has worked with hundreds of trainees and global experts across Singapore, Malaysia, and the United States.

His talk pulled no punches: organizations are not only unprepared for GenAI-enabled threats, they are often deploying these tools without understanding the risks.

A Perfect Tool for Attackers, a Fragile Tool for Defenders

Ethan began by reframing how generative AI is being used in real-world attacks. With open-source models like LLaMA and DeepSeek now easily downloadable and modifiable, threat actors are rapidly integrating GenAI into malware creation and social engineering attacks. One alarming trend: malware that calls out to ChatGPT or Claude mid-attack to generate new iterations of malicious code.

Social engineering has also evolved. No longer riddled with grammar errors or generic messaging, GenAI-enabled phishing emails are now contextually rich, tailored to targets, and far harder to detect. According to Perception Point’s 2024 Annual Report, social engineering attacks have increased by 1,760% since 2022 — a stat Ethan highlighted to show how fast adversaries are moving.

Social engineering attacks rose 1,760% since 2022 as GenAI capabilities spread

“It’s easier to be an attacker than a defender in this environment,” Ethan explained. “Attackers can just try, fail, and move on. Defenders don’t have that luxury.”

The Great Cybersecurity Talent Shakeup

One unexpected consequence of GenAI’s rise? Cybersecurity job losses. Major firms like Microsoft and others have laid off substantial parts of their cyber teams, banking on GenAI to replace certain functions. But Ethan argued that this is shortsighted.

“GenAI is replacing entry-level roles before we’ve solved the cybersecurity talent gap,” he said. “This means fewer people are building foundational experience, and our pipeline of future experts is thinning.”

What makes this more dangerous is the fact that many cybersecurity professionals are still unfamiliar with how LLMs work. GenAI literacy is low even among technical teams, and fear of replacement prevents upskilling. Ethan urged organizations to view GenAI not as a shortcut but as a capability that requires deep re-education.

GenAI Integration: Immature and Often Insecure

Despite its promise, Ethan cautioned that most enterprises are still not ready for large-scale GenAI adoption. Referencing a RAND Corporation study, he outlined five reasons why AI projects fail: misunderstood problem definitions, lack of quality data, weak infrastructure, tech immaturity, and unrealistic expectations.

He illustrated how poorly implemented GenAI tools — especially in the form of chatbots — can be vulnerable to prompt injections, leading to data leaks and unauthorized access.

Ethan also pointed to a Gartner report urging companies to stop treating GenAI as a magic pill. Without the right use cases and safeguards, integration can do more harm than good.

Most GenAI projects fail due to immaturity in data, infrastructure, and expectations

Software Acceleration vs. Security Maturity

One of GenAI’s most valuable applications, according to Ethan, is in software engineering. Google revealed in late 2024 that over 25% of its codebase is now AI-generated. In the hands of skilled developers, work that once took six months can now be completed in one day.

But Ethan warned that this speed often comes at the expense of security. Rapid development cycles using GenAI often bypass threat modeling, compliance checks, or architecture reviews. The result: code that ships faster but with more vulnerabilities baked in.

What Leaders Must Do Now

Ethan closed with a three-part call to action for digital leaders:

  1. Understand GenAI — Know how it works, where it fails, and how to audit its outputs.
  2. Develop Critical Thinking — Do not accept GenAI outputs at face value. Validate rigorously.
  3. Don’t Believe the Hype — Recognize where GenAI adds value and where it’s just noise.

He emphasized that GenAI is neither savior nor villain — it is a tool. And like any powerful tool, it needs skilled hands and thoughtful implementation.

“We are in the early internet stage of GenAI,” Ethan said. “The hype curve will settle. What matters now is whether your organization is building lasting capability — or just chasing trends.”


Editor’s Note:

This feature highlights Ethan Seow, CEO of Practical Cyber and co-founder of the Centre for AI Leadership, who delivered the keynote “Cybersecurity in the GenAI Era” at the Digital Leadership Webinar Summit 2025.

Read the Chinese article here.

Hilmi Hanifah
Hilmi Hanifah
Hilmi Hanifah is the editor at New in Asia, where stories meet purpose. With a knack for turning complex ideas into clear, compelling content, Hilmi helps businesses across Asia share their innovations and achievements, and gain the spotlight they deserve on the global stage.
RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisment -spot_img

Most Popular

Recent Comments