After spending the last 20+ years in cybersecurity helping companies scale, I've seen attack methods evolve in creative ways. But Kevin Mandia's prediction that AI-powered cyberattacks were a year away isn't just on track; the data shows we are already there.
Numbers don’t lie
Last week, Kaspersky released its 2024 figures: more than 3 billion malware attacks globally, with defenders detecting an average of 467,000 malicious files per day. Trojan detections jumped 33% year over year, mobile financial threats doubled, and here's the kicker: 45% of passwords can be cracked within a minute.
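To put that last number in perspective, here is a back-of-the-envelope sketch in Python. The guess rate is an assumption (roughly one modern GPU against a fast, unsalted hash); real-world rates vary by orders of magnitude with the algorithm and hardware.

```python
# Worst-case exhaustive-search time for a password, assuming a guess rate
# of 1e10/sec (a rough figure for one GPU against a fast, unsalted hash).
GUESSES_PER_SECOND = 1e10

def seconds_to_crack(alphabet_size: int, length: int) -> float:
    """Time to try every password of the given length and alphabet."""
    return alphabet_size ** length / GUESSES_PER_SECOND

for desc, alphabet, length in [
    ("8 lowercase letters", 26, 8),
    ("8 mixed-case letters + digits", 62, 8),
    ("12 mixed-case letters + digits", 62, 12),
]:
    print(f"{desc}: {seconds_to_crack(alphabet, length):,.0f} seconds")
```

Under these assumptions, an eight-character lowercase password falls in about 21 seconds, while a 12-character mixed-case password with digits holds out for roughly 10,000 years. That gap is how nearly half of real-world passwords end up crackable within a minute.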
But the volume isn't the whole story. The nature of the threats is fundamentally shifting as AI becomes a weapon.
It's already happening. Here's the proof
Microsoft and OpenAI confirmed what many of us suspected: nation-state actors are already using AI for cyberattacks. We are talking about the big players: Russia's Fancy Bear using LLMs for intelligence gathering on satellite communications and radar technologies; Chinese groups such as Charcoal Typhoon generating social engineering materials in multiple languages and supporting post-compromise activities; Iran's Crimson Sandstorm crafting phishing emails; and North Korea's Emerald Sleet researching publicly reported vulnerabilities and experts on the country's nuclear program.
What's more, Kaspersky researchers are now finding malicious AI models hosted on public repositories. Cybercriminals are using AI to generate phishing materials, develop malware and run deepfake-based social engineering attacks. Researchers are also tracking exploitation of LLM-specific vulnerabilities, AI supply chain attacks and "shadow AI": unsanctioned employee use of AI tools that leaks sensitive data.
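To make the shadow AI risk concrete, here is a minimal detection sketch. The proxy-log format, the endpoint list and the sanctioned set are all illustrative assumptions, not any particular product's API or any organization's real policy.

```python
import re

# Hosts the organization has approved for AI use (illustrative).
SANCTIONED = {"api.openai.com"}

# Public LLM API endpoints to watch for in outbound traffic (illustrative).
LLM_HOSTS = re.compile(
    r"dst=(api\.openai\.com|api\.anthropic\.com|generativelanguage\.googleapis\.com)"
)

def flag_shadow_ai(proxy_log_lines):
    """Return log lines showing traffic to unsanctioned LLM endpoints."""
    findings = []
    for line in proxy_log_lines:
        match = LLM_HOSTS.search(line)
        if match and match.group(1) not in SANCTIONED:
            findings.append(line)  # candidate shadow-AI usage for review
    return findings

logs = [
    "2024-05-01 10:02 user=alice dst=api.openai.com bytes=1832",
    "2024-05-01 10:05 user=bob dst=api.anthropic.com bytes=20482",
]
print(flag_shadow_ai(logs))  # flags bob's traffic to an unsanctioned endpoint
```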
But this is just the beginning
What we are seeing now is AI helping attackers scale operations and translate malicious code into languages and architectures they were not previously proficient in. If a nation-state has developed genuinely novel AI-powered capabilities, we won't find out until it's too late.
We are approaching the era of purpose-built autonomous cyber weapons. This is not your typical script-kiddie attack; we are talking about AI agents that can conduct reconnaissance, identify vulnerabilities and carry out attacks without any human in the loop.
The challenge goes beyond faster attacks. These autonomous systems do not reliably distinguish between legitimate military targets and civilian infrastructure, a failure of what security researchers call the principle of distinction. When an AI weapon targets a power grid, it cannot tell the difference between a military communications node and the hospital next door.
We need global rules now
This calls for treaties and global agreements on the scale of nuclear arms control regimes. Right now, essentially no international framework governs AI weapons. We already have three tiers of autonomous weapon systems in development: human-supervised systems, semi-autonomous systems that engage pre-selected targets, and fully autonomous systems that independently select and engage targets.
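Here is a minimal sketch of what those three tiers mean in practice for human control. The types and the policy function are purely illustrative, not drawn from any real weapons framework.

```python
from enum import Enum, auto

class AutonomyTier(Enum):
    HUMAN_SUPERVISED = auto()   # a human approves every engagement
    SEMI_AUTONOMOUS = auto()    # engages only pre-selected targets
    FULLY_AUTONOMOUS = auto()   # selects and engages targets on its own

def requires_human_approval(tier: AutonomyTier, target_preselected: bool) -> bool:
    """Whether a human must sign off before the system acts."""
    if tier is AutonomyTier.HUMAN_SUPERVISED:
        return True
    if tier is AutonomyTier.SEMI_AUTONOMOUS:
        return not target_preselected  # anything off-list goes back to a human
    return False  # fully autonomous: no human in the loop at all

print(requires_human_approval(AutonomyTier.FULLY_AUTONOMOUS, False))  # False
```

That final `return False` is the whole problem: the third tier removes the one check the other two preserve, and no treaty currently requires that check to exist.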
The scary part? Many of these systems can be hijacked. There is no such thing as an autonomous system that cannot be hacked, and the risk of non-state actors seizing control of them is real.
Fighting fire with fire
Several cybersecurity companies are building new ways to defend against such attacks. Take the AI SOC analysts from companies like Dropzone AI, which today enable teams to achieve 100% alert investigation coverage, closing a huge gap in security operations. Or companies like Natoma, which are building solutions to discover, monitor, secure and govern AI agents in the enterprise.
The key is fighting fire with fire, or in this case, fighting AI with AI.
Protecting against the current and future state of cyberattacks requires a next-generation SOC (security operations center) that combines AI automation with human expertise. These systems can analyze attack patterns at machine speed, automatically correlate threats across multiple vectors, and respond to incidents faster than any human team could manage. They are not replacing human analysts; they are augmenting them with the capabilities they need.
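Here is a minimal sketch of that human-augmenting triage loop, with a keyword heuristic standing in for the ML/LLM scoring model a real AI SOC product would use. Names and thresholds are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str       # e.g. "EDR", "email gateway", "cloud audit log"
    entity: str       # host, user or IP the alert concerns
    description: str

def score_alert(alert: Alert) -> float:
    """Stand-in for an ML/LLM triage model returning risk in [0, 1]."""
    hot_words = ("credential", "lateral", "exfiltration", "ransomware")
    hits = sum(w in alert.description.lower() for w in hot_words)
    return min(1.0, 0.3 + 0.25 * hits)

def triage(alerts: list[Alert], escalate_at: float = 0.8) -> None:
    # Correlate alerts touching the same entity across vectors, then
    # escalate only risky clusters; the AI closes the long tail.
    by_entity: dict[str, list[Alert]] = {}
    for alert in alerts:
        by_entity.setdefault(alert.entity, []).append(alert)
    for entity, cluster in by_entity.items():
        risk = max(score_alert(a) for a in cluster)
        if risk >= escalate_at or len(cluster) > 1:
            print(f"ESCALATE to analyst: {entity} "
                  f"(risk {risk:.2f}, {len(cluster)} correlated alerts)")
        else:
            print(f"auto-close: {entity} (risk {risk:.2f})")

triage([
    Alert("EDR", "host-42", "possible credential dumping"),
    Alert("cloud audit log", "host-42", "unusual lateral movement"),
    Alert("email gateway", "bob", "newsletter flagged by filter"),
])
```

The design point is in the last branch: the model handles the flood of low-risk noise on its own, and anything correlated or high-risk still lands in front of a human analyst.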
The stakes could not be higher
What makes this different from previous cyber evolutions is the potential for mass casualties. Autonomous cyber weapons targeting critical infrastructure, hospitals, power grids and transport systems could cause physical damage on an unprecedented scale. We are no longer just talking about data breaches; we are talking about AI systems that can literally endanger lives.
The preparation window is closing rapidly. Mandia's one-year timeline looks optimistic when you consider that criminal organizations are already experimenting with AI-enhanced attack tooling built on less-controlled AI models, not the safety-focused ones from OpenAI or Anthropic.
The bottom line
Augmenting security teams with AI agents is not just the future; it's now. AI will not replace human defenders; it will be their organizations' 24/7 partner. These systems can monitor threats around the clock, process threat intelligence at scale, and respond to attacks in milliseconds.
But this partnership model only works if we start building it now. Every day we delay gives adversaries more time to develop autonomous offensive capabilities while our defenses remain largely human-powered.
The question is not whether AI-powered cyberattacks will come; it's whether we will have AI-powered defenses ready to meet them. The race is on, and honestly, we are already behind.
Douglas Gota has spent more than 20 years helping cybersecurity companies scale, including Kaspersky, PW's Express and Sophos, and has backed startups as a cybersecurity investor. Today, he is the founder and chief AI officer of DG Cubic, an agentic AI growth marketing agency that helps cybersecurity and defense companies achieve rapid growth and category dominance through AI-driven strategy.