How AI Bots Are Changing the Game in Cybersecurity
And what you can do about it
In the cybersecurity landscape, things evolve fast, REALLY FAST, and AI has proven to be a major catalyst of that evolution. As artificial intelligence continues to transform industries, it's also becoming a powerful tool in the hands of cybercriminals.
AI-powered Computer-Using Agents (CUAs) aren't just getting smarter - they're starting to act more human than ever before. I've been tracking this trend closely, and the latest implementations prove that criminals are running out of neither options nor imagination when it comes to their malicious activities.
Let’s dive into it.
The New Kids on the Block: AI Agents That Act Like Humans
Remember when spotting a bot was easy? Those days are over. Today's AI bots don't just run scripts - they navigate websites, fill out forms, and interact with applications just like you and I would. They pause between keystrokes, move the mouse naturally, and make the occasional typo to seem more human.
This isn't science fiction anymore; it's happening now.
These CUAs are redefining how attackers approach identity-based attacks. Unlike the basic bots we've encountered and battled for years, these sophisticated agents can:
Scout your entire digital ecosystem to map out all possible entry points
Crack your login defenses by patiently testing credentials without triggering alerts
Create hidden backdoors once they're in
Find their way to your most valuable data without making noise
What's particularly troubling is that once inside, these agents establish persistence. They don't just grab what they can and run. Instead, they create alternate access methods - setting up API keys, creating shadow admin accounts, or establishing OAuth connections that let them return undetected even if the original breach is discovered.
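To make that concrete, here's a minimal sketch of what hunting for this kind of persistence can look like. It assumes newline-delimited JSON audit logs with hypothetical field names (`timestamp`, `action`, `actor`) - adapt them to whatever your IdP or cloud provider actually emits:

```python
import json
from datetime import datetime, timedelta, timezone

# Event types that commonly indicate a new persistence mechanism.
# These names are hypothetical -- map them to your real audit schema.
PERSISTENCE_EVENTS = {"api_key.created", "oauth_app.authorized", "admin_role.granted"}

def flag_persistence_events(log_path: str, window_hours: int = 24) -> list[dict]:
    """Return recent events that create new access paths (keys, OAuth grants, admin roles)."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    suspicious = []
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            # Timestamps assumed ISO-8601 with timezone info.
            ts = datetime.fromisoformat(event["timestamp"])
            if event["action"] in PERSISTENCE_EVENTS and ts >= cutoff:
                suspicious.append(event)
    return suspicious

for e in flag_persistence_events("audit.jsonl"):
    print(f'{e["timestamp"]} {e["actor"]} -> {e["action"]}')
```

Even a crude scan like this, run on a schedule, surfaces exactly the API keys and OAuth grants that attackers rely on for quiet re-entry.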
The Spam Evolution: Meet AkiraBot
While CUAs are targeting authentication systems, tools like AkiraBot are revolutionizing spam attacks. In what feels like a case of life imitating art, AkiraBot shares a name with the legendary Akira Toriyama, creator of Dragon Ball. And just like Toriyama's Goku, who kept powering up to new, previously unimaginable levels, this bot keeps evolving its capabilities beyond what security experts thought possible.
The bot has reportedly targeted over 420,000 websites, easily bypassing the CAPTCHA systems that were designed to stop automated attacks. It rotates IP addresses, mimics human typing patterns, and even simulates the slight hesitations we make when filling out forms. Traditional anti-spam measures simply weren't built for this level of sophistication.
More here: https://cyber-safety.co/cybersecurity-in-the-age-of-ai-how-akirabot-bypasses-defenses-and-floods-the-web-with-spam/
The Real Problem: The Democratization of Hacking
Perhaps the most alarming aspect of these technologies is how they're lowering the barrier to entry. You don't need to be a coding genius to deploy these tools anymore. Many come with user-friendly interfaces that let even technical novices launch sophisticated attacks.
What we're seeing now is the "consumerization" of hacking tools. Just as enterprise software eventually became accessible through user-friendly interfaces, the same is happening with attack tools. CUAs and tools like AkiraBot often feature slick graphical interfaces, step-by-step wizards, and even customer support channels on platforms like Telegram. Some even offer "attack as a service" business models where you pay a monthly subscription fee for access.
Observations from a hacking forum (visited strictly for research purposes) show how these tools are being marketed to users without a technical background. Some advertisements emphasize ease of use, promising that no coding skills are needed, while others claim their tools can be quickly mastered by anyone. The sales pitch itself highlights how accessible these capabilities have become to a much wider range of users.
The economics are troubling too. A few years ago, launching a significant attack campaign might have cost thousands of dollars in infrastructure and required a team of specialists. Today, I've seen subscription packages for AI-powered attack tools priced at less than $100 per month. The return on investment for criminals is staggering – one successful data breach or spam campaign can pay for years of these subscriptions.
This democratization has profound implications for the threat landscape. Instead of a relatively small number of highly skilled attackers, organizations now face a vast army of lower-skilled operators armed with increasingly powerful tools: they might lack training, but the sheer VOLUME and the power of their tools make them dangerous.
The criminal ecosystem is adapting to this new reality too. I've seen evidence of specialization, where different groups focus on specific parts of the attack chain. Some sell access to compromised accounts, others specialize in data exfiltration, and still others focus on monetization. This "crime as a service" model means that even if attackers lack skills in certain areas, they can simply purchase those capabilities from specialists.
What is staggering, though, isn't the sheer number of attacks per se, but the fact that they're getting better every year. The interfaces are becoming more intuitive. The cost is dropping. And the attackers are learning from each success and failure.
We're still in the early stages of this trend, and it's only going to accelerate.
Real-World Protection Strategies
So what can you do to protect yourself and your organization? Here are practical steps I've seen work in the field:
For Individual Users:
Use passkeys instead of passwords whenever possible. Because passkeys are device-bound (tied to a specific, trusted device), the private key stays on your device and is never shared online; they're also phishing-resistant and much harder for AI agents to steal. (A simplified sketch of the underlying flow follows this list.)
Enable notification alerts for all account logins. I have mine set to text me whenever my accounts are accessed from a new device.
Regularly review your connected applications and OAuth permissions. I do this quarterly and am always surprised by the number of apps that still have access to my accounts.
Be suspicious of any form of communication that seems generic or asks for sensitive information. I recently received a very convincing "support" message that turned out to be generated by an AI spam tool.
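To see why passkeys are so hard for agents to steal, here's a deliberately simplified sketch of the challenge-response flow they're built on, using the `cryptography` package. Real WebAuthn adds origin binding, attestation, and signature counters - this just shows that only a signature, never the private key, crosses the wire:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Device side: the private key is generated and stored on the device, never sent anywhere.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # only this public half is registered with the server

# Server side: issue a fresh random challenge for each login attempt.
challenge = os.urandom(32)

# Device side: prove possession of the key by signing the challenge.
signature = private_key.sign(challenge)

# Server side: verify the signature against the registered public key.
try:
    public_key.verify(signature, challenge)
    print("Login OK: device proved key possession without revealing the key")
except InvalidSignature:
    print("Login rejected")
```

There's no reusable secret for a phishing page or an AI agent to harvest: each login signs a one-time challenge, and the private key never leaves the device.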
For Organizations:
Implement risk-based authentication that continually evaluates user behavior, combining MFA, IAM platforms, and behavioral analytics tools (monitoring login times, device types, IP addresses, and geographical location). A toy risk-scoring sketch follows this list.
Deploy honeytokens and decoy data. These are fake credentials or data points that, when accessed, trigger alerts. They're incredibly effective against automated systems that don't know what's real and what's a trap (see the honeytoken sketch below).
Conduct regular access reviews across all SaaS applications. Continuously review API access, enforce solid IAM policies, and monitor carefully for privilege escalation.
Test your forms and contact points with realistic AI-generated content. If your security team can't distinguish between AI-generated and human queries, neither can your staff.
Move beyond simple CAPTCHA to behavioral analysis - for example, challenges that analyze mouse movements and typing patterns (see the keystroke-timing sketch below).
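First, a toy version of risk-based authentication. The signals and weights below are illustrative assumptions, not tuned values - commercial IAM platforms learn these from your actual traffic:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    known_device: bool
    usual_country: bool
    usual_hours: bool       # e.g. the user normally logs in during working hours
    ip_reputation_bad: bool

def risk_score(a: LoginAttempt) -> int:
    """Toy additive score; weights here are placeholder assumptions."""
    score = 0
    score += 0 if a.known_device else 40
    score += 0 if a.usual_country else 30
    score += 0 if a.usual_hours else 10
    score += 50 if a.ip_reputation_bad else 0
    return score

def decide(a: LoginAttempt) -> str:
    s = risk_score(a)
    if s >= 70:
        return "block"        # or demand a fresh, phishing-resistant factor
    if s >= 30:
        return "step-up MFA"
    return "allow"

print(decide(LoginAttempt(known_device=False, usual_country=True,
                          usual_hours=False, ip_reputation_bad=False)))  # -> step-up MFA
```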
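Next, the honeytoken idea in its simplest possible form. The decoy names and the `alert` stand-in are placeholders for whatever fits your environment:

```python
# Minimal honeytoken idea: plant credentials that no legitimate user or
# application should ever touch, and alert the moment anyone tries them.
HONEYTOKENS = {"svc-backup-legacy", "admin_old", "AKIAFAKEDECOYKEY0001"}

def alert(message: str) -> None:
    print(message)  # stand-in for your real alerting pipeline (SIEM, pager, ...)

def check_login(username: str, source_ip: str) -> None:
    if username in HONEYTOKENS:
        # A real deployment would open an incident, not just print.
        alert(f"HONEYTOKEN TRIPPED: {username!r} tried from {source_ip}")

check_login("admin_old", "203.0.113.7")
```

The beauty of this control is its asymmetry: it costs you almost nothing, but an automated agent sweeping through credentials has no way to tell the trap from the real thing.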
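And a first-pass keystroke-timing check. Fair warning: this catches naive bots that type instantly or with fixed delays; the sophisticated agents described above deliberately add jitter, so treat it as one signal among many. The thresholds are illustrative assumptions:

```python
import statistics

def looks_scripted(key_timestamps_ms: list[float]) -> bool:
    """Flag typing whose inter-key intervals are suspiciously fast or uniform.

    Humans are noisy, so interval variance is high. Naive bots either
    type instantly or add fixed delays between keystrokes.
    """
    intervals = [b - a for a, b in zip(key_timestamps_ms, key_timestamps_ms[1:])]
    if not intervals:
        return False
    mean = statistics.mean(intervals)
    stdev = statistics.pstdev(intervals)
    too_fast = mean < 30                                # sub-30 ms per key is superhuman
    too_regular = mean > 0 and (stdev / mean) < 0.15    # low coefficient of variation
    return too_fast or too_regular

print(looks_scripted([0, 100, 200, 300, 400]))  # perfectly regular -> True
print(looks_scripted([0, 180, 260, 490, 575]))  # human-ish jitter -> False
```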
Advanced Defensive Tactics
For those ready to take their defenses to the next level:
Implement device fingerprinting that goes beyond cookies. Monitoring how a browser renders fonts and executes JavaScript can help flag automated agents behaving suspiciously. (A simple server-side companion check follows this list.)
Create authentication flows with deliberate inconsistencies that trip up automated systems. For example, occasionally changing the order of form fields or adding unexpected verification steps (sketched below).
Develop internal threat hunting programs focused specifically on finding CUA behavior. Look for users who never make mistakes, who navigate too efficiently, or who access systems in repetitive patterns (see the session-hunting sketch below).
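Here's a deliberately cheap server-side companion to browser fingerprinting: a few header heuristics that catch lazy automation. Serious fingerprinting runs JavaScript probes in the browser (canvas and font rendering, navigator.webdriver, and so on); the headers below are standard, but treat the heuristics as assumptions to validate against your own traffic:

```python
def headless_signals(headers: dict[str, str]) -> list[str]:
    """Collect cheap hints that a request comes from an automation framework."""
    ua = headers.get("User-Agent", "")
    signals = []
    if "HeadlessChrome" in ua or "PhantomJS" in ua:
        signals.append("automation user-agent")
    if not ua:
        signals.append("missing User-Agent")
    if not headers.get("Accept-Language"):
        signals.append("missing Accept-Language")  # often absent in headless setups
    return signals

print(headless_signals({"User-Agent": "Mozilla/5.0 ... HeadlessChrome/124.0"}))
```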
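The deliberate-inconsistency trick can be as small as deterministically shuffling form-field order per session, so a script that assumes a fixed layout fills in the wrong fields while a human barely notices. A sketch (the secret and field names are placeholders):

```python
import hashlib
import hmac
import random

SECRET = b"rotate-me"  # placeholder server-side secret
FIELDS = ["email", "password", "otp"]

def form_for_session(session_id: str) -> list[str]:
    """Deterministically shuffle field order per session.

    Same session always gets the same layout (so the page stays stable),
    but each new session likely gets a different one.
    """
    seed = hmac.new(SECRET, session_id.encode(), hashlib.sha256).digest()
    rng = random.Random(seed)
    order = FIELDS[:]
    rng.shuffle(order)
    return order

print(form_for_session("sess-123"))
```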
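Finally, a starting point for hunting CUA-like sessions: flag accounts whose sessions are implausibly error-free and follow the exact same path every time. The session structure and the 90% threshold are assumptions to adapt to your own telemetry:

```python
from collections import Counter

def hunt_cua_sessions(sessions: list[dict]) -> list[str]:
    """Flag users whose sessions are implausibly clean and repetitive.

    Each session dict is assumed to look like:
      {"user": "alice", "pages": ["login", "reports", "export"], "errors": 0}
    """
    by_user: dict[str, list[dict]] = {}
    for s in sessions:
        by_user.setdefault(s["user"], []).append(s)

    flagged = []
    for user, user_sessions in by_user.items():
        if len(user_sessions) < 3:
            continue  # not enough history to judge
        paths = Counter(tuple(s["pages"]) for s in user_sessions)
        identical_share = paths.most_common(1)[0][1] / len(user_sessions)
        never_errs = all(s["errors"] == 0 for s in user_sessions)
        # Humans wander and fat-finger; agents replay the same efficient path.
        if never_errs and identical_share > 0.9:
            flagged.append(user)
    return flagged

sessions = [
    {"user": "svc-report", "pages": ["login", "reports", "export"], "errors": 0},
    {"user": "svc-report", "pages": ["login", "reports", "export"], "errors": 0},
    {"user": "svc-report", "pages": ["login", "reports", "export"], "errors": 0},
]
print(hunt_cua_sessions(sessions))  # -> ['svc-report']
```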
The Path Forward
The reality is that we're entering a new phase in the cybersecurity arms race. As AI systems continue to advance, the line between human and automated activity will become increasingly blurred. But that doesn't mean we're helpless.
By combining technical controls with human awareness, implementing defense-in-depth strategies, and staying informed about emerging threats, we can adapt to this changing landscape. The organizations that will thrive in this environment aren't necessarily those with the biggest security budgets, but those that approach security as a continuous process of adaptation and learning.
The AI agents are getting better every day - so must we.
Let's stay vigilant, share what we learn, and keep evolving our defenses. After all, there's still one thing even the most advanced AI can't perfectly replicate: the creativity and adaptability of security professionals who are passionate about protecting their organizations.
Remember: in this new world, security isn't just about having the right tools - it's about building a culture of awareness and resilience that can weather whatever comes next.
For further reading and insights:
https://www.sentinelone.com/labs/akirabot-ai-powered-bot-bypasses-captchas-spams-websites-at-scale/
https://cyber-safety.co/cybersecurity-in-the-age-of-ai-how-akirabot-bypasses-defenses-and-floods-the-web-with-spam/
https://thehackernews.com/2025/04/akirabot-targets-420000-sites-with.html