Just weeks ago, Claude 4.6 worked with the Firefox team to uncover 22 real vulnerabilities in production software. That was notable. What came next was more so.
Anthropic reportedly held back their next model, Claude Mythos, not because it underperformed, but because it was finding thousands of previously undiscovered vulnerabilities across operating systems and web browsers, and that gave them pause. Whether or not the full story is as dramatic as reported, a model capable of doing this is coming soon, if it is not already here.
AI is no longer just helping people write code. It is helping people break it. What once took weeks of specialist effort can increasingly be done in hours.
AI Is Changing Both Sides of Security
At the same time, we are seeing an explosion in AI-assisted development: “vibe coding”, rapid prototyping, and LLM-generated applications. The barrier to building software has dropped significantly.
That is a net positive. More people can solve problems with software. But it also means more code is being shipped with less scrutiny, less experience, and less time spent thinking through edge cases.
And that creates a tension.
Looks Can Be Deceiving
Modern software can feel safer than it is. Interfaces are polished. Flows are smooth. Branding looks professional. None of that tells you anything about what is happening underneath. Judging software security by the UI is like judging a building’s structural integrity by how nice the lobby looks.
AI didn’t just raise the ceiling for attackers. It lowered the cost of trying.
Why This Matters for Organisations
The economics of security are changing.
Vulnerabilities will be found faster. Attackers need less skill to experiment. The volume of software being produced keeps rising. That means security cannot be something checked just before release. It has to be part of how software is designed, built, reviewed, and operated.
How Biz Hub Builds for This Reality
This is something we have been investing in for years. Not because it is fashionable, but because the software risk profile keeps rising.
We use automated static analysis to continuously inspect code for quality and security issues. In the current environment, waiting for a human to notice a problem is too slow; automated tooling catches issues before they travel further down the pipeline.
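To make the idea concrete, here is a deliberately tiny sketch of the kind of check a static analyser performs: scanning source code for risky patterns without ever running it. This is a toy illustration using Python's standard `ast` module, not a description of our actual tooling, and the denylist of “risky” calls is a hypothetical example; production analysers apply hundreds of far more sophisticated rules.

```python
import ast

# Toy denylist for illustration only; real analysers cover far more patterns.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line_number, function_name) for each call to a denylisted function.

    The source is parsed, never executed, which is what makes the
    analysis 'static'.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
print(find_risky_calls(sample))  # [(1, 'eval')]
```

The point of running checks like this automatically on every change, rather than relying on a reviewer to spot the pattern, is exactly the speed argument above: the machine flags the issue the moment the code is pushed.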
We combine that with peer review, where all meaningful changes are examined by experienced developers who understand the broader context. No tool replaces the judgement of someone who knows the system and can ask whether something feels right.
We are also increasing our use of AI-assisted review to help identify risky patterns, missed edge cases, and issues humans may overlook under pressure. AI can review tirelessly. Humans provide judgement. Together they cover more ground than either does alone.
We also favour a stable, proven enterprise software foundation that we know deeply and continuously improve. Given how fast the threat landscape is moving, building on whatever framework is popular this year introduces risk that compounds over time. Consistency and deep familiarity give our clients safer upgrade paths and fewer surprises.
Finally, we make use of periodic penetration testing and external review to challenge assumptions and test systems the way attackers would.
The Real Question Now
AI hasn’t just improved how we build software. It has changed the economics of security. Finding vulnerabilities is cheaper and easier than ever.
The question is no longer whether vulnerabilities exist. It is how quickly you can detect them, respond to them, and limit the damage when they do.
If you are modernising a legacy platform or building new software, the security expectations your users and stakeholders bring have shifted. Get in touch to talk about how we can help you build systems that are maintainable, scalable, and secure by design.