How Google Is Transforming AI for Cyber Defence


Meeval Kuriakose

Cyber Security Advisor

Google’s latest updates represent more than routine changes. They mark a shift in cyber defence, where AI moves from a support role to actively hunting and stopping threats as they happen. This change challenges how security teams think about protecting systems and responding to attacks.

At iConnect, where we manage security operations across a diverse range of industries, the focus is always on one key question: what will genuinely strengthen our ability to respond faster and stay ahead, not just stay afloat?

The Rise of Autonomous Defence

Google’s autonomous agent, Big Sleep, identified a critical zero-day vulnerability in SQLite, one that was already active in the wild. Attackers had begun to take note. Security teams had not. For me, the significance wasn’t just that the threat was spotted. It was that the system found it without waiting for instruction.

Big Sleep didn’t solely rely on pre-fed signatures or being explicitly told where to look. It moved with purpose, proactively scanning and analyzing, often informed by broader threat intelligence, to discover and flag vulnerabilities. It did so before the window for action closed. That’s the part I find most important. We have reached a stage where an AI system can perform autonomous threat hunting at scale, across complex codebases, and deliver actionable alerts ahead of time.
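Big Sleep's LLM-driven analysis is far beyond anything shown here, but the shift it represents, from matching known-exploit signatures to proactively scanning code for risky constructs, can be illustrated with a toy static scanner. Everything below (the C snippet, the heuristic list) is hypothetical, for illustration only:

```python
import re

# Hypothetical C snippet to scan; a toy stand-in for a large codebase.
SOURCE = """
char buf[16];
strcpy(buf, user_input);      /* unbounded copy */
snprintf(buf, sizeof buf, "%s", user_input);
"""

# Crude heuristics: flag calls that commonly lead to memory-safety bugs.
# This is nothing like Big Sleep's autonomous analysis; it only contrasts
# scanning code proactively with waiting for a known attack signature.
RISKY_CALLS = {
    "strcpy": "unbounded string copy",
    "gets": "unbounded read into buffer",
    "sprintf": "unbounded format write",
}

def scan(source):
    """Return (line_number, call, reason) for each risky call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for call, reason in RISKY_CALLS.items():
            if re.search(rf"\b{call}\s*\(", line):
                findings.append((lineno, call, reason))
    return findings

for lineno, call, reason in scan(SOURCE):
    print(f"line {lineno}: {call} -- {reason}")
```

Note that the bounded `snprintf` call is not flagged: the point of proactive scanning is to surface the genuinely dangerous patterns before an attacker does, not to drown analysts in noise.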

Google is now applying this model across open-source software projects. That’s not a marketing decision; it’s a security necessity, because most enterprise systems run on open-source components. If we wait to fix those only after breaches happen, we’ve already lost the advantage.

Incident Response Needs Less Guesswork, More Context

The update to Timesketch was, in my view, long overdue. Most of us who work on incident response know the time wasted trying to piece together forensic timelines from noisy logs. The upgraded Timesketch, now powered by Google’s Sec-Gemini model, doesn’t just reduce manual work. It helps the analyst think clearly.

It’s not about automation for the sake of efficiency. It’s about guidance. Timesketch now applies reasoning to sort through log data, point to likely root causes, and reduce the delay between discovery and resolution. That’s valuable when you’re managing multiple incidents and need to prioritise the one that poses real risk.
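Timesketch's internals aren't reproduced here, but the core chore it takes off the analyst's plate is familiar: merging logs with inconsistent timestamp formats into one ordered forensic timeline. A minimal sketch, with entirely hypothetical log data:

```python
from datetime import datetime, timezone

# Hypothetical raw events from two sources with different timestamp formats.
auth_log = [
    {"ts": "2025-07-01T09:14:02Z", "msg": "failed login for admin from 10.0.0.7"},
    {"ts": "2025-07-01T09:14:45Z", "msg": "successful login for admin from 10.0.0.7"},
]
proxy_log = [
    {"time": "01/Jul/2025 09:15:10", "msg": "outbound POST to unknown-host.example"},
]

def normalize(entry, fmt, key):
    """Parse one raw log entry into a (UTC datetime, message) pair."""
    ts = datetime.strptime(entry[key], fmt).replace(tzinfo=timezone.utc)
    return ts, entry["msg"]

def build_timeline(*sources):
    """Merge normalized events from all sources into one time-sorted list."""
    events = []
    for entries, fmt, key in sources:
        events.extend(normalize(e, fmt, key) for e in entries)
    return sorted(events)

timeline = build_timeline(
    (auth_log, "%Y-%m-%dT%H:%M:%SZ", "ts"),
    (proxy_log, "%d/%b/%Y %H:%M:%S", "time"),
)
for ts, msg in timeline:
    print(ts.isoformat(), msg)
```

Ordering the failed login, the successful login, and the outbound connection this way makes the likely chain of events legible at a glance; the reasoning layer Google has added goes further and suggests which link in that chain is the probable root cause.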

And then there’s FACADE. Most people outside the field might not realise how rare it is to catch insider threats before damage is done. FACADE, which has been running internally at Google since 2018, flags anomalies based on context, not history. That means it can identify behaviour that looks unusual in real time, even if it doesn’t match past attack patterns.
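FACADE's actual features and model are far richer than anything that fits here, but the "context, not history" idea can be sketched in a few lines: score an access by how unusual it is for that actor's peer group, rather than by matching a list of known-bad signatures. The teams, resources, and scoring rule below are all hypothetical:

```python
from collections import Counter

# Toy access history: (team, resource) pairs observed over time.
history = [
    ("payments", "billing-db"), ("payments", "billing-db"),
    ("payments", "invoices"),
    ("frontend", "ui-repo"), ("frontend", "ui-repo"),
    ("frontend", "design-docs"),
]

def context_anomaly_score(team, resource, history):
    """How rarely does this team touch this resource?

    Returns 1.0 when the team has never accessed the resource (maximally
    unusual in context) and 0.0 when every recorded access by that team
    was to this resource. No attack signatures are involved.
    """
    team_events = [r for t, r in history if t == team]
    if not team_events:
        return 1.0
    counts = Counter(team_events)
    return 1.0 - counts[resource] / len(team_events)

# A frontend engineer reading the billing database has no precedent in
# their team's context, so it scores as highly anomalous even though it
# matches no known attack pattern.
print(context_anomaly_score("frontend", "billing-db", history))
print(context_anomaly_score("payments", "billing-db", history))
```

The same access scores differently depending on who makes it, which is precisely why context-based detection can catch a novel insider threat that a signature database never would.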

The architecture here is what stands out. These aren’t standalone tools. They’re being embedded within the broader security stack. That’s exactly how we’ve been thinking at iConnect. Security systems today must be built with intelligent agents integrated at every layer. Otherwise, they become reactive tools bolted onto outdated processes.

Working With AI, Not Around It

Cybersecurity is at a moment where people like me, who have been in this field for years, have to re-evaluate what it means to lead. Google’s plans for DEF CON 33 show this clearly. Teams of humans will work with AI agents to solve real-world problems. This is not academic. It’s practical preparation for what security teams will be doing every day.

At iConnect, we have already begun training programmes where our analysts work with AI models during live investigations. We’ve seen that AI systems don’t replace the analyst. But they do reduce the cognitive overload, highlight critical paths, and offer better clarity under pressure. The DARPA AI Cyber Challenge is showing us what happens when entire software ecosystems are defended by autonomous systems. This is not distant speculation. This is next quarter’s environment.

Security leadership now means building human-AI partnerships that function under real pressure. That means knowing when to trust the system’s output, and when to ask questions. And most importantly, it means ensuring our teams understand how these tools make decisions, so that we can stand by those outcomes when it matters.

Trust Is Not Optional

Every tool we deploy has to pass a threshold: do we understand what it’s doing, and can we explain its behaviour to others? Google’s work through the Coalition for Secure AI, including sharing its internal Secure AI Framework, is one of the more meaningful steps I’ve seen towards clarity in this space.

Security is no longer just about guarding data. It’s about understanding the systems doing the guarding. At iConnect, we insist on transparency when we evaluate AI models for security use. We ask how decisions are made, whether output can be validated, and what controls exist if something goes wrong.

Google’s whitepaper on secure-by-design AI makes this point well. Accountability must be built in. If AI agents are acting in enterprise environments, then those agents must be explainable and constrained. That’s not just a compliance requirement. It’s operational hygiene.

Why This Matters for Security Operations

Google’s latest updates were not just a showcase of emerging technologies. They served as a practical signal. The speed, scale, and sophistication of today’s threats are exceeding the capabilities of traditional security models. Rule-based detection, static playbooks, and manual investigation methods are falling behind in environments where every second counts.

AI is not a silver bullet. It will not solve every issue. But it has already demonstrated its ability to detect threats that would otherwise go unnoticed. Systems like Big Sleep and tools like Timesketch are showing that autonomous security agents can operate effectively, provide early warnings, and reduce the response time during critical incidents.

This brings the focus back to operational readiness. Are enterprise security teams redesigning their architecture to work with intelligent systems? Are the tools being adopted able to interpret context, act independently when needed, and offer explainable outputs? Just as important, are teams being trained to validate, manage, and govern these AI-driven systems responsibly?

Our emphasis at iConnect remains on outcomes. Every new tool, model, or technique is evaluated through one lens: does it enhance our ability to respond faster, operate with clarity, and maintain trust across the entire security lifecycle? Innovation alone is not the measure. Strength, speed, and accountability are.

Contact us

Partner with Us for Cutting-Edge IT Solutions

We’re happy to answer any questions you may have and help you determine which of our services best fit your needs.

What happens next?

1. We’ll arrange a call at your convenience.

2. We’ll hold a discovery and consulting meeting.

3. We’ll prepare a detailed proposal tailored to your requirements.

Schedule a Free Consultation