Securing the Future: Key Insights on AI’s Dual Role in Cybersecurity

Connection

We recently hosted a webinar with Jamal Khan, Chief Growth and Innovation Officer at Connection and Head of the CNXN Helix™ Center for Applied AI and Robotics—and guest speaker Allie Mellen, Principal Analyst at Forrester—to explore one of the most pressing questions in cybersecurity today: How is artificial intelligence reshaping both our defenses and the threats we face?

Both speakers bring deep expertise in the intersection of security operations, AI innovation, and risk management. Jamal leads Connection’s efforts to apply AI and robotics responsibly across industries, while Allie has spent years researching how AI and automation are transforming the Security Operations Center (SOC) and the broader cybersecurity ecosystem.

In recognition of National Cybersecurity Awareness Month, the Microsoft-sponsored webinar cut through the hype to reveal where AI is delivering real value—and where it still demands caution.

Separating Hype from Reality: Understanding AI’s Limitations

One of the first myths our speakers addressed was the idea that AI will replace the SOC.

Jamal noted that early predictions about fully automated SOCs “hit their reality factor.” He said that while automation and AI are valuable tools, the notion of replacing the SOC is misguided. “It still requires a significant amount of human involvement,” he explained.

Allie agreed, describing how her perspective has evolved. “To be honest, I think the past three years or so have been kind of a disappointment,” she said. “You can use generative AI for things like researching threat actors—but nothing really compelling. That’s changing now, particularly because of AI agents.”

Still, she cautioned that current capabilities don’t justify removing humans from the loop. “Sometimes AI is wrong, and we need a human to really understand what’s going on and make an informed decision.”

The conversation also touched on another misconception: that AI can effectively train junior analysts. Allie pointed to a recent study showing that “AI chatbots gave a wrong answer to more than 60% of queries.” She argued that using these tools to upskill staff “is a crazy idea… especially for someone deeply concerned with risk and risk management.”

Where AI Actually Delivers Value

Both speakers agreed that while AI won’t replace the SOC, it can make it far more efficient.

Jamal emphasized AI’s potential to “reduce toil”—the repetitive, manual tasks that weigh down security teams. He described how generative AI can “help us inform and build better decisions” by summarizing logs, generating case notes, and enriching alerts.

Allie added that she’s now seeing “AI agents that are purpose-built to do things like triage or initial investigation of incidents.” Those functions, she explained, “take up so much time in an analyst’s day, and to be seeing AI be used, and used effectively, for those functions… it’s actually really starting to get exciting.”

She categorized current AI applications in security into three areas:

  1. Content creation, such as report writing and script evaluation
  2. Knowledge articulation, including chatbots for research and intelligence queries
  3. Behavior modeling, where AI creates playbooks, generates parsers, and assists with investigation and triage

“This,” she said, “is where the real value add is going to be.”

The Double-Edged Sword: AI Empowers Attackers Too

The discussion also explored how AI is changing the threat landscape. Jamal noted that while AI supports defenders, it also gives attackers “scale management” and the ability to create highly personalized phishing, deepfakes, and automated reconnaissance.

Allie explained how AI lowers the technical barriers that once limited attackers. “Being able to operate among different types of infrastructure becomes very important,” she said. “That’s changing significantly because of AI.”

She referenced recent research showing that threat actors are already using AI to support ransomware-as-a-service operations, romance scams, and reconnaissance. “There are a lot of really effective things that attackers can use AI for to aid their efforts,” she said, adding that nation-state attackers will likely adopt agentic AI systems first, with cybercriminals following.

Jamal raised an interesting question about whether both attackers and defenders could eventually “over-trust AI.” If adversaries rely too heavily on automated systems, he suggested, “they become noisy and thereby less effective.”

Evaluating AI Security Products: Beyond the Marketing

When asked how to separate substance from marketing, Allie shared Forrester’s framework for evaluating AI in security products: trust, utility, and cost.

Trust: The biggest red flag is vendors that rely on user thumbs-up/thumbs-down feedback, essentially crowdsourcing quality assurance. The only truly reliable method at scale is expert validation, with dedicated teams evaluating outputs before actions are executed. We’re moving from deterministic software to non-deterministic systems, requiring new testing paradigms and continuous validation.

Utility: Is this feature genuinely useful for your team’s specific workflows, or is it a checkbox item that sounds impressive in vendor presentations?

Cost: Pricing models for AI features remain wildly inconsistent across vendors. Understanding total cost of ownership—including API calls, compute resources, and data processing—is essential before committing.

Organizations also need metrics to validate AI effectiveness. Are mean time to detect (MTTD) and mean time to respond (MTTR) improving? Is alert volume decreasing while detection accuracy increases? These operational metrics provide concrete evidence of value.
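As a minimal sketch of how such metrics can be computed, the snippet below derives MTTD and MTTR from a list of incident records. The field names and timestamps are illustrative assumptions, not a real data schema:

```python
from datetime import datetime

# Hypothetical incident records; "occurred", "detected", and "resolved"
# are illustrative field names, not a standard schema.
incidents = [
    {"occurred": "2025-10-01 08:00", "detected": "2025-10-01 09:30", "resolved": "2025-10-01 12:00"},
    {"occurred": "2025-10-03 14:00", "detected": "2025-10-03 14:20", "resolved": "2025-10-03 16:20"},
]

def _minutes(start: str, end: str) -> float:
    """Elapsed minutes between two timestamp strings."""
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

def mean_time_to_detect(records) -> float:
    """Average minutes from occurrence to detection (MTTD)."""
    return sum(_minutes(r["occurred"], r["detected"]) for r in records) / len(records)

def mean_time_to_respond(records) -> float:
    """Average minutes from detection to resolution (MTTR)."""
    return sum(_minutes(r["detected"], r["resolved"]) for r in records) / len(records)

print(f"MTTD: {mean_time_to_detect(incidents):.0f} min")  # 55 min
print(f"MTTR: {mean_time_to_respond(incidents):.0f} min")  # 135 min
```

Tracking these numbers before and after an AI rollout is what turns “the tool feels helpful” into evidence a CISO can present.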

The New Attack Surface: Securing AI Itself

Allie and Jamal also discussed the security challenges introduced by deploying AI.

Allie outlined three main areas of risk: users, applications, and models. Users face issues like prompt injection and data leakage; applications are vulnerable through vector databases and enterprise data; and models are exposed to inference attacks, tampering, and data poisoning.

She introduced Forrester’s new AEGIS framework (Agentic AI Enterprise Guardrails for Information Security), which focuses on:

  • Least agency, extending Zero Trust to limit what AI agents can do
  • Continuous risk management, replacing one-time assessments with ongoing monitoring
  • Explainable outcomes, so teams understand how and why AI systems make decisions
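The “least agency” principle can be sketched as a default-deny dispatcher: an agent may only execute actions on an explicit allowlist, while anything destructive is routed to a human. The action names and categories below are illustrative assumptions, not part of the AEGIS framework itself:

```python
# "Least agency" sketch: default-deny dispatch for AI agent actions.
# Action names are hypothetical examples for illustration only.

ALLOWED_ACTIONS = {"enrich_alert", "summarize_logs", "open_ticket"}   # safe to automate
NEEDS_HUMAN = {"isolate_host", "disable_account", "block_ip"}         # require approval

def dispatch(action: str) -> str:
    """Decide what an AI agent may do with a requested action."""
    if action in ALLOWED_ACTIONS:
        return f"executed: {action}"
    if action in NEEDS_HUMAN:
        return f"queued for analyst approval: {action}"
    # Anything unrecognized is denied outright (default-deny).
    return f"denied: {action}"

print(dispatch("enrich_alert"))   # executed
print(dispatch("isolate_host"))   # queued for analyst approval
print(dispatch("wipe_disk"))      # denied
```

The design choice mirrors least privilege in Zero Trust: the agent earns narrow, enumerated capabilities rather than broad access that must later be clawed back.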

Jamal noted that many existing risk frameworks struggle to keep pace with AI’s rapid evolution. “A lot of these frameworks, in their static nature, are always two steps behind,” he said.

The CISO Action Plan: A Phased Approach

To conclude the webinar, Jamal offered a pragmatic roadmap for CISOs.

First 30 Days: Conduct comprehensive AI inventory (organizations are often surprised by shadow AI already operating), establish clear policy guidelines, and launch high-value, low-risk pilots.

Next 30 Days: Build a model registry tracking approved models and their lineage, create prompt repositories, implement DLP controls on AI prompts (especially for PII), define evaluation processes, and develop incident response runbooks for AI failures.
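As a toy illustration of a DLP control on AI prompts, the sketch below screens a prompt for common PII patterns before it is submitted. The regexes are deliberately simplistic assumptions; a production control would rely on a vetted DLP service rather than hand-rolled patterns:

```python
import re

# Illustrative PII patterns only; real DLP uses vetted detectors,
# not hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the PII types found in a prompt; empty list means safe to submit."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

print(screen_prompt("Summarize this alert for host web-01"))
print(screen_prompt("Email jane.doe@example.com about SSN 123-45-6789"))
```

A check like this sits between the user and the model: prompts that trip a pattern are blocked or redacted before any data leaves the organization.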

Following 30 Days: Integrate AI with SOAR strategy, establish meaningful guardrails beyond basic controls, implement red teaming programs, and continuously improve based on real-world usage.

Allie added that CISOs must act as translators for their organizations. “There’s an expectation from other teams that you’re going to see significant value-add from AI very quickly,” she said. “Security has to explain why that’s not always realistic.” She recommended rolling out AI tools first to senior staff “who have the most understanding of your processes,” before extending access to less experienced analysts.

Final Takeaway

This discussion made one thing clear: AI won’t eliminate the need for security expertise; it will make that expertise indispensable. As the race between AI-wielding attackers and defenders intensifies, the path to success is strategic: adopt AI thoughtfully, maintain healthy skepticism, and keep humans firmly in control. This allows defenders to harness AI’s power for acceleration, not autonomy.

Watch the full webinar to hear the complete discussion on guardrails, real-world applications, and the future of agentic systems.

As a Microsoft Solutions Partner for Security—with all four advanced security specializations—Connection helps defend identities, data, and infrastructure from evolving cyberthreats while boosting operational efficiency. Learn how our Microsoft security experts work alongside your team to strengthen your security posture. To view our cybersecurity services, visit www.connection.com/cybersecurity.

© PC CONNECTION, INC. ALL RIGHTS RESERVED.