Human[X]: A Thought-provoking Dive into AI’s Uncharted Future

Jamal Khan

Last week, I attended Human[X], a conference that, at first glance, was an unknown quantity. In an era where nearly every event markets itself as “AI-focused,” picking the right one can be a gamble. However, the lineup of speakers intrigued me, and despite my initial skepticism, I walked away impressed—so much so that Human[X] is now on my annual must-attend list, with a caveat I’ll come to later.

Hosted at the Fontainebleau in Las Vegas, the event struck an ideal balance—large enough to house numerous insightful discussions yet intimate enough to facilitate meaningful interactions. The smaller audience turned out to be an advantage, allowing attendees to fluidly move between sessions, engage in deeper conversations, and extract more value. The challenge for next year? Scaling without losing this magic mix of intimacy, accessibility, and quality discourse. Hence the caveat.

The AI Landscape: Innovation Outpacing Application

One undeniable truth emerged from Human[X]: AI innovation is accelerating at an almost unmanageable speed, but real-world enterprise value is still being defined. The conference underscored a landscape in which AI models, expert systems, assistants, agentic frameworks, AI-enhanced applications, and embedded intelligence are evolving at a breakneck pace. Yet, paradoxically, organizations are still searching for the right use cases that balance business impact with feasibility.

This moment feels eerily reminiscent of the early Web era—a time when technology was advancing faster than businesses could absorb it, giving birth to an entirely new ecosystem. The same is happening with AI: while the fundamental building blocks exist, we are in the formative stages of discovering the real economic drivers of AI adoption.

Key AI Themes from Human[X]

Among the many discussions, some themes stood out as particularly critical for the near-term and long-term evolution of AI:

1. Agentic Frameworks: AI as Autonomous Decision-Makers

One of the most exciting (and troubling) developments discussed was the rise of agentic frameworks—AI systems that not only analyze and recommend but autonomously execute tasks within a defined scope. This marks a fundamental shift from AI as an assistant to AI as an active business participant.

For example, open-source frameworks such as Auto-GPT and BabyAGI (both built on OpenAI’s models) are early attempts at AI agents capable of independently breaking down complex tasks and iterating toward goals. Research from McKinsey suggests that AI-driven process automation could replace up to 30% of business tasks in key sectors like finance, law, and healthcare by 2030. If refined and broadly adopted, these systems could automate entire business functions, significantly reducing the need for human oversight.
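The core pattern these frameworks share can be sketched in a few lines. This is a minimal illustration only: the `plan` and `execute` functions below are stand-ins for what, in a real framework like Auto-GPT, would be calls to a language model and to external tools.

```python
# A minimal sketch of the agentic loop: decompose a goal into subtasks,
# execute each one, and collect results, all within a defined scope
# (here, an iteration cap). The planner and executor are illustrative
# stubs, not real framework APIs.

def plan(goal: str) -> list[str]:
    """Stub planner: break a goal into subtasks (an LLM call in practice)."""
    return [f"{goal} - step {i}" for i in range(1, 4)]

def execute(task: str) -> str:
    """Stub executor: perform one subtask and return its result."""
    return f"done: {task}"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Run the plan-execute loop, bounded by max_steps as a safety scope."""
    results = []
    for task in plan(goal)[:max_steps]:
        results.append(execute(task))
    return results

print(run_agent("summarize quarterly report"))
```

The interesting design question, and much of the "troubling" part, is what belongs inside that defined scope: which tools the agent may invoke, and where a human sign-off is required before execution.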

2. Trust Models for AI: The Precursor to Scale

Trust remains a critical barrier to AI adoption. As with cloud computing in its early days, organizations will not deploy AI at scale without confidence in security, bias mitigation, explainability, and regulatory compliance.

At Human[X], speakers repeatedly emphasized the need for trust frameworks—a structured approach to ensuring AI is deployed responsibly. Examples include:

  • Microsoft’s Responsible AI Framework, which integrates transparency and risk assessment into AI deployments.
  • NIST’s AI Risk Management Framework, a U.S. government-led initiative aimed at standardizing AI governance.
  • EU’s AI Act, which seeks to categorize AI applications by risk level, limiting use in high-risk scenarios like biometric surveillance.

Without a widely accepted AI trust model, enterprises will remain hesitant, and regulatory ambiguity will continue to serve as a brake on adoption.

3. AI for Cybersecurity: Automating the “Grunt Work”

Cybersecurity is a domain where AI is already making an impact—albeit in a limited way. Most cybersecurity tools today leverage AI for anomaly detection, log analysis, and threat intelligence, but we are rapidly moving towards autonomous cybersecurity agents capable of defending networks without human intervention. According to Gartner, AI-driven Security Operations Centers (SOCs) are projected to reduce manual cybersecurity workloads by 40% by 2027, thanks to AI’s ability to detect and respond to threats faster than human analysts.

Discussions at Human[X] revolved around AI’s role in:

  • Automating Security Operations Centers (SOCs): AI handling Tier-1 security tasks, reducing false positives, and allowing human analysts to focus on critical threats.
  • Threat Hunting with AI: AI-driven systems proactively seeking vulnerabilities rather than reacting to attacks.
  • Self-Healing Networks: AI autonomously responding to breaches, mitigating attacks before human intervention.
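The first bullet, automating Tier-1 triage, ultimately rests on anomaly detection: surface the events that deviate from baseline so human analysts only review the outliers. The toy sketch below illustrates the idea with a simple z-score filter; production SOC tooling uses far richer models, and the data here is invented for illustration.

```python
# Toy sketch of anomaly-based triage: flag values that deviate strongly
# from the baseline (z-score test), so analysts review only outliers.
from statistics import mean, stdev

def flag_anomalies(values: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of values more than `threshold` std devs from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts; the spike at index 5 is the
# one event worth a human analyst's attention.
logins = [4, 5, 3, 6, 4, 250, 5, 4]
print(flag_anomalies(logins))  # -> [5]
```

Even this crude filter shows the value proposition: seven of eight events never reach a human, which is the "grunt work" reduction the Gartner SOC projection is pointing at.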

While AI will not fully replace human expertise in cybersecurity in the short run, its ability to automate repetitive tasks and augment human analysts is already proving invaluable.

The Elephant in the Room: AI and Job Displacement

One major frustration I had at Human[X]—and at many AI conferences—is the unwillingness to address the job displacement debate with honesty. Many speakers contorted themselves to emphasize AI’s role in enhancing productivity rather than eliminating jobs. While this is partially true, it fails to acknowledge the inevitable second-order effects.

Short-term: The Rise of the AI-augmented Worker

In the near term, AI will boost individual productivity. Employees will be expected to operate at a higher level, leveraging AI as a force multiplier. This aligns with Atif Rafiq’s concept of a “higher bar for employee excellence”—where workers must bring more value to remain relevant.

Long-term: AI Will Have a Net Negative Impact on Jobs

However, long-term job-growth claims are questionable at best. Studies from the World Economic Forum (WEF) suggest AI will create 97 million new jobs by 2025, but these figures do not weigh job creation against the scale of job losses. The WEF’s 2025 job-growth projections also seem overly optimistic.

  • McKinsey’s research predicts up to 800 million jobs could be lost to automation by 2030.
  • A 2023 MIT study found that while AI does create new jobs, most require specialized skills that displaced workers do not possess.
  • Goldman Sachs estimates that AI-driven automation could replace 300 million full-time jobs globally.

I remain deeply skeptical that workforce re-skilling initiatives will close this gap. The narrative that AI job displacement will be balanced by job creation lacks empirical validation. Historically, re-skilling efforts to address industrial automation in the ’80s and ’90s failed, largely for lack of policy and regulatory support, and that failure gave us the Rust Belt and its social and political fallout. Those past experiences should be a clarion call for policymakers around the world. It is not just about new jobs emerging, which in itself seems a tall order; it is also about whether the displaced workforce can transition into them.

Instead of ignoring this reality, we need real policy discussions about how to manage workforce transitions, develop re-skilling programs, and mitigate economic fallout. Pretending job displacement isn’t happening does not make it any less real.

Final Thoughts: The Road Ahead

Human[X] successfully captured the complexity, diversity, and velocity of AI’s evolution. The major takeaways?

✔️ AI’s trajectory is moving faster than businesses can absorb it.

✔️ Agentic AI frameworks are poised to transform business operations.

✔️ Trust models are essential for AI adoption at scale.

✔️ AI’s role in cybersecurity is growing rapidly.

✔️ AI-driven job displacement is real, and we must start discussing it openly.

AI will disrupt almost every industry, and the speed of disruption will surprise everyone. The only question is: Are we prepared for the transformation that is coming?

On to the next conference: NVIDIA GTC.

What are your thoughts? Are we being honest enough about AI’s impact on jobs? Let’s have the conversation that needs to happen.

#AI #ArtificialIntelligence #AIinBusiness #FutureOfWork #AITrust #JobDisplacement #AIRegulation #AIAdoption #AIConferences #HumanX #NvidiaGTC #CNXNHelix #WeSolveIT #WeSolveAI #ConnectionIT

Generative AI was used in the creation of this blog post.

Jamal Khan holds a prominent leadership role in the fields of artificial intelligence and cybersecurity, serving as the Chief Growth and Innovation Officer at Connection and as the director of the CNXN Helix Center for Applied AI and Robotics. With a twenty-year tenure in various executive and strategic capacities, Mr. Khan is acclaimed for his adeptness in integrating multiple disciplines to spearhead innovative technological solutions. His expertise is primarily focused on the development of artificial intelligence strategies that span generative AI, computer vision, and natural language processing, with a significant emphasis on cybersecurity, compliance, and controls. Mr. Khan’s contributions to innovation are further evidenced by his co-invention of six patents, which center on human-machine interface design, data orchestration, and machine learning applications. In addition to his technical achievements, he is actively involved in the technology startup ecosystem as an investor and mentor. Mr. Khan is also recognized for his educational contributions, periodically lecturing at leading academic institutions and national forums on topics related to AI and cybersecurity. Previously, he served on the SPAC Board at Intel and is currently a member of the MPAB Board at Hewlett Packard Enterprise.

© PC CONNECTION, INC. ALL RIGHTS RESERVED.