In the Agents of Change podcast, host Anthony Witherspoon welcomes Archie Mayani, Chief Product Officer at GHX (Global Healthcare Exchange), to explore the vital role of artificial intelligence (AI) in healthcare.
GHX is a company that may not be visible to the average patient, but it plays a foundational role in ensuring healthcare systems operate efficiently. As Mayani describes it, GHX acts as “an invisible operating layer that helps hospitals get the right product at the right time, and most importantly, at the right cost.”
GHX’s mission is bold and clear: to enable affordable, quality healthcare for all. While the work may seem unglamorous, focused on infrastructure beneath the surface, it is, in Mayani’s words, “mission critical” to the healthcare system.
AI has always been integral to GHX’s operations, even before the term became a buzzword. Mayani points out that the company was one of the early adopters of technologies like Optical Character Recognition (OCR) within healthcare supply chains, long before such tools were formally labeled as AI.
This historical context underlines GHX’s longstanding commitment to innovation.
Now, with the rise of generative AI and agentic systems, the company’s use of AI has evolved significantly.
These advancements are all deployed in service of one goal: to provide value-based outcomes and affordable care to patients, especially where it’s needed most.
GHX builds resilience. That’s the ethos behind their proprietary system, aptly named Resiliency AI. The technology isn’t just about automation or cost savings; it’s about fortifying healthcare infrastructure so it can adapt and thrive in the face of change.
Mayani articulates this vision succinctly: “We are not just building tech for healthcare… we are building resilience into healthcare.”
Anthony, the podcast host, highlights a key point: AI’s impact in healthcare reaches far beyond business efficiency. It touches lives during their most vulnerable moments.
The episode highlights a refreshing narrative about AI: one not focused on threats or ethical concerns, but rather on how AI can be an instrument of positive, human-centered change.
One of the core themes explored in this episode of Agents of Change is the pressing importance of responsible AI, a topic gaining traction across industries but particularly crucial in healthcare. Host Anthony sets the stage by highlighting how ethics and responsibility are non-negotiable in sectors where human lives are at stake.
Archie Mayani agrees wholeheartedly, emphasizing that in healthcare, the stakes for AI development are dramatically different compared to other industries. “If you’re building a dating app, a hallucination is a funny story,” Mayani quips. “But in [healthcare], it’s a lawsuit; or worse, a life lost.” His candid contrast underscores the life-critical nature of responsible AI design in the medical field.
For GHX, building responsible AI begins with transparency and grounding. Mayani stresses that these principles are not abstract ideals, but operational necessities.
“Responsible AI isn’t optional in healthcare,” he states. It’s embedded in how GHX trains its AI models, especially those designed to predict the on-time delivery of surgical supplies, which are crucial for patient outcomes.
To ensure the highest level of reliability, GHX trains its AI models on a diverse range of data.
This comprehensive data approach allows GHX to build systems that not only optimize supply chain logistics but also anticipate and mitigate real-world disruptions, delivering tangible value to hospitals and, ultimately, patients.
One of the most compelling points Archie Mayani makes in the discussion is that AI must explain its logic with the clarity and accountability of a trained clinician. This is especially important when dealing with life-critical healthcare decisions. At GHX, every disruption prediction produced by their AI system is accompanied by a confidence score, a criticality ranking, and a clear trace of the data sources behind the insight.
“If you can’t explain it like a good clinician would, your AI model is not going to be as optimized or effective.”
This standard of explainability is what sets high-functioning healthcare AI apart. It’s not enough for a model to provide an output; it must articulate the “why” behind it in a way that builds trust and enables action from healthcare professionals.
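The explainability standard described above can be sketched in code. This is a hypothetical illustration, not GHX’s actual schema or API: the class name, fields, and formatting are invented to show how a disruption prediction might carry a confidence score, a criticality ranking, and a trace of its data sources.

```python
from dataclasses import dataclass, field

@dataclass
class DisruptionPrediction:
    """Hypothetical record for one supply-chain disruption prediction."""
    item: str                     # the supply at risk, e.g. "surgical gloves"
    predicted_delay_days: int     # projected delivery slip
    confidence: float             # model confidence in [0.0, 1.0]
    criticality: str              # e.g. "low" | "medium" | "high"
    sources: list = field(default_factory=list)  # data behind the insight

    def explain(self) -> str:
        # Surface the "why" the way a clinician would: claim, confidence, evidence.
        evidence = ", ".join(self.sources) or "no sources recorded"
        return (f"{self.item}: ~{self.predicted_delay_days}-day delay "
                f"({self.criticality} criticality, {self.confidence:.0%} confidence; "
                f"based on: {evidence})")

prediction = DisruptionPrediction(
    item="surgical gloves",
    predicted_delay_days=5,
    confidence=0.87,
    criticality="high",
    sources=["supplier lead times", "regional weather alerts"],
)
print(prediction.explain())
```

The point of the `explain` method is the one Mayani makes: the output is not just a number, but a statement a healthcare professional can interrogate and act on.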
Mayani also reflects on historical missteps in healthcare AI to highlight the importance of data diversity and governance. One case he references is early AI models for mammogram interpretation. These systems produced unreliable predictions because the training data lacked diversity across race, ethnicity, and socioeconomic background.
This led to models that “hallucinated”, not in the sense of whimsical errors, but with serious real-world implications. For example, differences in breast tissue density between African American and Caucasian women weren’t properly accounted for, leading to flawed diagnostic predictions.
To counteract this, GHX emphasizes diversity and rigorous governance in its training data.
This commitment helps ensure AI tools in healthcare are equitable, reliable, and aligned with patient realities, not just technical possibilities.
The conversation also touches on a universal truth in AI development: the outputs of any model are only as good as the inputs provided. As Anthony notes, AI doesn’t absolve humans of accountability. Instead, it reflects our biases and decisions.
“If an AI model has bias, often it’s reflective of our own societal bias. You can’t blame the model; it’s showing something about us.”
This reinforces a central thesis of the episode: responsible AI begins with responsible humans, those who train, test, and deploy the models with intention, transparency, and care.
As AI becomes more embedded in healthcare, public fear and discomfort are natural reactions, particularly when it comes to technologies that influence life-altering decisions. Anthony captures this sentiment, noting that any major innovation, especially in sensitive sectors like healthcare, inevitably raises concerns.
Archie Mayani agrees, emphasizing that fear can serve a constructive purpose. “You’re going to scale these agents and AI platforms to millions and billions of users,” he notes. “You better be sure about what you’re putting out there.” That fear, he adds, should drive greater diligence, bias mitigation, and responsibility in deployment.
The key to overcoming this fear? Transparency, communication, and a demonstrable commitment to ethical design. As Mayani and Anthony suggest, trust must be earned, not assumed. Building that trust involves both technical rigor and emotional intelligence to show stakeholders that AI can be both safe and valuable.
With a strong foundation in ethical responsibility, the conversation shifts to a pressing concern: scaling agentic AI models in healthcare environments. These are AI systems capable of autonomous decision-making within predefined constraints. They are highly useful, but difficult to deploy consistently at scale.
Mayani draws an apt analogy: scaling agentic AI in healthcare is like introducing a new surgical technique.
“You have to prove it works, and then prove it works everywhere.”
This speaks to a fundamental truth in health tech: context matters. An AI model trained on datasets from the Mayo Clinic, for example, cannot be transplanted wholesale into a rural community hospital in Arkansas. The operational environments, patient demographics, staff workflows, and infrastructure are vastly different.
For product leaders like Mayani, scale and monetization are the twin pressures of modern AI deployment. And in healthcare, the cost of getting it wrong is too high to ignore.
To illustrate how agentic AI can be successfully scaled in healthcare, Archie Mayani introduces one of GHX’s flagship products: Resiliency Center. This tool exemplifies how AI can predict and respond to supply chain disruptions at scale, offering evidence-based solutions in real time.
Resiliency Center is designed to predict supply chain disruptions and recommend clinically equivalent substitute products in real time.
These “near-neighborhood” product recommendations are not only clinically valid, but context-aware. This ensures that providers always have access to the right product, at the right time, at the right cost, a guiding principle for GHX.
“The definition of ‘right’ is really rooted in quality outcomes for the patient and providing access to affordable care, everywhere.”
This operational model is a clear example of scaling with purpose. It reflects Mayani’s earlier point: you can’t scale effectively without training on the right datasets and incorporating robust feedback loops to detect and resolve model inaccuracies.
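A “near-neighborhood” recommendation of the kind described above can be sketched roughly as follows. GHX’s actual model is proprietary; the catalog, equivalence classes, and field names here are invented for illustration. The idea is that candidates must be clinically equivalent (validity) and in stock for the requesting region (context), and are then ranked by cost.

```python
# Hypothetical mini-catalog: product, clinical equivalence class,
# unit cost, and regions with stock on hand.
CATALOG = [
    {"sku": "GLOVE-A", "class": "nitrile-exam-glove", "cost": 0.14, "in_stock": {"southeast"}},
    {"sku": "GLOVE-B", "class": "nitrile-exam-glove", "cost": 0.11, "in_stock": {"midwest", "southeast"}},
    {"sku": "GOWN-C",  "class": "isolation-gown",     "cost": 1.90, "in_stock": {"southeast"}},
]

def near_neighbors(disrupted_class: str, region: str) -> list:
    """Clinically valid, context-aware substitutes, cheapest first."""
    candidates = [
        p for p in CATALOG
        if p["class"] == disrupted_class   # clinical validity
        and region in p["in_stock"]        # regional context
    ]
    return sorted(candidates, key=lambda p: p["cost"])

for product in near_neighbors("nitrile-exam-glove", "southeast"):
    print(product["sku"], product["cost"])
```

Even in this toy form, the ordering reflects the guiding principle from the episode: the right product, at the right time, at the right cost.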
As the conversation shifts to the nature of healthcare data, Anthony raises a key issue: data fragmentation. In healthcare, data often exists in disconnected silos, across hospitals, systems, devices, and patient records, making it notoriously difficult to use at scale.
Mayani affirms that overcoming this fragmentation is essential for responsible and effective AI. The foundation of scalable, bias-free, and high-performance AI models lies in two critical pillars: the diversity of the data and the governance of the data.
“All of that, scaling, performance, bias mitigation, it ultimately comes down to the diversity and governance of the data.”
This framing offers a critical insight for healthcare leaders and AI practitioners alike: data is the bedrock of trustworthy AI systems in medicine.
One of the most illustrative examples of data diversity’s value came when GHX’s models flagged a surgical glove shortage in small rural hospitals, a disruption that wasn’t immediately visible in larger healthcare systems.
This nuanced insight could only emerge from a truly diverse dataset. As Archie Mayani explains, if GHX had only trained its models using data from California, it might have entirely overlooked seasonal and regional challenges, like hurricanes in the Southeast or snowstorms in Minnesota, that affect supply chains differently.
“Healthcare isn’t a monolith. It’s a mosaic.”
That mosaic requires regionally relevant, context-sensitive data inputs to train agentic AI systems capable of functioning across a broad landscape of clinical settings.
Diversity in data is only part of the solution. Trust in data sources is equally critical. Archie points out a fundamental truth: not all datasets are equally valid. Some may be outdated, siloed, or disconnected from today’s realities. And when AI systems train on these flawed sources, their predictions suffer.
This is where GHX’s role as a trusted intermediary becomes essential. For over 25 years, GHX has served as a neutral and credible bridge between providers and suppliers, earning the trust required to curate, unify, and validate critical healthcare data.
“You need a trusted entity… not only for diverse datasets, but the most accurate, most reliable, most trusted datasets in the world.”
GHX facilitates cooperation across the entire healthcare data ecosystem, from providers and suppliers to the EMR and ERP systems that connect them.
This integrated ecosystem approach ensures the veracity of data and enables more accurate, bias-aware AI models.
Anthony aptly summarizes this insight as a two-pronged strategy: it’s not enough to have diverse datasets; you also need high-veracity data that’s trusted, updated, and contextually relevant. Mayani agrees, adding that agentic AI cannot function in isolation; it depends on a unified and collaborative network of stakeholders.
“It’s beyond a network. It’s an ecosystem.”
By connecting with EMRs, ERPs, and every link in the healthcare chain, GHX ensures its AI models are both informed by real-world variability and grounded in validated data sources.
Archie Mayani makes an important distinction between classical AI and agentic AI in healthcare. For decades, classical AI and machine learning have supported clinical decision-making, helping clinicians with diagnostics and risk stratification.
“We’ve always leveraged classical AI in healthcare… but agentic AI is different.”
Unlike classical models that deliver discrete outputs, agentic AI focuses on workflows. It has the potential to abstract, automate, and optimize full processes, making it uniquely suited to address the growing pressures in modern healthcare.
Mayani highlights the crisis of capacity in today’s healthcare systems, particularly in the U.S.
In this context, agentic AI emerges as a co-pilot. It supports overburdened staff by automating routine tasks, connecting data points, and offering intelligent recommendations that extend beyond the exam room.
One of the most compelling examples Mayani shares involves a patient with recurring asthma arriving at the emergency department. Traditionally, treatment would focus on the immediate clinical issue. But agentic AI can see the bigger picture, connecting the clinical encounter with contextual data from beyond the exam room.
With this information, the healthcare team can address the root cause, not just the symptom. This turns reactive treatment into proactive, preventative care, reducing waste and improving outcomes.
“Now you’re not treating a condition. You’re addressing a root cause.”
This approach is rooted in the Whole Person Care model, which Mayani recalls from his earlier career. While that model once relied on community health workers stitching together fragmented records, today’s agentic AI can do the same work faster, more reliably, and at scale.
Ultimately, Mayani envisions agentic AI as a full-fledged member of the care team.
This marks a paradigm shift, from episodic, condition-focused care to integrated, data-driven, human-centered healing.
One of the most transformative promises of agentic AI in healthcare is its ability to identify root causes faster, significantly reducing both costs and systemic waste. As Anthony notes, the delay in getting to a solution often drives up costs unnecessarily, and Mayani agrees.
“Prevention is better than cure… and right now, as we are fighting costs and waste, it hasn’t been truer than any other time before.”
Agentic AI enables care teams to move from reactive service delivery to proactive problem-solving, aligning healthcare with long-promised, but rarely achieved, goals like holistic and whole-person care. The way Mayani describes it, this is now a practical, scalable reality.
Looking back at the COVID-19 pandemic, Mayani reflects on one of the biggest shocks to modern healthcare: supply chain collapse. It wasn’t due to a lack of data; healthcare generates 4x more data than most industries. The failure was one of foresight and preparedness.
“The supply chain broke not because we didn’t have the data, but because we didn’t have the foresight.”
This crisis has become a compelling event that has accelerated innovation. GHX’s own AI-driven Resiliency Center now includes early versions of systems that can not only detect disruptions but respond to them.
Mayani likens this transformation to going from a smoke detector to a sprinkler system: not just identifying the problem, but acting swiftly to stop it before it spreads.
COVID-19 may have been an unprecedented tragedy, but it forced healthcare organizations to centralize data, embrace cloud infrastructure, and accelerate digital transformation.
Before 2020, many health systems were still debating whether mission-critical platforms should move to the cloud. Post-crisis, the conversation shifted from adoption to acceleration, opening the door to advanced technologies like AI and GenAI.
“Necessity leads to innovation,” as Anthony puts it, and Mayani agrees.
The result is a more resilient, more responsive healthcare system, better equipped to navigate future challenges, from pandemics to geopolitical shifts to tariff policy changes. GHX now plays a pivotal role in helping suppliers and providers understand and act on these evolving variables through data visibility and decision-making intelligence.
While agentic AI offers powerful capabilities, hallucinations remain a significant risk, particularly in healthcare, where errors can have devastating consequences. Archie Mayani openly acknowledges this challenge: even with high-quality, diverse, and rigorously governed datasets, hallucinations can still occur.
Drawing from his early work with diagnostic models for lung nodules and breast cancer detection, Mayani explains that hallucinations often stem from data density issues or incomplete contextual awareness. These can lead to missed detections on one hand, or spurious findings on the other.
Both are catastrophic in their own way, and both highlight the need for fail-safes and human oversight.
To mitigate these risks, GHX employs a multi-layered approach that pairs technical fail-safes with human oversight.
This framework ensures that AI earns trust through performance, reliability, and responsibility, not just promises.
When asked to predict the future of agentic AI in healthcare, Mayani presents a powerful vision: a world where AI becomes invisible.
“When AI disappears… that’s when we’ve truly won.”
He envisions a future where AI agents across systems, such as GHX’s Resiliency AI and hospital EMRs, communicate autonomously. A nurse, for instance, receives necessary supplies without ever placing an order, because the agents already anticipated the need based on scheduled procedures and clinical preferences.
This is the true potential of agentic AI: not to dazzle us with flashy features, but to blend so naturally into the work that it disappears.
As AI becomes more embedded in daily life, public perception is shifting from fear to discovery, and now, toward normalization. As Mayani and Anthony discuss, many people already use AI daily (in smartphones, reminders, and apps) without even realizing it.
The goal is for agentic AI to follow the same path: to support people, not replace them; to augment creativity, not suppress it; and to enable higher-order problem-solving by removing repetitive, predictable tasks.
“It’s never about the agents taking over the world. They are here so that we can do the higher-order bits.”
The future of healthcare lies not in whether AI will be used but how. And leaders like Archie Mayani at GHX are laying the foundation for AI that is ethical, explainable, resilient, and invisible.
From predicting disruptions and recommending evidence-based alternatives to coordinating care and addressing root causes, agentic AI is already reshaping how we deliver and experience healthcare.
The next chapter begins when it quietly steps into the background, empowering humans to do what they do best: care.