Wise Thoughts May 2025 | AI Automation On Deck

Overcoming confirmation bias when it comes to AI

I am often learning about and testing AI use cases with a mind to improving my own work and imagining implications for the future. When discussing this with others, especially peers in healthcare, I am often met with a story about a failed AI experience, such as a chatbot hallucination or an incorrect CPT code, usually ending with the conclusion that AI isn't worth all the hype and that it will be a long time before it can replace humans.

This speaks to a form of cognitive bias, confirmation bias, where we tend to seek and remember evidence that supports our own theory or preference. Because it is difficult to imagine machines being as good as humans, or because we don't want to lose jobs that are critical to an organization or community, it can be satisfying to recount evidence that confirms our preferred belief: that we have a long time horizon before we have to worry about AI replacing our current workforce.

We need to be clear-eyed and non-defensive when we look at AI’s progress and potential.

Exponential progress

These anecdotes treat technology as though it were static or slow-moving. In reality, AI is evolving at a rate not seen with other emerging technologies, and this progress is not slowing down. We are well into the deployment of powerful AI models, yet their capabilities continue to grow at a rate that is difficult for most of us to fully grasp. The development curve is exponential, meaning the improvements come faster and faster over time. Because this kind of growth is hard to visualize, researchers often use logarithmic scales in their graphs, which smooth the curve and make it appear more gradual. But the underlying progress is steep. Each new model isn't just incrementally better; it often represents a major leap forward. So when we base our assumptions on what today's AI can or can't do, we risk underestimating how quickly those boundaries are moving.

A recent episode of Exponential View with Azeem Azhar features an interview with Steve Hsu about his startup, Superfocus (you can listen to the episode here: The Difference Between Early and Late AI Adopters). It highlights an approach that directs a large language model to respond only from a fixed, trusted data source. This eliminates hallucinations while keeping the flexibility and depth of a conversational agent. According to the discussion, this approach could replace 80 to 90 percent of call center agents.

Such AI systems, built on large language models (LLMs), can interpret diverse queries, access multiple data sets, and deliver personalized answers with accuracy. They are already beginning to outperform the human standard in speed, availability, and precision. And rather than navigating a clunky script, customers can speak to the model in their natural language and hear responses back in the same manner.

I love humans

Let me be clear: I am not saying that a technology solution is better, and I am not arguing that AI should replace humans. In fact, I often yearn for a simpler world, one with fewer screens and more direct interaction. But our feelings about technology do not stop its progress.

Overriding our assumptions and even our own experiences

The example above focuses on customer service, but it also points to something larger. The promise of AI automation is going to challenge our assumptions in virtually every part of healthcare: administration, scheduling, documentation, clinical decision support, outreach, and more.

Many healthcare organizations pride themselves on the human touch. Some see it as a differentiator to have a local person always answer the phone. But if a competitor offers 24/7 conversational support through AI that is more accurate and less costly, how does this affect your own operating model? What happens when consumers begin to expect that level of response because they have already experienced it elsewhere?

This is not an argument to erase the human element; it will continue to matter. But if this technology is already available and expanding, then rather than preserving how we have always done things, we can ask ourselves some new questions:

  • How can we bring it into our organizations in ways that reflect our mission, our culture, and our communities?

  • How can we design for efficiency and value without losing what makes our institutions trusted?

  • Are we checking the AI box with incremental improvements, or are we truly looking at bold steps that can improve the patient/consumer experience while meaningfully increasing access and lowering costs?

  • How do we ensure that our business model remains efficient enough to continue delivering on our mission in the healthcare landscape of the next era?

I know that many of you reading this are well advanced on the AI journey and could teach me everything above and more. Others of you are still getting started. Wherever you are along the spectrum, I'd love to continue this dialogue.

With you in goodness,

Nancy