At ViVE 2026 in Los Angeles (Feb. 22–25), Eli Ben-Joseph, CEO and co-founder of Regard, joined Joe Evans (Sentara Health), Dr. Suchi Saria (Bayesian Health), and Nikhil Budman (Ambience Healthcare) for a panel on how health systems can unlock the full value of the data they already have to drive real clinical and financial impact. This recap synthesizes key takeaways and advice from the discussion.

Key takeaways
- Most clinicians only have time to review 3% of the data in a patient’s chart, and the other 97% goes unseen, along with the clinical and financial value it holds.
- Data quality matters more than model selection. How well an AI performs depends less on the model and more on the quality of the data being fed into it.
- Adoption stalls when the back end isn’t solved. Saving clinicians time at the point of care means nothing if coding and CDI teams are still chasing them down after the fact.
- Proving value on two fronts is non-negotiable. Health systems that agree on both clinical and financial KPIs before implementation are far more likely to see results that last.

The data sitting in health system charts is healthcare’s most underused asset
EHRs are the backbone of modern healthcare and hold everything known about a patient, yet the vast majority of the clinical intelligence captured inside them never gets used. Clinicians only have time to review 3% of available chart data, and over 80% of what’s in the chart is unstructured and out of reach of most analytics tools. As Eli Ben-Joseph noted, health systems are beginning to recognize this missed opportunity: “We’ve made these investments over the last decade in EHRs and putting these platforms together to store data … we’re now seeing health systems looking for ways to utilize that.” He recommended that health systems look beyond products that solve one particular problem and instead choose platforms that let them apply that data to any number of solutions.
The same patient record can simultaneously generate a clinical note, support the CDI team, and ensure the billing team isn’t leaving reimbursement on the table, but most health systems still purchase point solutions for each function, addressing each at a fraction of the efficiency a unified data layer makes possible. As Joe Evans put it: “The first core in the clinical space is to be able to pull all that data together in an organized way — structured, unstructured, and even outside data — and create that full clinical picture.”

What separates AI that performs from AI that overpromises
That data foundation, the panel argued, is also what determines whether AI delivers in production. Health systems tend to evaluate AI vendors by their models, but the quality of the data feeding into them matters more. Ben-Joseph pointed out that “people aren’t talking about how important it is to feed those models the right data. If you don’t structure the data the right way, if the data is messy, those models aren’t performing as they could be.”
Beyond data quality, the panel drew a distinction between AI systems that summarize and extract text and those capable of genuine clinical reasoning. As Dr. Saria explained: “You need to understand cause and effect. If somebody’s got Coumadin, their INR levels are going to go up. That kind of understanding of what the different measurements are, what they mean, and how they interrelate is a fundamental area of AI you absolutely need in order to start doing real clinical intelligence and reasoning.” In high-stakes conditions like sepsis and cardiac events, this can often mean the difference between life and death.

Why so many implementations lose momentum after a strong pilot
Despite growing appetite for AI across health systems, implementation still runs into resistance at the clinical level. The cognitive load clinicians are already carrying means that for technology to be adopted it has to be, as the panel put it, “iPhone easy.” Care doesn’t happen in a silo either, and if a solution saves time at the point of care but leaves coders and CDI teams chasing clinicians to fix documentation afterwards, the net time saved is zero and adoption won’t hold. Building a complete picture of the patient at the point of care reduces that back-and-forth rather than shifting it downstream.
Evans shared that change fatigue is a common factor that makes implementation riskier: clinicians who have cycled through enough tools that don’t last will simply disengage, writing off each new solution as the flavor of the day before it has a chance to prove its value.

What actually drives lasting adoption
Getting AI to stick requires treating clinical adoption and administrative ROI as a single problem. Eli Ben-Joseph framed the tension directly: “It’s very challenging for companies that sit in the workflow to have to live in this balance between creating something that will clearly add clinical value, while also being able to drive an ROI that’s measurable.” His recommendation was to lock in KPIs before implementation and hold vendors accountable to both — clinical impact at the point of care and the downstream effect on quality metrics, coding, and reimbursement.
Platforms that reason across the full EHR generate returns that grow as more use cases are added, but the decision to move off an existing vendor isn’t always straightforward. As Evans noted, if a platform like Epic is close to delivering an in-demand capability, switching may not be worth it; but if it’s more than a year out and a better solution exists, health systems shouldn’t settle.
For Regard, proving value means measuring what’s captured at the point of care rather than recovered through retrospective queries after the fact, a standard every health system should hold its vendors to.