Apr 16, 2026
Who’s Measuring What AI Actually Fixes in the Revenue Cycle?

By Inger Sivanthi, CEO, Droidal Healthcare Solutions.
Every few months, another health system announces it has deployed artificial intelligence across its revenue cycle. The press release follows a familiar script: reduced denials, faster authorizations, staff hours reclaimed, efficiency unlocked. What almost never appears in that announcement is a second document, the one that defines how the organization will know, twelve months from now, whether any of that is actually true.
That absence is not an accident. It reflects something deeper about how healthcare has historically treated its administrative infrastructure: as a problem to manage rather than a system to understand. And now, as AI tools move from pilot programs into operational deployment at scale, that gap is creating real operational risk in live production environments.
I’ve spent more than twelve years working alongside revenue cycle teams, coders, billers, authorization specialists, and CFOs, and I can say with some confidence that most of the people closest to this work are deeply skeptical of headlines. They’ve seen technology promises before. They remember the EHR implementations that were supposed to streamline documentation and instead added hours to the physician workday. They remember the clearinghouse upgrades that relieved one bottleneck and created three others downstream. They aren’t cynics. They’re people who have learned, through experience, that what a system claims to do and what it actually does inside a live operational environment are often very different things.
That skepticism is not resistance to change. It is exactly the kind of operational discipline that should shape how AI gets evaluated and deployed.
The problem right now is that the industry has skipped that step. Conference stages are crowded with transformation narratives. Health systems facing tight margins and persistent staffing shortages feel genuine urgency to find operational relief. All of that is understandable. But urgency without accountability is how you end up automating broken processes rather than fixing them. And in the revenue cycle, broken processes don’t just affect the balance sheet. They affect whether a patient gets a procedure approved on time. They affect whether a physician burns another hour on paperwork that should have taken ten minutes. They affect the trust that providers, payers, and patients depend on to make the system function.
What I find missing in most AI deployment conversations is a straightforward commitment to answering a basic question before the contract is signed: what does success look like, and how will we measure it independently? By clear, pre-specified performance benchmarks, first-pass resolution rates, authorization turnaround times, denial overturn rates, measured against a documented baseline and evaluated at regular intervals by people inside the organization who are empowered to say when something is not working.
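To make those benchmarks concrete, here is a minimal sketch of how the three metrics named above could be computed from claim-level records. The field names and the sample data are hypothetical, not a real clearinghouse or EHR schema; a real baseline would be built from the organization’s own claims history.

```python
from statistics import mean

# Illustrative claim records; field names are hypothetical,
# not drawn from any real billing system.
claims = [
    {"paid_first_pass": True,  "auth_days": 3, "appealed": False, "overturned": False},
    {"paid_first_pass": False, "auth_days": 9, "appealed": True,  "overturned": True},
    {"paid_first_pass": True,  "auth_days": 2, "appealed": False, "overturned": False},
    {"paid_first_pass": False, "auth_days": 7, "appealed": True,  "overturned": False},
]

# Share of claims resolved on first submission.
first_pass_rate = 100 * sum(c["paid_first_pass"] for c in claims) / len(claims)

# Mean prior-authorization turnaround, in days.
turnaround = mean(c["auth_days"] for c in claims)

# Of the denials that were appealed, what share were overturned.
appealed = [c for c in claims if c["appealed"]]
overturn_rate = 100 * sum(c["overturned"] for c in appealed) / len(appealed)

print(f"first-pass resolution: {first_pass_rate:.0f}%")   # 50%
print(f"mean auth turnaround:  {turnaround:.2f} days")    # 5.25 days
print(f"denial overturn rate:  {overturn_rate:.0f}%")     # 50%
```

The point is not the arithmetic, which is trivial; it is that each number has an unambiguous definition that can be written down before deployment and recomputed identically afterward.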
Part of the reason is structural. Revenue cycle operations in most health systems sit in a complicated organizational space: accountable to finance, connected to clinical operations, dependent on technology infrastructure managed by IT, and constrained by payer relationships that nobody controls completely. That diffusion of accountability makes it genuinely difficult to assign ownership over AI performance. When a denial rate creeps up six months after an AI tool goes live, the question of who is responsible for diagnosing why, whether the technology team, RCM leadership, or the vendor, rarely has a clean answer. So the question often goes unasked, or gets absorbed into the background noise of operational management.
The other half is cultural. Healthcare administration has a long tradition of accepting complexity as inherent rather than examining it as designed. Prior authorization, to take the most visible example, has become so procedurally dense that many organizations have simply built workforces around navigating it rather than questioning whether the navigation itself can be fundamentally restructured.
The scale of that problem is not abstract: according to CMS, more than 53 million prior authorization requests were submitted to Medicare Advantage insurers in 2024 alone, and of the denials that were appealed, more than 80% were ultimately overturned. AI can reduce the friction of that navigation. But if the underlying logic of the process remains unchanged, if the criteria are still opaque, the payer responses still inconsistent, the documentation requirements still disconnected from clinical reality, then automation speeds up a broken system without healing it. That is a meaningful distinction, and it is one that outcome measurement frameworks need to be designed to capture.
What better practice looks like, in my view, is fairly concrete. It begins with a pre-deployment audit: a clear-eyed inventory of where the revenue cycle is actually failing, not where it looks like it might benefit from technology. It requires that AI tools be evaluated against those specific failure points, with defined thresholds for what improvement looks like at thirty, ninety, and one hundred eighty days.
It demands that operational staff, the people who work inside these processes every day, have a formal mechanism to surface when a tool is creating new problems, not just solving old ones. And it insists that model performance be reviewed on a scheduled basis, because the payer landscape doesn’t hold still, and a model trained on last year’s coverage criteria may be quietly degrading against this year’s.
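Those pre-specified thresholds can be made literal. A minimal sketch, with hypothetical KPI names, targets, and numbers (nothing here comes from a real contract or health system), of what a scheduled thirty-, ninety-, or one-hundred-eighty-day checkpoint review might compute:

```python
from dataclasses import dataclass

@dataclass
class KPISnapshot:
    first_pass_resolution_rate: float  # % of claims paid on first submission
    auth_turnaround_days: float        # mean prior-auth turnaround, in days
    denial_overturn_rate: float        # % of appealed denials overturned

def evaluate_checkpoint(baseline: KPISnapshot, current: KPISnapshot,
                        min_fpr_gain: float, max_turnaround: float) -> list[str]:
    """Return flags wherever the tool misses its pre-specified targets."""
    flags = []
    if current.first_pass_resolution_rate < baseline.first_pass_resolution_rate + min_fpr_gain:
        flags.append("first-pass resolution below contracted improvement")
    if current.auth_turnaround_days > max_turnaround:
        flags.append("authorization turnaround above threshold")
    if current.denial_overturn_rate > baseline.denial_overturn_rate:
        flags.append("overturn rate rising: denials may be inappropriate")
    return flags

baseline = KPISnapshot(82.0, 6.5, 55.0)   # documented pre-deployment baseline
day_90   = KPISnapshot(83.0, 4.2, 61.0)   # illustrative 90-day numbers

print(evaluate_checkpoint(baseline, day_90, min_fpr_gain=3.0, max_turnaround=5.0))
```

Note that faster turnaround alone does not clear the checkpoint: in this illustration the tool hits its speed target while missing the resolution target and showing a rising overturn rate, which is exactly the pattern a vendor dashboard built around speed would not surface.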
None of this is technologically complicated. It is organizationally disciplined. And that distinction matters, because the conversations health systems need to have about AI accountability are not primarily conversations with vendors. They are internal conversations about how seriously the organization intends to govern its own operations.
Policymakers have a parallel responsibility. As federal and state attention increasingly focuses on prior authorization reform and payer transparency, there is an opportunity to embed outcome reporting requirements into any regulatory framework that governs automated administrative decision-making. An AI system that accelerates a payer’s denial process without improving clinical appropriateness is not a healthcare innovation. It is an efficiency tool for the payer, not an improvement in care decision-making. Regulators should require that distinction to be measurable and reported, not left to vendor interpretation.
The potential here is real. The revenue cycle absorbs an extraordinary share of healthcare resources, resources that could otherwise support direct patient care, workforce retention, or capital investment in underserved communities. Thoughtful AI deployment, governed by rigorous measurement, can free up meaningful capacity across the system. I’ve seen it work in contained, well-designed implementations. The problem is not that the technology can’t deliver. The problem is that without accountability frameworks, we won’t actually know when it does, and we won’t catch it when it doesn’t.
Healthcare has spent years debating what AI can do. It is past time to build the infrastructure to find out what it is doing.
