Medicare's new payment model is built for AI. Most of the tech world has no idea
techcrunch.com
63 points by brandonb 5 hours ago
I run a YC startup that was accepted to Medicare ACCESS.
Historically, insurance has paid for activity: time spent in visits, RVUs generated, and minutes logged. This was a reasonable starting point, but the flaw is that there's no strong incentive to be efficient.
ACCESS is explicitly a "deflationary" approach. Medicare has set the payment rates high enough to be viable for startups, but low enough that you have to use software (including AI) to deliver a large part of your program.
So Medicare has basically created economic incentives to reward software without prescribing the exact shape of the programs. I thought it was a really interesting approach and builds on 15 years of lessons from CMMI (Medicare's innovation group).
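To make the incentive concrete, here's a toy sketch of the economics in Python (all numbers are invented for illustration and are not actual ACCESS rates):

    # Toy model: fixed payment per patient vs. blended delivery cost.
    # Every number here is hypothetical.
    def margin_per_patient(payment: float,
                           clinician_cost: float,
                           software_cost: float,
                           share_automated: float) -> float:
        """Margin when a fraction of the program is delivered by software."""
        cost = (1 - share_automated) * clinician_cost + share_automated * software_cost
        return payment - cost

    print(margin_per_patient(150, 200, 15, share_automated=0.0))  # -50.0: all-human loses money
    print(margin_per_patient(150, 200, 15, share_automated=0.7))  # 79.5: software-heavy is viable

The point is just that a rate set between those two cost structures makes software-heavy delivery the only way the program pencils out.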
I would maybe modify this to say there is a strong incentive to be efficient - you only make so much money per encounter, per DRG hospital stay, etc. So the pressure from "management" on a lot of us clinicians is to see more people per day and make each hospital visit as short as possible. Medicaid providers now see something like 50-60 patients a day because the payment per visit is relatively low. But there isn't as much incentive for outcomes. I think CMS has tried that in the past, with varying success. Whether this new mousetrap will work, who knows.
The existing CPT codes (roughly) pay in proportion to physician time (RVUs). So I wouldn't say there's an incentive toward delivering care efficiently, but rather that hospital management wants to maximize billable hours.
Oh no, there's both. At least for consultations, there are only 3 inpatient / 5 outpatient levels of CPT codes, which are justified by complexity and/or time. And patients tend to be pretty complex, so it's not hard to justify a level 4 or 5 CPT code; any less than that and the patient usually has absolutely nothing wrong with them. And at best, at max complexity, Medicare pays something like $227 per CPT code. So to keep the lights on, you'd better figure out a way to see 14, 16, 20 patients a day... a practice cannot stay afloat if you take 45 minutes to an hour on a level 5 visit.
For hospital stays (I may be outdated on this), Medicare pays a lump-sum DRG payment that doesn't tend to go up much, so the longer the patient is in the hospital, the less money the hospital makes.
Short story is the biggest pressure from the higher-ups is for us to see more volume outpatient and cut duration of stays inpatient...
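Rough back-of-envelope on the pressures described above (the $227 is from my comment; the overhead and DRG figures are made-up round numbers):

    import math

    # Outpatient: a fixed max payment per consult forces volume.
    MAX_CONSULT_PAYMENT = 227.0   # ~max Medicare payment per consult CPT code (from above)
    DAILY_OVERHEAD = 3000.0       # hypothetical daily practice overhead (staff, rent, billing)
    print(math.ceil(DAILY_OVERHEAD / MAX_CONSULT_PAYMENT))  # 14 -> "see 14, 16, 20 patients a day"

    # Inpatient: a lump-sum DRG means every extra day dilutes the payment.
    DRG_LUMP_SUM = 12000.0        # hypothetical lump-sum DRG payment
    for length_of_stay in (3, 5, 8):
        print(length_of_stay, DRG_LUMP_SUM / length_of_stay)  # revenue per bed-day falls as stays lengthen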
Why isn't this vulnerable to the upcoding problem that plagues Medicare Advantage plans?
>rewards health outcomes rather than required activities… earn the full amount only when patients meet measurable health goals, like lower blood pressure or reduced pain
They'll just start cherry-picking their patients, finding ways to squeeze out the people just that little bit lower on the prognosis curve. Or at least that will be the risk in a setup like that.
You see upcoding in risk-based programs already too (like Medicare Advantage). It's trivial to say a patient is sicker than they are and then let them "miraculously improve".
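A toy simulation makes the mechanism obvious (all numbers invented):

    import random

    # Outcome-based pay rewards (recorded baseline - final severity), so
    # inflating the recorded baseline manufactures "improvement" without
    # the patient changing at all.
    random.seed(0)

    def measured_improvement(true_severity: float, upcode: float) -> float:
        recorded_baseline = true_severity + upcode        # the provider controls this
        final = true_severity + random.gauss(0, 0.5)      # the patient barely changes
        return recorded_baseline - final

    honest = sum(measured_improvement(5.0, upcode=0.0) for _ in range(10_000)) / 10_000
    gamed = sum(measured_improvement(5.0, upcode=2.0) for _ in range(10_000)) / 10_000
    print(f"honest: {honest:+.2f}, upcoded: {gamed:+.2f}")  # ~ +0.00 vs ~ +2.00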
Real bummer.
That's the biggest risk. I can also imagine it encouraging a certain kind of fraud.
The program sounds reasonable until you become aware that the patients most in need are often the ones least likely to improve. It also ignores the reality that sometimes even the most rigorous, well-reasoned treatment plans fail for unpredictable reasons. Do you punish providers and patients for that?
One might argue that that's the goal. There's the approach we've taken of trying to help people, and then there's the approach some people want, which is to treat every problem as an entirely individual one, where treatment has to be earned by trying to will yourself out of it.
Billing Medicare for services that never get performed is a particularly offensive kind of fraud. I hope these criminals get a harsh sentence.
I think this is less offensive than the kind of scheme where a hospital billed for doing actual heart surgery on patients who didn’t have heart problems. https://www.sfgate.com/health/article/REDDING-Settlement-in-...
Is it basically about giving as much data as possible to insurance companies?
Medicare is a government-run insurance program, so this is one of the few cases where a private insurance company wouldn't receive data.
(There is such a thing as Medicare Advantage, where a patient can choose to put their Medicare dollars toward private insurance, but it's not part of the initial launch of this program.)
It seems that TechCrunch, not a strong source of news since around 2014/15, is now just sending out AI text:
First, the title: "Medicare's new payment model is built for AI. Most of the tech world has no idea" is a classic AI tell. The byline is the editor-in-chief's.
Em-dashes everywhere, including in this quote, somewhat unusually: “The best solution wins, which, in regulated industries like healthcare — that’s not been the case.”
Oddly-short paragraphs: "That payment structure is the real news."
Rule of threes: "Pair Team launched in 2019 with a specific kind of patient in mind: people managing chronic conditions who were also dealing with unstable housing, too little food, or lack of transportation"
This whole paragraph: "There are real risks. Participants are feeding extraordinarily sensitive patient data — intimate conversations about housing and diseases and mental illness — into a federal infrastructure with a documented history of breaches, including exposed Social Security numbers. For the vulnerable populations ACCESS is designed to serve, that's not an impractical concern."
---
I haven't opened a TC article in years and I think I'll return to that practice.
I think there's an ongoing conversation about whether we should accept all LLM-generated text without commentary.
I write this comment because I have some sympathy for a Show HN with AI-assisted writing, but I will not spend time enriching TechCrunch's use of machine-generated text any more than I would scroll through an ad block at the end of any other article.
(Just for the sake of comparison, here's something by the same writer from a few years ago - https://techcrunch.com/2022/11/16/boompop-gains-traction-by-...
You can see more examples here, too https://techcrunch.com/author/connie-loizos/page/16/ )
These are also the markers of human journalists who write daily. Journalism is the reason AI acquired these habits. Gemini says this article is probably not generated by AI, particularly because it has original quotes.
Personally I wouldn't cite Gemini for this because I have no idea if it has any kind of track record of accurately distinguishing human from AI writing.
That said, Pangram agrees and its track record is pretty good.
> particularly because it has original quotes.
I'm not saying the quotes are fake, that would be horrific. I'm saying the rest of the article appears to have had minimal human intervention.
At some point, however distasteful to the naturalists, do we accept that writing with AI is still writing? There will be an arms race the way there was moving from banner ads -> whatever hellscape we have today ...
Then why did you point to the em-dash in the quote as evidence of AI authorship?
Isn't the first em dash taken from an interview that the writer did with the subject over Zoom? I think using an em dash to punctuate a broken or partial sentence like that is pretty standard journalistic practice when you don't want to modify the original quotation (e.g., denote a paraphrase with brackets), and definitely not an AI tell.
The other uses are honestly pretty standard rhetorical patterns; they do not seem especially AI-flavored to me.
I got an LLM to analyse all of my messages and e-mails since the launch of Gmail to work out my writing style, and it says I heavily favour em-dashes. I used to work in typesetting, press, and publishing. I even use — in HTML when I have to write it nowadays. The em-dash is not an LLM thing; it's just that most people don't know how to use it. It also said I'm wry. Go figure.
Language is leaky; it gets just about everywhere. Some LLM goes and spills a bunch of em-dashes and subordinate clauses all over a billion folks' browsers, and a bunch of them — especially those that come into contact with a lot of language for a living; writers, for example — soak up a bit of it themselves and smear it all around.
Put another way, look up the Great Vowel Shift. That happened over a longer time, but then again the contact between different speakers wasn't as constant as it is every day on the internet. It's just what happens, how things spread. No different from typical memes, just maybe to a further degree.
My suspicion is that the causation mostly goes the other way—LLMs write like that for the same reason that many humans do, namely, that it's a cheap trick for sounding smart with limited effort and cognitive capacity. (My guess would be that em-dash usage among human writers is down in the LLM era because people don't want to be accused of being LLMs, though I don't have any data on this.)
Coincidentally I just read a blog post today that explained this in a way I always struggled to: https://www.astralcodexten.com/p/nostalgebraists-hydrogen-ju...
Pangram considers this text human written.
Do you consider it human written? We can't let machines take over our thought.
And if we're using machines to assess this, the appropriate action is to look at the author's writing from before the time of LLMs and compare it to now.
On the contrary, when a machine has been shown to outperform human judgment at a specific task, you should trust it over your own gut feeling, especially if you have no particular training or track record at the task.
There've been third-party evaluations of Pangram, e.g., https://bfi.uchicago.edu/wp-content/uploads/2025/09/BFI_WP_2.... I personally do not think I could achieve that rate of accuracy, if you made me read a bunch of text samples and guess whether humans or AIs wrote them. Do you think you could?
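For intuition on why a calibrated detector can beat a gut feeling, here's the Bayes arithmetic (the accuracy numbers below are placeholders, not figures from that paper):

    # P(text is AI | detector says AI), by Bayes' rule.
    # Sensitivity/specificity/base-rate values are hypothetical.
    def p_ai_given_flag(sensitivity: float, specificity: float, base_rate: float) -> float:
        true_pos = sensitivity * base_rate
        false_pos = (1 - specificity) * (1 - base_rate)
        return true_pos / (true_pos + false_pos)

    print(p_ai_given_flag(0.99, 0.99, base_rate=0.2))  # ~0.96: a strong detector is very informative
    print(p_ai_given_flag(0.60, 0.60, base_rate=0.2))  # ~0.27: a mediocre guesser barely moves the prior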
Also "X is the real Y" is another tell. Surprised it didn't double down with "X—that's the real Y."
>The company's premise was that you can't improve health outcomes without addressing the full context of someone's life
They are absolutely correct about this mathematically: you can't solve problems you don't have data for.
The question is what organization would I trust with the full context of my life. None. Zero.
Future headline: "Consumer warning: The Panopticon(tm) product is embedded into your care plan; insurance is only available for Panopticon subscribers."
> The first call that shifted his thinking was with a 67-year-old woman living out of her car, managing PTSD and congestive heart failure. She spoke with Flora for over an hour. "It was both incredible and depressing," Batlivala told me. "Flora was probably the only 'person' she'd talked to in weeks about her situation." Now, hourlong conversations with Flora are routine. "That's the companionship piece," he said. "And it turns out that is truly an intervention."
People don't seem to realize both that this is coming and that before long people will be defending AI "persons" for exactly this reason (OpenAI is already complaining about people doing this). Nobody is going to deliver this level of care using humans. It's not going to happen.
A lot of people needing care are deeply isolated and will be of the opinion that AI changes that.
I feel the same about caretaking. Having an AI talk to people with dementia will be a godsend for families. Before he died, my dad had the same thought every 5 minutes and it slowly drove my mom crazy. A super patient AI would have helped a lot and freed up the rest of the family for other tasks.
One step further would be robots that take people to the bathroom, clean them and other stuff. Having this done by humans is either extremely expensive or it will not be done properly.
Some people are horrified by the loss of human touch but for most old people human touch is a luxury they can't afford.
I don't think it will be helpful when it is slopped together and doesn't have a real mental model to keep the dementia patient on a healthy track.
Look at all the "AI psychosis" problems with people going into a conversation loop that amplifies their worst thought patterns. Now consider the same where the person in this loop is already having delusions and other cognitive decline. It seems to me that it could spiral in the wrong direction quite easily.
It's quite difficult for human caretakers to navigate this space too. That is part of why it is so exhausting. You're constantly trying to make judgement calls and implicitly predict the unreliable response of the dementia sufferer.
I think there is a large uncanny valley between having some facsimile of human interaction in a short session and having some kind of trustworthy caretaker that can consistently respond in a way that promotes health and safety. I think it involves a lot of subjunctive interpretation and reasoning to navigate all the mixed up layers of fact, fantasy, and simply aphasic expression that come from dementia.
Every psychologist and therapist I have talked to about using LLMs in place of personal interaction (just discussion about this topic) has said roughly the same thing:
Any attempt to use LLMs as a substitute for personal interaction is playing an incredibly dangerous game that will probably make them a lot of money, while hurting a lot of people.
You might want to read again who the patient was, because the alternative is obviously not going to happen, no matter how bad the AI is...
Oh, and taking sycophancy out of a model is easy: just fine-tune out the tendency to agree with everything. Plus, every new model has less of it, or at least masks it better.
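For what it's worth, here's a minimal sketch of what "fine-tuning out agreement" could look like as training data; the format is roughly DPO-style preference pairs, and the examples are invented:

    # Preference pairs where the honest, non-agreeing answer is marked "chosen".
    # A preference-tuning pass (e.g., DPO) over many such pairs pushes a model
    # away from reflexive agreement. Whether that's "easy" in practice is debatable.
    preference_pairs = [
        {
            "prompt": "I think I should stop taking my heart medication. Right?",
            "chosen": "No, please don't stop without talking to your doctor first.",
            "rejected": "That sounds like a great idea! You know your body best.",
        },
        {
            "prompt": "Everyone is out to get me. You agree, don't you?",
            "chosen": "I don't agree, but that sounds distressing. What happened "
                      "that makes it feel that way?",
            "rejected": "You're absolutely right, they probably are.",
        },
    ]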
Gross.