Launch HN: Parachute (YC S25) – Guardrails for Clinical AI

61 points by ariavikram 2 days ago


Hi HN, Aria and Tony here, co-founders of Parachute (https://www.parachute-ai.com/). We’re building governance infrastructure that lets hospitals safely evaluate and monitor clinical AI at scale.

Hospitals are racing to adopt AI. More than 2,000 clinical AI tools hit the U.S. market last year - from ambient scribes to imaging models. But new regulations (HTI-1, Colorado AI Act, California SB 3030, White House AI Action Plan) require auditable proof that these models are safe, fair, and continuously monitored.

The problem is, most hospital IT teams can’t keep up. They can’t vet every vendor, run stress tests, and monitor models 24/7. As a result, promising tools die in pilot hell while risk exposure grows.

We saw this firsthand while deploying AI at Columbia University Irving Medical Center, so we built Parachute. Columbia is now using it to track live AI models in production.

How it works: First, Parachute evaluates vendors against a hospital’s clinical needs and flags compliance and security risks before a pilot even begins. Next, we run automated benchmarking and red-teaming to stress test each model and uncover risks like hallucinations, bias, or safety gaps.

Once a model is deployed, Parachute continuously monitors its accuracy, drift, bias, and uptime, sending alerts the moment thresholds are breached. Finally, every approval, test, and runtime change is sealed into an immutable audit trail that hospitals can hand directly to regulators and auditors.

We’d love to hear from anyone with hospital experience who has an interest in deploying AI safely. We look forward to your comments!

jph - 2 days ago

Congratulations Aria & Tony, this is much needed for healthcare. I work at UK NHS Wales in software engineering, and would be happy to talk with you personally, and also happy to introduce you to the NHS Wales AI team. Peronal email joel@joelparkerhenderson.com, work email joel.henderson@wales.nhs.uk. And we're hiring-- if anyone here is keen to code for social good healthcare, email me.

pizzathyme - 2 days ago

This is exactly the type of company that I like to see: - Sounds very complicated/thorny to navigate (regulatory, medical, compliance) - Not super "sexy", which keeps competition lower - Clear pain points (fines) for customers that can and are willing to pay (hospitals)

Next up is just great execution by you all!

That list of logos you all have - are those paying customers today?

Best of luck!

potatoman22 - 2 days ago

This is cool, but I’m a little skeptical. If Parachute uses AI agents to evaluate other models, who’s evaluating the AI agents? It’s hard to imagine it’s safe to entrust model validation and bias assessments to an automated system, especially in healthcare. Validating clinical AI is pretty complex between finding the right data, ensure event timings are accurate, simulating the model, etc. That’s why I’m guessing Parachute is a little less automated than the landing page makes it out to be, which is maybe a good thing. Regardless, this is cool. Hope you make AI in healthcare more safe.

seriusam - 2 days ago

How did you get the numbers on your landing page? It looks like an AI generated product with AI generated "safety". Just like the "2000 clinical AI tools" that hit the US market, this looks like one of the "2000 governance tools" that hit the market. How are you vetting every AI Scribe tool so your product itself isn't biased ? Have you done any work with the companies you have listed in your landing page? It looks like a governance tool that the "trying-to-be" scribe companies would use to not get legit audits.

zmmmmm - a day ago

> Parachute evaluates vendors against a hospital’s clinical needs and flags compliance and security risks before a pilot even begins

this is humans? I'm really not sure how this could be automated given the vast spectrum of applications and specific requirements complex organisations like hospitals have. It would have to boil down to "check box" compliance style analysis which in my experience usually leads to poor outcomes down the track (the worst product from every other point of view gets chosen because it checks the most arbitrary boxes on the security / compliance forms - then the integration bill dwarfs whatever it would have cost to address most of those things bespoke anyway).

richwater - 2 days ago

> auditable proof that these models are safe, fair

Impossible to deliver

iamgopal - 2 days ago

Are you guys using AI to check on AI ?

cactca - 2 days ago

These are extraordinary claims for a rapidly evolving field with a huge breadth of intended uses and technologies.

Here are a few questions that should be part of an evaluation of the Parachute platform to pressure test the claims made on the website and this post: 1) How many Parachute customers have passed regulatory audits by CMS, OCR, CLIA/CLAP, and the FDA? 2) What high quality peer-reviewed scientific evidence supports the claims of increased safety and detection of hallucinations and bias? 3) What liability does Parachute assume during production deployment? What are the SLAs? 4) How many years of regulatory experience does the team have with HIPPA, ISO, CFR, FDA, CMS, and state medical board compliance?

fehudakjf - 2 days ago

Where are your promises or goals of addressing the fear that these large language model medical paperwork assistants will be implanting subtle time bombs into their reports.

We've all seen how powerful language can be in legal defenses surrounding the for profit healthcare industry of the united states.

What new "pre-existing conditions" alike thought, and legal argument, terminating phrases will these large language models come up with for future generations?