Tell HN: Don't use Claude Design, lost access to my projects after unsubscribing

184 points by pycassa 5 hours ago


I wanted to try Codex after five months on a Claude Code Max subscription. Then I went back to my previous projects in Claude Design, only to realize I no longer have access to them.

This is a first. I have never lost access to past sessions in any other LLM app just because I unsubscribed.

I actually wanted to try out Codex once before, but had a similar experience with my credits. Claude had so many issues that month that they gave out extra credits, equivalent to my monthly subscription price, with a time limit. As soon as my plan ended, I lost access to the credits. Even after resubscribing, I still don't have access to them.

I have sympathy for the engineers, especially the ones putting themselves out there on X. But things only get sorted out when someone with a large following has an issue.

Having worked at a billing company, I can see how complex contracts sound good to the growth/sales folks but are horrible for the engineers who actually have to implement them. The complex rate limiting that is now the norm, and identifying other harnesses so their usage counts against your quota, are probably not easy to implement without very rough edge cases. But what's problematic is that all the "bugs" just happen to be the ones where the user gets screwed.

I just wanted to post this here to alert other users, after tagging them multiple times on X.

Topfi - 3 hours ago

Your data is still there, and you can get it back easily.

If you export your data [0], all your Claude Design chats are in a design_chats directory along with the code, even if your account currently has no access to Claude Design. The export is .json, but converting that into usable code is easy, either manually or by asking any fairly modern LLM via OpenCode. I just did it myself; it works.

I will say I'd still prefer if they allowed API use of Claude Design. The way follow-up questions have been implemented has some niceties that make it worth it for very narrow UX experimentation, but I can't justify a whole subscription at the moment: for the first time I have been seeing regressions, up to the point of making Opus unusable via Claude Code on the Max subscription, and the new pretrain in GPT-5.5 is very strong for some specific coding use cases. In fairness, its compaction and task adherence can be inferior to GPT-5.4, which did both better than any other model, so my go-to is using each for its specific use cases.
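For anyone who would rather script the conversion than paste JSON into a model, a rough sketch along these lines works. Note that the `messages`/`content` key names are only my guess at the export schema — inspect one exported file first and adjust:

```python
import json
import re
from pathlib import Path

# Fenced blocks like ```html ... ``` embedded in chat message text.
FENCE = re.compile(r"```(\w*)\n(.*?)```", re.DOTALL)

def extract_code_blocks(export_dir: str, out_dir: str) -> list:
    """Walk exported chat .json files and write each fenced code block
    out as its own file. Returns the paths written.

    NOTE: the "messages"/"content" keys below are assumptions about the
    export format, not a documented schema."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for chat_file in Path(export_dir).glob("*.json"):
        chat = json.loads(chat_file.read_text(encoding="utf-8"))
        for i, msg in enumerate(chat.get("messages", [])):
            text = msg.get("content", "")
            if not isinstance(text, str):
                continue
            for j, (lang, body) in enumerate(FENCE.findall(text)):
                path = out / f"{chat_file.stem}_{i}_{j}.{lang or 'txt'}"
                path.write_text(body, encoding="utf-8")
                written.append(path)
    return written

if __name__ == "__main__":
    for p in extract_code_blocks("design_chats", "recovered"):
        print("wrote", p)
```

If the export nests content differently (e.g. a list of content parts instead of a string), the inner loop needs a small tweak, but the fenced-block regex should still do the heavy lifting.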

I don't feel like commenting on every statement here about SaaS and expectations, but some commenters are mistaken, or not considering the law and your rights, when they tell you it is your fault and (at least) imply the data is lost. It can't be; think about it. Otherwise any temporary subscription cancellation, payment processing issue, or bug on Anthropic's part would mean permanent data loss. That would be less than ideal, not least because Anthropic has in the past had trouble processing payments from verifiably covered accounts.

Users in consumer-friendly jurisdictions have the right to export and access their data, including data not exposed via any frontend or API, if it is associated with their account. It doesn't matter whether they pay or not. Of course, manual backups are always preferable; a provider could still lose data. But as long as they have it, at least in my neck of the woods they have to give it to you. As it should be.

To end: I generally try not to comment on or downvote others' comments outside of actual spam and bad faith, but if more than one comment has already told OP they should have exported/backed up, do we really need it repeated?

[0] https://claude.ai/settings/data-privacy-controls

jjcm - 4 hours ago

A lot of these things are built fast and loose, and unfortunately this is the reality of using the bleeding edge. Even Figma went through this kind of thing very early on.

To add something else to the discussion, however: I'd encourage people to skip Claude Design for another reason, namely the inherent limitations of LLMs for visual design. LLMs are blind, and reasoning about spatial relationships is tremendously hard across layers of nested HTML/CSS.

If you're early on, I'd recommend starting with diffusion first. GPT-Image-2 is phenomenal at UI design, and especially if you're just starting out it will let you align on a direction more rapidly than an LLM can. The difficulty will be converting from image to HTML, but you'll be able to explore different directions more cheaply and quickly than you could with Claude Design.

I will note a bias disclaimer here: I quit Figma to work on my own diffusion-based UI design tool. I'm not promoting it here, but wanted to at least share my findings in this space.

goekjclo - 4 hours ago

Can't help but think the Claude team is busy adding gimmicky side features instead of doing 'real' RSI and bug fixing.

lucasgw - 3 hours ago

I have been using Claude Design + Claude Code, and results have been excellent. I have explicit clean-up instructions in Claude Code, and the handoff skill in Claude Design is pretty solid.

I've been on product launches many times, so I can drive the design side appropriately and keep things focused. It has been a wonderful addition to my workflow.

As usual with any agent-driven tool - GIGO. If the human driving has no product experience and is blindly accepting designs, well, that's... a choice.

logickkk1 - 3 hours ago

Tbh, backups matter. But nobody would accept Word deleting your files when you cancel Office. Somewhere along the way we stopped distinguishing backup from custody.

jrumbut - 3 hours ago

I've lost access to plenty of Claude stuff without canceling anything. I am careful not to leave anything important in there and back up regularly.

It's funny because sometimes it will remember stuff that is lost and not be able to reference stuff that is clearly visible.

One area where I find ChatGPT superior (and this is just my own experience) is not losing things, and also respecting project boundaries. Claude projects just seem to be a way to lose things faster; the model seems entirely unaware of projects as a concept.

ruguo - 3 hours ago

It’s pretty outrageous to lock out all your history just for canceling the subscription.

DaryaHr - 32 minutes ago

I'm sorry, that's a horrible situation. Coming from IT companies myself, I hope they will most likely receive your issue at some point and act on it. Unfortunately, with all this fast-paced development and the race after the next shiny thing to win users, companies may sacrifice quality. Only users can make an impact here: as long as we keep buying the new shiny stuff, things like this will keep happening. Investment in quality is expensive.

conception - 4 hours ago

Backup data that’s important to you.

Leonard_of_Q - 3 hours ago

Did you get a warning that your data would be /dev/nulled, with an admonition to download whatever you wanted to keep before unsubscribing? If you did, well... you should'a heeded that warning and made backups, shouldn't you? If you did not get a warning, it would have been more customer-friendly for Anthropic to warn you that your data would disappear after unsubscribing, but I still think you should have made sure to download whatever data you wanted to keep before `throwing the key in the mailbox'. Don't ever trust third parties to care for your data the way you would; keep it somewhere you are sure you can get at, no matter what.

alyxya - 3 hours ago

I also encountered an issue with my credits. I was previously subscribed to the Max plan, claimed credits, then downgraded to the Pro plan and noticed I had lost my credits. I didn't unsubscribe, just downgraded plans, since I wasn't using Claude enough to justify Max.

Havoc - 3 hours ago

>This is a first. I never lost access to any of my past sessions because I unsubscribed in any of the LLM apps.

It's not entirely unprecedented; I've seen these tactics in the Google ecosystem. Google Music: unsubscribing killed (kills?) access to viewing your playlists, which of course you only learn once it's done. Give them a credit card again and you can see and export them again. Magic!

Resubscribed for one month, exported everything, unsubscribed, and swore never to trust Google Music again. I don't know why they implement patterns like that: sure, you extorted $10 in cash out of me, but it makes the brand toxic. There is no way that decision has a net positive future value. Hell, it even got them a pissed-off HN post years later.

Animats - 4 hours ago

When you lose access to your projects, does Anthropic acquire the intellectual property? It's a real issue when your work is in a machine learning system rather than passive storage like GitHub.

coder97 - 2 hours ago

Looks like I need to do a daily backup of the artifacts generated.
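A minimal sketch of such a daily backup, assuming the artifacts get saved to a local directory (the paths here are placeholder examples, not anything Claude creates) and that a scheduler like cron calls `snapshot()` once a day:

```python
import tarfile
import time
from pathlib import Path

def snapshot(src: str, dest: str, keep: int = 30) -> Path:
    """Write a timestamped .tar.gz of `src` into `dest`, then prune
    old archives so only the newest `keep` snapshots remain."""
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y-%m-%d-%H%M%S")
    archive = dest_dir / f"artifacts-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=".")  # archive src's contents, not its path
    # Timestamped names sort chronologically, so plain sort works.
    for old in sorted(dest_dir.glob("artifacts-*.tar.gz"))[:-keep]:
        old.unlink()
    return archive

if __name__ == "__main__":
    # Placeholder paths; point src at wherever you save your artifacts.
    snapshot(str(Path.home() / "claude-artifacts"),
             str(Path.home() / "backups"))
```

A cron entry like `0 3 * * * python3 /path/to/backup.py` would run it nightly; the same function works from Task Scheduler on Windows.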

parliament32 - 4 hours ago

So... you unsubscribed from a SaaS and expected them not to purge your data? Why would that make sense?

Anthropic may be a bunch of skids but it sounds like they did the right thing here. Pretty much all SaaS applications, especially in B2B, are required by compliance to remove customer data within X amount of time at the end of the contractual relationship.

zuzululu - 4 hours ago

Not your server, not your data.

Uptrenda - 4 hours ago

Aside from OP's post, there's another issue with Claude Design worth mentioning. Yes, it makes absolutely beautiful designs, stunningly so, but the actual code is not something a human could ever maintain. You end up with an opaque blob: write-once, read-never, almost disposable code. That's bad, because code people aren't going to bother reading can contain vulnerabilities.

It's an extreme example of slop code: while LLMs normally produce code ranging from somewhat okay to utter garbage, the web code Claude makes is awful. On the other hand, you do get a single file (even if it is full of 20+ embedded SVGs, JavaScript, and other such things).

comboy - 4 hours ago

Sorry, but that one is on you. This sounds like expected behavior, and I wouldn't blame any company for doing it.

wiseowise - 4 hours ago

And AI hypers suggest building your whole career/identity on this shit. I can already foresee the "skill issue" and "well, you should've done x, y, z, obviously" replies.