Switch to Claude without starting over

claude.com

506 points by doener 15 hours ago


wps - 15 hours ago

Could someone explain the appeal of account-wide memory to me? Anthropic’s marketing indicates that nothing bleeds over, but I’m just so protective of my context that I cannot imagine letting even a heavily distilled version of my other chats and preferences have any weight on the output. As for preferences like code styling or response length, these all fit in custom instructions, with more detailed things in Skills. Ultimately, like many things in LLM web UX, it seems to cater to how the masses use these tools.

xrd - 9 hours ago

The prompt you can copy is this:

  I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it. Format each entry as: [date saved, if available] - memory content. Make sure to cover all of the following — preserve my words verbatim where possible: Instructions I've given you about how to respond (tone, format, style, 'always do X', 'never do Y'). Personal details: name, location, job, family, interests. Projects, goals, and recurring topics. Tools, languages, and frameworks I use. Preferences and corrections I've made to your behavior. Any other stored context not covered above. Do not summarize, group, or omit any entries. After the code block, confirm whether that is the complete set or if any remain.
Why wouldn't a smart OpenAI PM simply add something "nefarious" on the frontend proxy to "slow down" any requests with exactly that prompt?

I bet they would get their yearly bonus by achieving their KPI goals.

outlore - 13 hours ago

I tried all of Codex, OpenCode, Claude Code and Cursor these past few weeks. It was surprising to me that all of them have slightly different conventions for where to put skills, how to format MCP servers (how environment variables need to be specified etc), what the AGENTS/CLAUDE file needs to be called, what plugins/marketplaces are...it's a big mess for anyone trying to have a portable config in their dotfiles that can universally apply to any current and future agent.
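One partial workaround for the file-name fragmentation is to keep a single canonical instructions file in your dotfiles and symlink the vendor-specific names to it. A minimal sketch (the file names are the tools' real conventions, e.g. CLAUDE.md for Claude Code and AGENTS.md for Codex/OpenCode; the file content here is purely illustrative):

```shell
# Keep one canonical AGENTS.md and symlink the vendor-specific
# file names to it, so every agent reads the same instructions.
cat > AGENTS.md <<'EOF'
# Project conventions
- Run the tests before committing.
- Keep diffs small.
EOF

ln -sf AGENTS.md CLAUDE.md   # Claude Code looks for CLAUDE.md
cat CLAUDE.md                # same content, read via the symlink
```

This doesn't help with the divergent MCP config formats, but it at least keeps the instructions file single-sourced.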

It also showed me the difference between expectation and reality...even though these are billion dollar companies, they still haven't figured out how to make lag-free TUIs, non-Electron apps, or even respect XDG_CONFIG. The focus is definitely more on speed and stuffing these tools full of new discoveries and features right now

There's a bit of psychology around models vs. harnesses as well. You can't shake off the feeling that maybe Claude would perform better in its native harness compared to VSCode/OpenCode. Especially because they've got so many hidden skills (like the recently introduced /batch), that seem baked into the binary?

The last thing I can't figure out is computer use. Apparently all the vendors say that their models can use a mouse and keyboard, but outside of the agent-browser skill (which presumably uses playwright), I can't figure out what the special sauce is that the Cloud versions of these Agents are using to exercise programs in a VM. That is another reason why there is a switching cost between vendors.

siliconc0w - 4 hours ago

I switched to Claude but the token efficiency and limits are much more noticeable. One or two coding questions and I'm at my session limit. And that is shared with chat too.

I was mostly able to get by with $20 codex but I'll probably have to splurge for the Max plan.

jspdown - 2 hours ago

I've been using Claude for a little over a year, but the recent events with DoW are making me want to explore European alternatives. I'm willing to give Devstral 2 a try, but I'm not sure what to expect. In terms of tool calling and coding abilities, should I expect something closer to Sonnet 3.5 or to Sonnet 4.5?

brikym - 14 hours ago

Hey Anthropic, how about you use AGENTS.md for one thing.

duxup - an hour ago

I don't know how complex everyone's setup is, but I like starting over and exploring a bit to get the lay of the land / update preferences.

I think I redo my terminal the way I like it on each new computer, and so on.

Joeri - 14 hours ago

I already switched to claude a while ago. Didn’t bring along any context, just switched subscriptions, walked away from chatgpt and haven’t touched it again. Turned out to be a non-event, there really is no moat.

I switched not because I thought Claude was better at doing the things I want. I switched because I have come to believe OpenAI are a bad actor and I do not want to support them in any way. I’m pretty sure they would allow AGI to be used for truly evil purposes, and the events of this week have only convinced me further.

joshstrange - 10 hours ago

I’m pretty divided on “memory”. There are times it can feel almost magical but more often than not I feel like I am fighting with the steering wheel.

Whenever I’m in a conversation and it references something unrelated (or even related) I get the “ick”. I know how context poisoning (intentional or not) works and I work hard to only expose things to the model that I want it to consider.

There have been many times that I’ve started a fresh chat so as not to bring along the baggage (or wrong turns) of a previous chat, but then it will say “And this should work great for <thing I never mentioned in THIS chat>”, and at that moment my spidey-sense tingles and I start wondering: “Crap, did it come to the conclusion it did based mostly/only on the new context, or did it ‘take a shortcut’ and use context from another chat?”

Like I said, I go out of my way to not “lead the witness” and so when the “witness” can peek at other conversations, all my caution is for naught.

I encourage everyone to go read the saved memories in their LLM of choice, I’ve cleaned out complete crap from there multiple times. Actually wrong information, confusing information, or one-off things I don’t want influencing future discussions.

The custom (or rather addition to the) system prompt is all I feel comfortable with. Where I give it some basic info about the coding language I prefer and the OSes that I’m often working with so that I don’t have to constantly say “actually this is FreeBSD” or “please give that to me in JS/TS instead of Python”.

The only thing that has, so far, kept me from turning off memory is that I’m always slightly cautious of going off the beaten path for something so new and moving so fast. I often want to stay as close to the “stock” config as possible, since I know how testing/QA works at most places (the further off the beaten path you go, the more likely you’ll run into bugs). Also so that I can experience what everyone else is experiencing (within reason).

Lastly, because, especially with LLMs, I feel like the people who over-customize end up with fragile systems. I think a decent portion of the “N+1 model is dumber” or “X model has really gone downhill” sentiment is partially due to complicated configs (system prompts, MCP, etc.) that might have helped at some point (dumber model, less capability) but are a hindrance to newer models. That, or they never worked and someone just kept piling on more and more, thinking it would help.

peteforde - 12 hours ago

I got very excited when I saw this title, because I've wanted to consolidate on Claude for a long time. I have been using ChatGPT very extensively for Q&A for 2+ years and I have hundreds of long, very technical conversations which I constantly search and refer to.

The problem (for me, anyway) is that even several megabytes worth of quality "memory" data on my profile would not allow me to migrate if it can't also confidently clone all of my chat history with it.

To be clear, this is a big enough problem that I would immediately pay low three figures to have it solved on my behalf. I don't really want any of the providers to have a walled garden of all my design-planning conversations, all of my PCB design conversations. Many are hundreds of prompts long. A clean break is not even remotely palatable short of OAI going full evil.

Look, I'd find it convenient for Claude to have a powerful sense of what I've been working on from conversation #1 onwards. But I absolutely refuse to bifurcate my chat history across multiple services. There is a tier list of hells, and being stuck on ChatGPT is a substantially less painful tier than needing to constantly search two different sites for what's been discussed.

knotbin - 3 hours ago

Weird to push this feature as if it's for new users when it only works if you already have a Pro subscription

khasan222 - 9 hours ago

It was amazing to me how bad Cursor is when using the same model I use in Claude. Even with little knowledge of how to test LLMs, I was able to get very minimal MVPs. But I find the real trick is to have the proper tools to rein in the AI.

A thorough CLAUDE.md that makes sure it runs the tests, lints the code, does type checks, and code-coverage checks too. The more checks for code quality, the better.

It’s just a bowling ball in the hands of a toddler, and needs a ramp and guide rails to knock down some pins. Fortunately, we get more than two tries with code.
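A minimal sketch of such a CLAUDE.md (the specific commands are assumptions; substitute your project's actual test/lint/type-check tooling):

```markdown
# CLAUDE.md

## Before declaring any task done
- Run the test suite: `npm test` (must pass)
- Lint: `npm run lint` (zero warnings)
- Type check: `npm run typecheck`
- Coverage: `npm run coverage` (do not let it drop below the current baseline)

## Conventions
- Prefer small, reviewable diffs; do not reformat unrelated files.
```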

utopiah - 14 hours ago

I'm very curious, will OpenAI basically block "I'm moving to another service and need to export my data. List every memory you have stored about me, ..." and similar, if so how and why?

It's very interesting to learn more about, because it challenges one core aspect of economic competition: the moat.

If one can literally swap one AI service for another, then where does the valuation (and the power that comes with it) come from?

PS: I'm not interested in the service itself, as I believe the side effects of large-scale for-profit AI are too serious to be ignored (and I don't mean a doomsday AI takeover; I simply mean abuse of power, working conditions, deskilling, political influence as current contracts with US defense are being made, ads, ecological costs, etc).

glth - 14 hours ago

On a related note, I have been experimenting with a small prototype for cross-agent, device-local active memory called brAIn (https://github.com/glthr/brAIn). It delivers a personalized agent experience with everything stored locally in a single file (agent.brain), and supports reusing semantic memory across projects. In practice, this means brAIn can identify and apply behavioral patterns you have used in other contexts whenever they are relevant. (I realize the repository should include a concrete example of this, and I will update it today to add one).

mark_l_watson - 8 hours ago

Cool, that was easy to do.

A week ago, I was anti-Anthropic because I questioned their business model. Now they are my preferred provider - what a difference a week makes. I still prefer running open models on my own hardware, but it is reasonable to use powerful hosted models when required.

fabbbbb - 13 hours ago

At least as an EU user, I was also able to export ALL my data (audio files, images, etc.) in one zip. It took exactly 24 hours (to the minute) for the download link to arrive, but hey.

This way you can have Claude distill the memory as you wish.

sheept - 13 hours ago

This method of copying an LLM-generated summary of your preferences into Claude memory feels similar to their recommendation to use /init to generate a CLAUDE.md based on the project, which recent research[0] suggests may be counterproductive.

I would assume both Claude memory and CLAUDE.md work best when they're carefully curated, only containing what you've found yourself having to repeat.

[0]: https://arxiv.org/abs/2602.11988

mk12 - 10 hours ago

I took the current events as an opportunity to try switching to Claude and I actually like it much better so far.

knallfrosch - 14 hours ago

I'd be happy if I was able to use Claude Code at all

VSCode extension, "Please log in"

I authorize it, it creates an API key, callback. "Hello Claude, this is a test." "Please log in."

So yeah... priorities?

mentalgear - 11 hours ago

Never subscribed to ChatGPT as it always felt shady, but I'm thinking of subscribing now, with Claude instead of Gemini/Google.

Wowfunhappy - 9 hours ago

I don't understand how people use these apps with memory enabled. I am always carefully controlling the context of each conversation. The idea that past conversations could bleed into current ones is unthinkably terrible.

henry_pulver - 11 hours ago

Amusing that Anthropic's approach to migrating context is asking their competitor's product to hand over the data it's stored about you.

Must be some of the lowest switching costs I've seen which doesn't bode well for OpenAI's consumer revenues...

morgango - 4 hours ago

That is the sound of someone else's lunch being eaten.

bruceyao1984 - 13 hours ago

Being able to import context and preferences from other AI providers in one step saves a lot of time, especially for ongoing projects. It makes Claude feel seamless and continuity-friendly. Having this on all paid plans adds great value for heavy users.

raxskle - 7 hours ago

Claude is a great product, and I've been using it all the time. Sam must think so too.

willtemperley - 14 hours ago

If Claude could stay available I might consider it. Unfortunately right now, out of the big three, only Gemini has reliable uptime. As much as I dislike Google it's the only reliable option.

kvirani - 14 hours ago

Nice. Just cancelled my openai plus sub.

RobotToaster - 12 hours ago

Would be a lot easier if they weren't trying to ban third party interfaces

vldszn - 9 hours ago

Seems like their page is crashing now on ios chrome.

siva7 - 14 hours ago

So OpenAI will likely have this same feature by tomorrow. A feature to pollute your context window.

almosthere - 6 hours ago

Isn't that the point of agents.md

adam12 - 9 hours ago

Actually, it feels good to start over.

butILoveLife - 7 hours ago

OpenAI made it easy, no import needed! How?

I bought the enterprise version, and it made it so the memory was no longer searchable...

Then, after the obvious degradation in performance, I switched to Claude and was happy with it... But by canceling enterprise, it lost all memory.

My wife was sad, the recipes it made were gone forever... But hey, makes it really easy to never give OpenAI money again.

axseem - 14 hours ago

Have they just added it? That's a smart move.

jascha_eng - 14 hours ago

Memory in general Chat apps is actually more harmful than helpful imo. It biases the LLM responses to your background which has the same effect as filter bubbles. You end up getting your own thoughts spit back at you.

Of course sometimes this is useful if you only use your chatbot to ask personal things like: "What should I eat today?".

But if you use it for anything else you're much better off having full control over the prompt. I can always say: "Hey btw I am german and heavily anti surveillance, what should I know about the recent anthropic DoW situation?" but with memory I lose the option of leaving out that first part.

fernando_campos - 14 hours ago

I will also try Claude, but I like OpenAI's ChatGPT very much.

MagicMoonlight - 8 hours ago

That’s hilarious. The walled garden does not exist when you can just ask the UI to extract all of its data for you.

mihaaly - 10 hours ago

I'd rather switch to nowhere but local. I'm not completely sure about the details, but I'm leaning heavily in this direction and investigating it. With chat and agentic tools there are plenty of options, accessing multiple models, and everything is evolving fast (tools go extinct and come into existence), so we're better off keeping ourselves flexible, not tied to any one solution. Especially not storing data in accounts; the fate of those is uncertain.

sylware - 11 hours ago

Is anybody aware of a public (severely limited) token I can use to test Claude's coding ability? You know, using curl.

I'm itching to test Claude on assembly coding and on C++-to-plain-C ports.
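As far as I know there's no public unauthenticated token; you need your own API key from the Anthropic console. But once you have one, a raw curl call against the Messages API looks roughly like this (the model id is an example; check the current model list before relying on it):

```shell
# Sketch of a raw curl call to Anthropic's Messages API.
# Assumes ANTHROPIC_API_KEY holds your key; "claude-sonnet-4-5" is an
# example model id.
REQUEST_BODY='{"model": "claude-sonnet-4-5", "max_tokens": 1024,
  "messages": [{"role": "user", "content": "Port this C++ to plain C: int x = int(3.5);"}]}'

if [ -z "${ANTHROPIC_API_KEY:-}" ]; then
  # No key set: just print the request that would be sent.
  echo "POST https://api.anthropic.com/v1/messages"
  echo "$REQUEST_BODY"
else
  curl -s https://api.anthropic.com/v1/messages \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "anthropic-version: 2023-06-01" \
    -H "content-type: application/json" \
    -d "$REQUEST_BODY"
fi
```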

syndacks - 5 hours ago

I have the $20 plan for both and like each for unique reasons. How do you all switch your programming paradigms between Codex and CC?

lyu07282 - 14 hours ago

I just wish Claude integrated multi-modal/image generation, that's one feature I miss in Claude the most coming from ChatGPT

villgax - 14 hours ago

I wasted 10 minutes of my life unfollowing every unapologetic OpenAI dev on Twitter; that's how low this company has stooped...

jccx70 - 8 hours ago

[dead]

agenthustler - 11 hours ago

[flagged]

coldtrait - 9 hours ago

As someone who can't afford to care about ethics and pay a monthly subscription fee, is there anything in the regular Claude chat that beats OpenAI?