Zedless: Zed fork focused on privacy and being local-first
github.com | 427 points by homebrewer 10 hours ago
I'm glad to see this. I'm happy to plan to pay for Zed - it's not there yet, but it's well on its way - but I don't want essentially _any_ of the AI and telemetry features.
The fact of the matter is, I am not even using AI features much in my editor anymore. I've tried Copilot and friends over and over and it's just not _there_. It needs to be in a different location in the software development pipeline (Probably code reviews and RAG'ing up for documentation).
- I can kick out some money for a settings sync service.
- I can kick out some money to essentially "subscribe" for maintenance.
I don't personally think that an editor is going to return the kinds of ROI VCs look for. So.... yeah. I might be back to Emacs in a year with IntelliJ for powerful IDE needs....
I'm happy to finally see this take. I've been feeling pretty left out with everyone singing the praises of AI-assisted editors while I struggle to understand the hype. I've tried a few and it's never felt like an improvement to my workflow. At least for my team, the actual writing of code has never been the problem or bottleneck. Getting code reviewed by someone else in a timely manner has been a problem though, so we're considering AI code reviews to at least take some burden out of the process.
AI code reviews are the worst place to introduce AI, in my experience. They can find a few things quickly, but they can also send people down unnecessary paths or be easily persuaded by comments or even the slightest pushback from someone. They're fast to cave in and agree with any input.
It can also encourage laziness: If the AI reviewer didn't spot anything, it's easier to justify skimming the commit. Everyone says they won't do it, but it happens.
For anything AI related, having manual human review as the final step is key.
Agreed.
LLMs are fundamentally text generators, not verifiers.
They might spot some typos and stylistic discrepancies based on their corpus, but they do not reason. It’s just not what the basic building blocks of the architecture do.
In my experience you need to do a lot of coaxing and setting up guardrails to keep them even roughly on track. (And maybe the LLM companies will build this into the products they sell, but it’s demonstrably not there today)
> LLMs are fundamentally text generators, not verifiers.
In reality they work quite well for text and numeric (via tools) analysis, too. I've found them to be powerful tools for "linting" a codebase against adequately documented standards and architectural guidance, especially when given the use of type checkers, static analysis tools, etc.
The value of an analysis is the decision that will be taken after getting the result. So will you actually fix the codebase, or is it just a nice report to frame and put on the wall?
> So will you actually fix the codebase…
Code quality improvement is the reason to do it, so *yes*. Of course, anyone using AI for analysis is probably leveraging AI for the "fix" part too (or at least I am).
That's a fantastic counterpoint. I've found AI reviewers to be useful on a first pass, at a small-pieces level. But I hear your opinion!
I find the summary that Copilot generates is more useful than the review comments most of the time. That said, I have seen it make some good catches. It's a matter of expectations: the AI is not going to have hurt feelings if you reject all its suggestions, so I feel even more free to reject its feedback with the briefest of dismissals.
IMO, the AI bits are the least interesting parts of Zed. I hardly use them. For me, Zed is a blazing fast, lightweight editor with a large community supporting plugins and themes and all that. It's not exactly Sublime Text, but to me it's the nearest spiritual successor while being fully GPL'ed Free Software.
I don't mind the AI stuff. It's been nice when I used it, but I have a different workflow for those things right now. But all the stuff besides AI? It's freaking great.
> while being fully GPL'ed Free Software
I wouldn't sing their praises for being FOSS. All contributions are signed away under their CLA, which will allow them to pull the plug when their VCs come knocking and the FOSS angle is no longer convenient.
How is this true if it’s actually GPL as gp claimed?
The CLA assigns ownership of your contributions to the Zed team[^0]. When you own software, you can release it under whatever license you want. If I hold a GPL license to a copy, I have that license to that copy forever, and it permits me to do all the GPL things with it, but new copies and new versions you distribute are whatever you want them to be. For example Redis relicensed, prompting the community to fork the last open-source version as Valkey.
The way it otherwise works without a CLA is that you own the code you contributed to your repo, and I own the code I contributed to your repo, and since your code is open-source licensed to me, that gives me the ability to modify it and send you my changes, and since my code is open-source licensed to you, that gives you the ability to incorporate it into your repo. The list of copyright owners of an open source repo without a CLA is the list of committers. You couldn't relicense that because it includes my code and I didn't give you permission to. But a CLA makes my contribution your code, not my code.
[^0]: In this case, not literally. You instead grant them a proprietary free license, satisfying the 'because I didn't give you permission' part more directly.
Because when you sign away copyright, the software can be relicensed and taken closed source for all future improvements. Sure, people can still use the last open version, maybe fork it to try to keep going, but that simply doesn’t work out most times. I refuse to contribute to any project that requires me to give them copyright instead of contributing under copyleft; it’s just free contractors until the VCs come along and want to get their returns.
The FSF also typically requires a copyright assignment for their GPL code. Nobody thinks that they’ll ever relicense Emacs, though.
It has been decades since I've seen an FSF CLA packet, but if I recall correctly, the FSF also made legally-binding promises back to the original copyright holder, promising to distribute the code under some kind of "free" (libre, not gratuit) license in the future. This would have allowed them to switch from GPL 2 to GPL 3, or even to an MIT license. But it wouldn't have allowed them to make the software proprietary.
But like I said, it has been decades since I've seen any of their paperwork, and memory is fallible.
Yeah, I don't mind signing a CLA for copyleft software with a non-profit org, but I do mind with a for-profit one.
In my opinion, it's not. They could start licensing all new code under a non-FOSS license tomorrow and we'd still have the GPL'ed Zed as it is today. The same is true for any project, CLA or not.
why not just use sublime text?
I found the OP comment amusing because Emacs with a Jetbrains IDE when I need it is exactly my setup. The only thing I've found AI to be consistently good for is spitting out boring boilerplate so I can do the fun parts myself.
Highlighting code and having cursor show the recommended changes and make them for me with one click is just a time saver over me copying and pasting back and forth to an external chat window. I don’t find the autocomplete particularly useful, but the inbuilt chat is a useful feature honestly.
I'm the opposite. I held out this view for a long, long time. About two months ago, I gave Zed's agentic sidebar a try.
I'm blown away.
I'm a very senior engineer. I have extremely high standards. I know a lot of technologies top to bottom. And I have immediately found it insanely helpful.
There are a few hugely valuable use-cases for me. The first is writing tests. Agentic AI right now is shockingly good at figuring out what your code should be doing and writing tests that test the behavior, all the verbose and annoying edge cases, and even find bugs in your implementation. It's goddamn near magic. That's not to say they're perfect, sometimes they do get confused and assume your implementation is correct when the test doesn't pass. Sometimes they do misunderstand. But the overall improvement for me has been enormous. They also generally write good tests. Refactoring never breaks the tests they've written unless an actually-visible behavior change has happened.
Second is trying to figure out the answer to really thorny problems. I'm extremely good at doing this, but agentic AI has made me faster. It can prototype approaches that I want to try faster than I can and we can see if the approach works extremely quickly. I might not use the code it wrote, but the ability to rapidly give four or five alternatives a go versus the one or two I would personally have time for is massively helpful. I've even had them find approaches I never would have considered that ended up being my clear favorite. They're not always better than me at choosing which one to go with (I often ask for their summarized recommendations), but the sheer speed in which they get them done is a godsend.
Finding the source of tricky bugs is one more case that they excel in. I can do this work too, but again, they're faster. They'll write multiple tests with debugging output that leads to the answer in barely more time than it takes to just run the tests. A bug that might take me an hour to track down can take them five minutes. Even for a really hard one, I can set them on the task while I go make coffee or take the dog for a walk. They'll figure it out while I'm gone.
Lastly, when I have some spare time, I love asking them what areas of a code base could use some love and what are the biggest reward-to-effort ratio wins. They are great at finding those places and helping me constantly make things just a little bit better, one place at a time.
Overall, it's like having an extremely eager and prolific junior assistant with an encyclopedic brain. You have to give them guidance, you have to take some of their work with a grain of salt, but used correctly they're insanely productive. And as a bonus, unlike a real human, you don't ever have to feel guilty about throwing away their work if it doesn't make the grade.
> Agentic AI right now is shockingly good at figuring out what your code should be doing and writing tests that test the behavior, all the verbose and annoying edge cases,
That's a red flag for me. Having a lot of tests usually means that your domain is fully known, so you can specify it fully with tests. But in a lot of settings, the domain is a bunch of business rules that product decides on the fly. So you need to be pragmatic and only write tests against valuable workflows, or find yourself changing a line and having 100+ tests break.
If you can write tests fast enough, you can specify those business rules on the fly. The ideal case is that tests always reflect current business rules. Usually that may be infeasible because of the speed at which those rules change, but I’ve had a similar experience of AI just getting tests right, and even better, getting tests verifiably right because the tests are so easy to read through myself. That makes it way easier to change tests rapidly.
This also is ignoring that ideally business logic is implemented as a combination of smaller, stabler components that can be independently unit tested.
Unit tests' value shows mostly when integration and more general tests are failing, so you can filter out some sections in the culprit list (you don't want to spend days specifying the headlights if the electrical design is wrong or the car can't start).
Having a lot of tests is great until you need to refactor them. I would rather have a few e2e tests for smoke testing and valuable workflows, integration tests for business rules, and unit tests when it actually matters. As long as I can change implementation details without touching the tests that much.
Code is a liability. Unless it's code you don't have to deal with (assembly, compiler output), reducing the amount of code is a good strategy.
AI is solid for kicking off learning a language or framework you've never touched before.
But in my day to day I'm just writing pure Go, highly concurrent and performance-sensitive distributed systems, and AI is just so wrong on everything that actually matters that I have stopped using it.
But so is a good book. And it costs way less. Even though searching may be quicker, having a good digest of a feature is worth the half hour I can spend browsing a chapter. It's directly picking an expert's brain. Then you take notes, compare what you found online with the updated documentation, and soon you develop a real understanding of the language/tool abstraction.
In an ideal world, yeah. But most software instructional docs and books are hot garbage, out of date, incorrect, incomplete, and far too shallow.
Are you reading all the books on the market? You can find some good recommendation lists. No need to get every new release from Packtpub.
I knew you were going to jab at Packt XD. I have yet to find a good book from Packt, though one may exist. My favorite publishers are Manning and No Starch Press.
I’m using Go to build a high performance data migration pipeline for a big migration we’re about to do. I haven’t touched Go in about 10 years, so AI was helpful getting started.
But now that I've been using it for a while, it's absolutely terrible with anything that deals with concurrency. It's so bad that I've stopped using it for any code generation and am going to completely disable autocomplete.
AI has stale knowledge, so I won't use it for learning, especially because it's biased towards the low-quality JS repos it has been trained on.
A good example would be Prometheus, particularly PromQL, for which the docs are ridiculously bare, but there is a ton of material and Stack Overflow answers scattered all over the internet.
Zed was just a fast and simple replacement for Atom (R.I.P.) or VS Code. Then they put AI on top when that showed up. I don't care for it, and appreciate a project like this to return the program to its core.
You can opt out of AI features in Zed [0].
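For reference, a minimal sketch of what that opt-out can look like in Zed's settings.json. The exact keys are version-dependent ("disable_ai" is a relatively recent addition), so treat this as an assumption and check the current docs:

    {
      // Assumed global switch that turns off all AI features in recent Zed builds
      "disable_ai": true,
      // Telemetry opt-out
      "telemetry": {
        "diagnostics": false,
        "metrics": false
      }
    }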
Opt-out instead of opt-in is an anti-feature.
You can leave LLM Q&A on the table if you like, but tab auto complete is a godlike power.
I'm auto-completing crazy complex Rust match branches for record transformation. 30 lines of code, hitting dozens of fields and mutations, all with a single keystroke. And then it knows where my next edit will be.
I've been programming for decades and I love this. It's easily a 30-50% efficiency gain when plumbing fields or refactoring.
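To make concrete what that kind of field-plumbing looks like, here is a hypothetical sketch (the record types, fields, and mapping are all invented for illustration) of the repetitive match-branch transformation code that tab completion tends to predict well:

    // Hypothetical legacy and new record types; every name here is invented.
    struct LegacyUser {
        id: u64,
        full_name: String,
        plan: LegacyPlan,
    }

    enum LegacyPlan {
        Free,
        Paid { seats: u32 },
    }

    struct User {
        id: u64,
        display_name: String,
        tier: Tier,
    }

    enum Tier {
        Basic,
        Team { seats: u32 },
    }

    // The repetitive "plumb every field across" transformation: once a couple of
    // branches exist, completion can usually fill in the remaining ones.
    fn migrate(old: LegacyUser) -> User {
        User {
            id: old.id,
            display_name: old.full_name.trim().to_owned(),
            tier: match old.plan {
                LegacyPlan::Free => Tier::Basic,
                LegacyPlan::Paid { seats } => Tier::Team { seats },
            },
        }
    }

    fn main() {
        let migrated = migrate(LegacyUser {
            id: 1,
            full_name: "  Ada Lovelace ".to_string(),
            plan: LegacyPlan::Paid { seats: 5 },
        });
        println!("{}: {}", migrated.id, migrated.display_name);
    }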
Can't you just not use / disable AI and telemetry? It's not shoved in your face.
I would prefer an off-by-default telemetry, but if there's a simple opt-out, that's fine?
Well said, Zed could be great if they just stopped with the AI stuff and focused on text editing.
I think you and I are having very different experiences with these copilot/agents. So I have questions for you: how do you:
- generate new modules/classes in your projects
- integrate module A into module B or entire codebase A into codebase B?
- get someone's GitHub project up and running on your machine? Do you manually fiddle with cmakes and npms?
- convert an idea or plan.md or a paper into working code?
- Fix flakes, fix test<->code discrepancies or increase coverage etc
If you do all this manually, why?
> generate new modules/classes in your projects
If it's formulaic enough, I will use the editor templates/snippets generator. Or write a code generator (if it involves a bunch of files). If it's not, I probably have another class I can copy and strip out (especially in UI and CRUD).
> integrate module A into module B
If it cannot be done easily, that's a sign of a less-than-optimal API.
> entire codebase A into codebase B
Is that a real need?
> get someone's GitHub project up and running on your machine? Do you manually fiddle with cmakes and npms
If the person can't be bothered to provide proper documentation, why should I run the project? But in practice, I will look into AUR (Arch Linux) and Homebrew formulas if someone has already done the first job of figuring out dependency versions. If there's a Dockerfile, I will use that instead.
> convert an idea or plan.md or a paper into working code?
Iteratively. First have a hello world or something working, then mow down the task list.
> Fix flakes, fix test<->code discrepancies or increase coverage etc
Either the test is wrong or the code is wrong. Figure out which and rework it. The figuring part always takes longer, as you will need to ask around.
> If you do all this manually, why?
Because when something happens in prod, you really don't want that feeling of being the last one to have interacted with that part while having no idea what has changed.
To me, using AI to convert an idea or paper into working code is outsourcing the only enjoyable part of programming to a machine. Do we not appreciate problem solving anymore? Wild times.
i'm an undergrad, so when i need to implement a paper, the idea is that i'm supposed to learn something from implementing it. i feel fortunate in that ai is not yet effective enough to let me be lazy and skip that process, lol
When I was younger, we all had to memorize phone numbers. I still remember those numbers (even the defunct ones) but I haven't learned a single new number since getting a cellphone.
When I was younger, I had to memorize how to drive to work/the grocery store/new jersey. I still remember those routes but I haven't learned a single new route since getting a smartphone.
Are we ready to stop learning as programmers? I certainly am not and it sounds like you aren't either. I'll let myself plateau when I retire or move into management. Until then, every night debugging and experimenting has been building upon every previous night debugging and experimenting, ceaselessly progressing towards mastery.
I can largely relate... that said, I rarely rely on my phone for remembering routes to places I've been before. It does help that I've lived in different areas of my city and suburbs (Phoenix) so I'm generally familiar with most of the main streets, even if I haven't lived on a given side of town in decades.
The worst is when I get inclined to go to a specific restaurant I haven't been to in years and it's completely gone. I've started to look online to confirm before driving half an hour or more.
I noticed this also, and ever since, I've made it a point to always have memorized my SO's number and my best friend's number.
Drawing blueprints is more enjoyable than putting up drywall.
The code is the blueprint.
“The final goal of any engineering activity is some type of documentation. When a design effort is complete, the design documentation is turned over to the manufacturing team. This is a completely different group with completely different skills from the design team. If the design documents truly represent a complete design, the manufacturing team can proceed to build the product. In fact, they can proceed to build lots of the product, all without any further intervention of the designers. After reviewing the software development life cycle as I understood it, I concluded that the only software documentation that actually seems to satisfy the criteria of an engineering design is the source code listings.” - Jack Reeves
Depends. If I am converting it to then use it in my project, I don't care who writes it, as long as it works.
*Outsourcing to a parrot on steroids which will make mistakes, produce stale, ugly UI with 100px border radius, 50px padding, and rainbow hipster shadows, write code biased towards low-quality training data, and so on. It's the perfect recipe for disaster.
Over the top humor duly acknowledged.
Disastrous? Quite possibly, but my concerns are different ones.
Almost everything changes, so isn’t it better to rephrase these statements as metrics to avoid fixating on one snapshot in an evolving world?
As the metrics get better, what happens? Do you still have objections? What objections remain as AI capabilities get better and better without limit? The growth might be slow or irregular, but there are many scenarios where AIs reach the bar where they are better at almost all knowledge work.
Stepping back, do you really think of AI systems as stochastic parrots? What does this metaphor buy you? Is it mostly a card you automatically deal out when you pattern match on something? Or does serve as a reusable engine for better understanding the world?
We’ve been down this road; there is already much HN commentary on the SP metaphor. (Not that I recommend HN for this kind of thing. This is where I come to see how a subset of tech people are making sense of it, often imperfectly with correspondingly inappropriate overconfidence.)
TLDR: smart AI folks don’t anchor on the stochastic parrots metaphor. It is a catchy phrase and helped people’s papers get some attention, but it doesn’t mean what a lot of people think it means. Easily misunderstood, it serves as a convenient semantic stop sign so people don’t have to dig in to the more interesting aspects of modern AI systems. For example: (1) transformers build conceptual models of language that transcend any particular language. (2) They also build world models with spatial reasoning. (3) Many models are quite resilient to low quality training data. And more.
To make this very concrete: under the assumption of universal laws of physics, people are just following the laws of physics, and to a first approximation, our brains are just statistical pattern matchers. By this definition, humans would also be "stochastic parrots". I go to all this trouble to show that this metaphor doesn't cut to the heart of the matter. There are clearer questions to ask: they require getting a lot more specific about various forms and applications of intelligent behavior. For example:
- under what circumstances does self play lead to superhuman capability in a particular domain?
- what limits exist (if any) in the self supervised training paradigm used for sequential data? If the transformer trained in this way can write valid programs then it can create almost any Turing machine; limited only by time and space and energy. What more could you want? (Lots, but I’m genuinely curious as to people’s responses after reflecting on these.)
Until the thing can learn on its own and advance its capabilities to the same degree that a junior developer can, it is not intelligent enough to do that work. It doesn't learn our APIs, it doesn't learn our business domain, it doesn't learn from the countless mistakes I correct it on. What we have now is interesting, it is helping sometimes and wasteful others. It is not intelligent.
LLMs ARE stochastic parrots. Throw whatever ChatGPT slop answer you like at me, but facts are facts.
I'm pretty fast coding and know what I'm doing. My ideas are too complex for claude to just crap out. If I'm really tired I'll use claude to write tests. Mostly they aren't really good though.
AI doesn't really help me code vs me doing it myself.
AI is better doing other things...
> AI is better doing other things...
I agree. For me the other things are non-business logic, build details, duplicate/bootstrap code that isn't exciting.
> how do you convert a paper into working code?
this is something i've found LLMs almost useless at. consider https://arxiv.org/abs/2506.11908 --- the paper explains its proposed methodology pretty well, so i figured this would be a good LLM use case. i tried to get a prototype to run with gemini 2.5 pro, but got nowhere even after a couple of hours, so i wrote it by hand; and i write a fair bit of code with LLMs, but it's primarily questions about best practices or simple errors, and i copy/paste from the web interface, which i guess is no longer in vogue. that being said, would cursor excel here at a one-shot (or even a few hours of back-and-forth), elegant prototype?
I have found that whenever it fails for me, it's likely that I was trying to one-shot the solution and I retry by breaking the problem into smaller chunks or doing a planning work with gemini cli first.
smaller chunks works better, but ime, it takes as long as writing it manually that way, unless the chunk is very simple, e.g. essentially api examples. i tend not to use LLMs for planning because thats the most fun part for me :)
For stuff like generating and integrating new modules: the helpfulness of AI varies wildly.
If you’re using nest.js, which is great but also comically bloated with boilerplate, AI is fantastic. When my code is like 1 line of business logic per 6 lines of boilerplate, yes please AI do it all for me.
Projects with less cruft benefit less. I’m working on a form generator mini library, and I struggle to think of any piece I would actually let AI write for me.
Similar situation with tests. If your tests are mostly “mock x y and z, and make sure that this spied function is called with this mocked payload result”, AI is great. It’ll write all that garbage out in no time.
If your tests are doing larger chunks of biz logic like running against a database, or if you’re doing some kinda generative property based testing, LLMs are probably more trouble than they’re worth
> generate new modules/classes in your projects
I type:
class Foo:
or: pub(crate) struct Foo {}
> integrate module A into module B
What do you mean by this? If you just mean moving things around, then code refactoring tools to move functions/classes/modules have existed in IDEs for millennia before LLMs came around.
> get someones github project up and running on your machine
docker
> convert an idea or plan.md or a paper into working code
I sit in front of a keyboard and start typing.
> Fix flakes, fix test<->code discrepancies or increase coverage etc
I sit in front of a keyboard, read, think, and then start typing.
> If you do all this manually, why?
Because I care about the quality of my code. If these activities don't interest you, why are you in this field?
> If these activities don't interest you, why are you in this field?
I am in this field to deliver shareholder value. Writing individual lines of code, unless absolutely required, is below me?
Ah well then, this is the cultural divide that has been forming since long before LLMs happened. Once software engineering became lucrative, people started entering the field not because they're passionate about computers or because they love the logic/problem solving but because it is a high paying, comfortable job.
There was once a time when only passionate people became programmers, before y'all ruined it.
I think you are mis-categorizing me. I have been programming for fun since I was a kid. But that doesn't mean I have to solve the mundane, boring stuff myself when I know I can get someone else or AI to figure those parts out so I can do the fun stuff.
Ah perhaps. Then I think we had different understandings of my "why are you in this field?" question. I would say that my day job is to "deliver shareholder value"[0] but I'd never say that is why I am in this field, and it sounds like it isn't why you're in this field either since I doubt you were thinking about shareholders when you were programming as a kid.
[0] Actually, I'd say it is "to make my immediate manager's job easier", but if you follow that up the org chart eventually it ends up with shareholders and their money.
Every human who defines the purpose of their life's work as "to deliver shareholder value" is a failure of society.
How sad.
To do those things, I do the same thing I've been doing for the thirty years that I've been programming professionally: I spend the (typically modest) time it takes to learn to understand the code that I am integrating into my project well enough to know how to use it, and I use my brain to convert my ideas into code. Sometimes this requires me to learn new things (a new tool, a new library, etc.). There is usually typing involved, and sometimes a whiteboard or notebook.
Usually it's not all that much effort to glance over some other project's documentation to figure out how to integrate it, and as to creating working code from an idea or plan... isn't that a big part of what "programming" is all about? I'm confused by the idea that suddenly we need machines to do that for us: at a practical level, that is literally what we do. And at a conceptual level, the process of trying to reify an idea into an actual working program is usually very valuable for iterating on one's plans, and identifying problems with one's mental model of whatever you're trying to write a program about (c.f. Naur's notions about theory building).
As to why one should do this manually (as opposed to letting the magic surprise box take a stab at it for you), a few answers come to mind:
1. I'm professionally and personally accountable for the code I write and what it does, and so I want to make sure I actually understand what it's doing. I would hate to have to tell a colleague or customer "no, I don't know why it did $HORRIBLE_THING, and it's because I didn't actually write the program that I gave you, the AI did!"
2. At a practical level, #1 means that I need to be able to be confident that I know what's going on in my code and that I can fix it when it breaks. Fiddling with cmakes and npms is part of how I become confident that I understand what I'm building well enough to deal with the inevitable problems that will occur down the road.
3. Along similar lines, I need to be able to say that what I'm producing isn't violating somebody's IP, and to know where everything came from.
4. I'd rather spend my time making things work right the first time, than endlessly mess around trying to find the right incantation to explain to the magic box what I want it to do in sufficient detail. That seems like more work than just writing it myself.
Now, I will certainly agree that there is a role for LLMs in coding: fancier auto-complete and refactoring tools are great, and I have also found Zed's inline LLM assistant mode helpful for very limited things (basically as a souped-up find and replace feature, though I should note that I've also seen it introduce spectacular and complicated-to-fix errors). But those are all about making me more efficient at interacting with code I've already written, not doing the main body of the work for me.
So that's my $0.02!
> I can kick out some money to essentially "subscribe" for maintenance.
People on HN and other geeky forums keep saying this, but the fact of the matter is that you're a minority and not enough people would do it to actually sustain a product/company like Zed.
It's a code editor so I think the geeky forums are relevant here.
Also, this post is higher on HN than the post about raising capital from Sequoia where many of the comments are about how negatively they view the raising of capital from VC.
The fact of the matter is that people want this and the inability of companies to monetize on that desire says nothing about whether the desire is large enough to "actually sustain" a product/company like Zed.
"Happy to see this". The folks over at Zed did all of the hard work of making the thing, try to make some money, and then someone just forks it to get rid of all of the things they need to put in to make it worth their time developing. I understand if you don't want to pay for Zed - but to celebrate someone making it harder for Zed to make money when you weren't paying them to begin with -"Happy to PLAN to pay for Zed"- is beyond.
> I understand if you don't want to pay for Zed
But he does say he does want to pay!
I always have mixed feelings about forks, especially the hard ones. Zed recently rolled out a feature that lets you disable all AI features. I also know telemetry can be opted out of. So I don't see the need for this fork, especially given the list of features stated. Feels like something that could be upstreamed. Hope that happens.
I remember the Redis fork and how it fragmented that ecosystem to a large extent.
I'd see less need for this fork if Zed's creators weren't already doing nefarious things like refusing to allow the Zed account / sign-in features to be disabled.
I don't see a reason to be afraid of "fragmented ecosystems", rather, let's embrace a long tail of tools and the freedom from lock-in and groupthink they bring.
For what they provide, for free, I'd say refusing to disable login is not "nefarious". They need to grow a business here.
They need to make money for their investors. Once you start down the enshittification path, forever will it dominate your destiny.
Well, there are features within Zed that are part of the account / sign-in process, so it might be a bit more effort to just "simply comment out login" for an editor that is as fast and smooth as Zed. I don't care that it's there as long as they don't force it on me, which they don't.
I have this take, too. I tried to show how valuable this is to me via github issue, but the lack of an answer is pretty clearly a "don't care."
I'm one of the people interested in Zed for the editor tech but disheartened with all the AI by default stuff.
Opt-out is not enough, especially in a program where opt-out happens via text-only config files.
I can never know if I've correctly opted out of all the things I don't want.
What interests you about Zed that is not already covered by Sublime?
For me, it's always interesting to try out new editors, and I've been a little frustrated with Sublime lately.
Upsides of Zed (for me, I think):
* Built-in AI vibecodery, which I think is going to be an unavoidable part of the job very soon.
* More IDE features while still being primarily an Editor.
* Extensions in Rust (if I'm gonna suffer, might as well learn some Rust).
* Open source.
Downsides vs Sublime:
* Missing some languages I use.
* Business model, arguably, because $42M in VC "is what it is."
This is why we shouldn't open source things.
All of that hard work, intended to build a business, and nobody is happy.
Now there's a hard fork.
This is shitty.
Even opt-in telemetry makes me feel uncomfortable. I am always aware that the software is capable of reporting the size of my underwear and what I had for breakfast this morning at any moment, held back only by a single checkbox. As for the other features, opt-out stuff just feels like a nuisance, having to say "No, I don't want this" over and over again. In some cases it's a matter of balance, but generally I want to lean towards minimalism.
What makes me uncomfortable is that people with your opinion have to defend their position.
I think your thinking is common sense.
I'm not particularly attached to this position. I just don't believe in a world where interests don't collide, and the person doing more of the work should probably have a bigger say in things. If we built the product, we get to dictate some of these privacy features by default.
But giving users an escape hatch is something that people take for granted. I'd understand all these furor if there was no such thing.
Besides, I reckon Zed took a lot of resources to build and maintain. Help them recoup their investment.
Not to mention Zed is already open source. I guess the best thing Zed can do is make it all opt-in by default, then this fork is rendered useless.
It's nice to have additional assurance that the software won't upload behind your back on first startup. Though I also run opensnitch, belt and suspenders style.
Bit premature to post this, especially without some manifesto explaining the particular reason for this fork. The "no rugpulls" implies something happened with Zed, but you can't really expect every HN reader to be in the loop with the open source controversy of the week.
Contributor Agreements are specifically there for license rug-pulls, so they can change the license in the future as they own all the copyrights. So the fact that they have a CA means they are prepping for a rug-pull and thus this bullet point.
I can’t speak for Zed’s specific case, but several years ago I was part of a project which used a permissive license. I wanted to make it even more permissive, by changing it to one of those essentially-public-domain licenses. The person with the ultimate decision power had no objections and was fine with it, but said we couldn’t do that because we never had Contributor License Agreements. So it cuts both ways.
It's reasonable for a contributor to reject making their code available more permissively
Of course. Just like it is reasonable for them to reject the reverse. It is reasonable for them to reject any change, which is the point.
You seem to be assuming that a more permissive license is good. I don't believe this is true. Linux kernel is a great example of a project where going more permissive would be a terrible idea.
I’m not sure where this belief came from, or why the people who believe it feel so strongly about it, but this is not generally true.
With the exception of GPL derivatives, most popular licenses such as MIT already include provisions allowing you to relicense or create derivative works as desired. So even if you follow the supposed norm that without an explicit license agreement all open source contributions should be understood to be licensed by contributors under the same terms as the license of the project, this would still allow the project owners to “rug pull” (create a fork under another license) using those contributions.
But given that Zed appears to make their source available under the Apache 2.0 license, the GPL exception wouldn’t apply.
Indeed, if you discount all the instances where it is true, it is not true.
From my understanding, Zed is GPL-3.0-or-later. Most projects that involve a CLA and have rugpull potential are licensed as some GPL or AGPLv3, as those are the licenses that protect everyone's rights the strongest, and thanks to the CLA trap, the definition of "everyone" can be limited to just the company who created the project.
https://github.com/zed-industries/zed/blob/main/crates/zed/C...
Good catch on the license in that file. I went by separate documents in the repo that said the source is available “under the licenses documented in the repository”, and took that to mean at-choice use of the license files that were included.
I think the caveat to the claim that CLAs are only useful for rug pulls is still important, but this is a case where it is indeed a relevant thing to consider.
CA means: this is not just a hobby project, it's a business, and we want to retain the power to make business decisions as we see fit.
I don't like the term "rug-pull". It's misleading.
If you have an open source version of Zed today, you can keep it forever, even if future versions switch to closed source or some source-available only model.
If you build a product and a community around a certain set of values, and then you completely swap value systems, it's a rug pull. They build a user base by offering something they don't intend to continue offering. What the fuck else do you want to call it?
CLAs represent an important legal protection, and I would never accept a PR from a stranger, for something being developed in public, without one. They're the simplest way to prove that the contributor consented to licensing the code under the terms of the project license, and a CYA in case the contributed code is e.g. plagiarized from another party.
(I see that I have received two downvotes for this in mere minutes, but no replies. I genuinely don't understand the basis for objecting to what I have to say here, and could not possibly understand it without a counterargument. What I'm saying seems straightforward and obvious to me; I wouldn't say it otherwise.)
I upvoted your comment. I share your view and just wanted to say you're not the only one who thinks this way.
I think the proper way to do this would be a DCO. https://developercertificate.org/
DCOs only document that the contributor has the right to contribute the code, not the license under which they contribute it. CLAs do both.
Are you suggesting the FSF has a copyright assignment for the purposes of “rug pulls”?
Yes.
The FSF requires assignment so they can re-license the code to whatever new license THEY deem best.
Not the contributors.
A CLA should always be a warning.
IANAL but their official reason for the CLA seems pretty reasonable to me: https://www.gnu.org/licenses/why-assign.en.html
tl;dr: If someone violates the GPL, the FSF can't sue them on your behalf unless they are a copyright holder.
(personally I don't release anything under virus licenses like the GPL but I don't think there's a nefarious purpose behind their CLA)
> If someone violates the GPL, the FSF can't sue them on your behalf unless they are a copyright holder.
This seems to be factually untrue; you can assign specific rights under copyright (such as your right to sue and receive compensation for violations by third parties) without assigning the underlying copyright. Transfer of the power to relicense is not necessary for transfer of the power to sue.
It was; some see the GPL2->GPL3 transition as a rug-pull... but it doesn't matter today, as the FSF stopped requiring CAs back in 2021.
That's a harder argument to make given the "or later" clause was the default in the GPLv2, and also optional.
Zed is quite well known to be heavily cloud- and AI-focused, it seems clear that's what's motivating this fork. It's not some new controversy, it's just the clearly signposted direction of the project that many don't like.
Seems like it might be reacting to or fanned to flame by: https://github.com/zed-industries/zed/discussions/36604
That's not a rug pull, that's a few overly sensitive young 'uns complaining
overly sensitive to what?
"You're doing business with someone whose views I dislike" is not harassment, nor do I believe that the person who opened the issue is arguing in good faith. The world is full of people with whom I disagree (often strongly) on matters of core values, and I work with them civilly because that is what a mature person does. Unless the VC firm starts pushing Zed to insert anti-Muslim propaganda into their product, or harassing the community, there is no reasonable grounds to complain about the CoC.
I don't agree that it is immature or overly sensitive. The issue basically says:
> Hey, you look to be doing business with someone who publicly advocates for harming others. Could you explain why and to what extend they are involved?
"doing business with someone whose views I dislike" is slightly downplaying the specific view here.
I think that the formulation you gave is precisely "doing business with someone whose views I dislike". It assumes much that simply should not be assumed, to wit:
* That this man actually advocates for harming others, versus advocating for things that the github contributor considers tantamount to harming others
* That his personal opinions constitute a reason to not do business with a company he is involved with
* That Zed is morally at fault if they do not agree that this man's personal opinions constitute a reason to not do business with said company
I find this kind of guilt by association to be detestable. If Zed wishes to do business with someone whom I personally would not do business with for moral reasons, that does not confer some kind of moral stain on them. Forgiveness is a virtue, not a vice. Not only that, but this github contributor is going for the nuclear option by invoking a public shaming ritual upon Zed. It's extremely toxic behavior, in my opinion.
> The issue basically says:
I don't think any of the evidence shown there demonstrates "advocacy for harming others". The narrative on the surely-unbiased-and-objective "genocide.vc" site used as a source there simply isn't supported by the Twitter screencaps it offers.
This also isn't at all politely asking "Could you explain why and to what extend they are involved?" It is explicitly stating that the evidenced level of involvement (i.e.: being a business partner of a company funding the project) is already (in the OP's opinion) beyond the pale. Furthermore, a rhetorical question is used to imply that this somehow deprives the Code of Conduct of meaning. Which is absurd, because the project Code of Conduct doesn't even apply to Sequoia Capital, never mind to Shaun Maguire.
The issue also cites the New York Times. Here is an archive: https://archive.is/6VoyD You can read the quote for yourself here: https://x.com/shaunmmaguire/status/1941135110922969168 There is no question about the fact that this is racist speech that builds on a racist stereotype. Many of Zed's contributors are no doubt Muslims, whom Shaun Maguire is being racist against here.
Zed’s leadership does have to answer for why they invited people like that to become a part of Zed’s team.
Yet they post this on Github, which apparently isn't a problem for themselves or the code of conduct despite Microsoft having ties with the Israeli military.
Boycotting a text editor because the company that makes it accepted funding from another company that has a partner who holds controversial views on a conflict in Gaza where children are killed is going a bit far I think.
In a perfect world, children don't get killed, but with that many levels of indirection, I don't think there is anything in this world that is not linked to some kind of genocide or other terrible things.
It should be relatively easy to simply not accept money from companies such as these. Accepting this money is a pretty damning moral failure.
I don't have a startup, but not accepting $32M doesn't seem particularly easy to me.
I am sure plenty of people here know these things, this is Y Combinator after all, but to me, the general idea in life is that getting money is hard, and stories that make it look easy are scams or extreme outliers.
We clearly disagree here, but be that as it may, Zed's contributors are obviously outraged at this, and I argue that this outrage is justifiable. The amount of money you accept from reprehensible people is usually pretty strongly correlated with the number of people who'll look down on you for doing so.
> Zed’s contributors are obviously outraged at this
Do you have an example of that? I can't find any contributors that are upset about this aspect of the funding