Zig quits GitHub, says Microsoft's AI obsession has ruined the service
theregister.com | 446 points by Brajeshwar 5 hours ago
The edit history of the announcement is quite a ride:
> [2025-11-27T02:10:07Z] it’s abundantly clear that the talented folks who used to work on the product have moved on to bigger and better things, with the remaining losers eager to inflict some kind of bloated, buggy JavaScript framework on us in the name of progress [1]
> [2025-11-27T14:04:47Z] it’s abundantly clear that the talented folks who used to work on the product have moved on to bigger and better things, with the remaining rookies eager to inflict some kind of bloated, buggy JavaScript framework on us in the name of progress [2]
> [2025-11-28T09:21:12Z] it’s abundantly clear that the engineering excellence that created GitHub’s success is no longer driving it [3]
---
1: https://web.archive.org/web/20251127021007/https://ziglang.o...
2: https://web.archive.org/web/20251127140447/https://ziglang.o...
3: https://web.archive.org/web/20251128092112/https://ziglang.o...
On the previous HN article, I recall many a comment talking about how they should change this, leave the politics/negative juju out because it was a bad look for the Zig community.
It would appear they listened to that feedback, swallowed their ego/pride and did what was best for the Zig community with these edits. I commend them for their actions in doing what's best for the community at the cost of some personal mea culpa edits.
I often find we don't appreciate it enough when people accept their failures and change their minds. For some reason I see the opposite: people respecting those who "stick to their guns" or double down when something is clearly wrong. As you say, the context matters, and these edits seem to be learning from the feedback rather than saving face, since the sentiment stands, just expressed in a less needlessly targeted way.
Never understood that either. If someone was wrong and bad, and now they're trying to do right and good, we need to celebrate that. Not just because that's awesome in itself, but also to give the opportunity and incentives for others in the future to do better.
If everyone is treated as bad regardless of whether they're trying to change, what incentive would they have to change at all? It doesn't make any sense.
It's a thing with (online) culture: no matter what you do, you're going to ruffle some feathers.
If no one hates what you are doing, chances are you're not really doing anything.
They should know that crap software is rarely as intentional as they make it out to be in the initial version of the text. What you get is what people are able to build in the environment they are in (that matters too): capability and environment.
to quote something I said a day ago about AI spotting in the posts of other people:
https://news.ycombinator.com/item?id=46114083
"I think that writing style is more LinkedIn than LLM, the style of people who might get slapped down if they wrote something individual.
Much of the world has agreed to sound like machines."
This has always been the case in the "corporate/professional" world imo.
It's just much easier now for "laypeople" to also adjust their style to this. My prediction is that people will quickly get tired of it (as evidenced by your comment).
At least he edited it to something more palatable. I vastly prefer someone who can admit to making a mistake and amending what they said to someone who doubles down. The latter attitude has become far too normalised in the last few years.
Is political correctness necessary to have a thriving community / open source project?
Linux seems to be doing fine.
I wouldn't personally care either way but it is non-obvious to me that the first version would actually hurt the community.
>Is political correctness necessary to have a thriving community / open source project?
Not at all, but this reads like childishness rather than political correctness.
How you treat others says everything about you and nothing about the other person.
In this case, the unnecessary insults detract from the otherwise important message, and reflect poorly on Zig. They were right to edit it.
Linus famously was quite strict and cursed quite a bit when somebody pissed him off with stupidity.
Even Linus doesn’t act that way anymore. Here’s him a few years ago:
> This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for.
> Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry. The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely.
> I am going to take time off and get some assistance on how to understand people's emotions and respond appropriately.
He took time off and he’s better now. What you call “political correctness” is what I and others call “basic professionalism”. It took Linus 25 years to understand that. I can only hope that the people who hero worshipped him and adopted a similar attitude can also mature.
Not calling other software engineers 'losers' is not about political correctness. They're "losers" because they take their product on a path you don't like? Come on. Linus can be emotional in his posts because Linux is his "child".
That attitude has approached, and continues to approach, an entire bloodless coup of the largest economy on the planet.
The normalization, in fact, has been quite successful. All of Silicon Valley has tacitly approved of it.
You act like people aren't being rewarded for this type of behavior.
Reads like an official White House statement[0].
[0] https://www.whitehouse.gov/articles/2025/03/yes-biden-spent-...
Also
> More importantly, Actions is created by monkeys ...
vs
> Most importantly, Actions has inexcusable bugs ...
I commend the author for correcting their mistakes. However, IMHO, an acknowledgement instead of just a silent edit would have been better.
Anyway, each to their own, and I'm happy for the Zig community.
IMHO, the main advantage of GitHub is that it is an ecosystem. It's a well-thought-out Swiss Army knife: a pioneering (but no longer new) PR system, convenient issues, and a mature CI system with many developed actions and free runners. On top of that, code navigation works right in the web browser. You write code, and almost everything works effortlessly. Having a sponsorship system is also great: you don't have to search for external donation platforms and post weird links in your profile/repository.
All in one, and that's why developers like it so much. The obsession with AI makes me nervous, but for me, the average developer, the advantages still outweigh it. For now.
I don't agree with this at all. I think the reason GitHub is so prominent is the social network aspect it has built around Git, which created strong network effects that most developers are unwilling to part with. Maintainers don't want to lose their stars, and users don't want to lose the collective "audit" by the GitHub users.
Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality, and like it or not are now part of modern software engineering. Developers are more likely to use a repo that has more stars than its alternatives.
I know that the code should speak for itself and that one should audit their dependencies rather than depend on GitHub stars, but in practice this is not what happens: we rely on the community.
I would say your comment complements mine, and I think so too. This is another reason for the popularity of GitHub.
For me, though, it does not negate the convenient things that I originally wrote about.
GitHub became successful long before those 'social media features' were added, simply because it provided free hosting for open source projects (and free hosting services were still a rare thing back in the noughties).
The previously popular free code host was SourceForge, which eventually entered what's now called its "enshittification" phase. GitHub was simply in the right place at the right time to replace SourceForge, and the rest is history.
You don't need to develop on Github to get this, just mirror your repo.
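In practice that's one extra remote plus a push from the primary repo; a minimal sketch (remote name and URL are illustrative):

    # add the GitHub mirror as a second remote
    git remote add github-mirror https://github.com/example/project.git
    # push all branches and tags to the mirror
    git push github-mirror --all && git push github-mirror --tags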
that's not enough, i still have to engage with contributors on github. on issues and pull requests at a minimum.
> Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality, and like it or not are now part of modern software engineering.
I hate that this is perceived as generally true. Stars can be farmed and gamed, and the value of a star does not decay over time. Issues can be automatically closed, or answered with a non-response and closed. Follower count is a networking/platform thing (flag your significance by following people with significant follower numbers).
> Developers are more likely to use a repo that has more stars than its alternatives.
If anything, star numbers reflect first mover advantage rather than code quality. People choosing which one of a number of competing packages to use in their product should consider a lot more than just the star number. Sadly, time pressures on decision makers (and their assumptions) means that detailed consideration rarely happens and star count remains the major factor in choosing whether to include a repo in a project.
> Maintainers don't want to loose their stars
??? Seriously?
> All these things are powerful indicators of quality
Not in my experience....
Why are you so surprised?
People don't just share their stargazer plots "for fun", but because it has meaning for them.
> Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality
Hahahahahahahahahahahaha...
For what it's worth, I think star-based programming is far better than letting an AI generate your code. It is community curation, in a way: less centralized than an AI, with more eyes to catch bugs and run it in production.
> a pioneering (but no longer new) PR system
having used gerrit 10 years ago there's nothing about github's PRs that I like more, today.
> code navigation simply in a web browser
this is nice indeed, true.
> You write code, and almost everything works effortlessly.
if only. GHA are a hot mess because somehow we've landed in a local minimum of pretend-YAML-but-actually-shell-js-jinja-python and they have a smaller or bigger outage every other week, for years now.
> why developers like it so much
most everything else is much worse in at least one area and the most important thing it's what everyone uses. no one got fired for using github.
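As an illustration of that local minimum (a hypothetical step, not from any real workflow): Actions' `${{ }}` template expressions, YAML, and shell all interleave in a single `run` block:

    steps:
      - run: |
          # ${{ }} is expanded by Actions before the shell ever runs
          if [ "${{ github.event_name }}" = "push" ]; then
            echo "building ${GITHUB_REF##*/}"
          fi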
The main thing I like about Github's PRs is that it's a system I'm already familiar with and have a login/account for. It's tedious going to contribute to a project to find I have to sign up for and learn another system.
I used Gerrit years ago, so I wasn't totally unfamiliar, but it was still awkward to use when Go was using it for PRs. Notably, that project ended up giving up on it because of the friction for users - and they were probably one of the most likely cases to stick to their guns and use something unusual.
> Notably [go] ended up giving up on [gerrit]
That's not accurate. They more or less only use Gerrit still. They started accepting Github PRs, but not really, see https://go.dev/doc/contribute#sending_a_change_github
> You will need a Gerrit account to respond to your reviewers, including to mark feedback as 'Done' if implemented as suggested
The comments are still gerrit, you really shouldn't use Github.
The Go reviewers are also more likely than usual to assume you're incompetent if your PR comes from Github, and the review will accordingly be slower and more likely to be rejected, and none of the go core contributors use the weird github PR flow.
> The Go reviewers are also more likely than usual to assume you're incompetent if your PR comes from Github
I've always done it that way, and never got that feeling.
there's certainly a higher rejection rate for github PRs
That seems unsurprising given that it’s the easiest way for most people to do it. Almost any kind of obstacle will filter out the bottom X% of low effort sludge.
Many people confuse competence and dedication.
A competent developer would be more likely to send a PR using the tool with zero friction than to dedicate a few additional hours of their life to creating an account and figuring out how to use some obscure tool.
You are making the same mistake of conflating competence and (lack of) dedication.
Most likely, dedication says little about competence, and vice versa. If you would rather not do the task at all than use the tools available to get it done, what does that say about your competence?
I'm not in a position to know or judge this, but I could see how dedication could be a useful proxy for the expected quality of a PR and the interaction that will go with it, which could be useful for popular open source projects. Not saying that's necessarily true, just that it's worth considering that some maintainers might have anecdotal experiences along that line.
A competent developer wouldn't call gerrit an obscure tool.
This attitude sucks and is pretty close to just being flame bait. There are all kinds of developer who would have no reason to ever have come across it.
A competent developer should be aware of the tools of the trade.
I'm not saying a competent developer should be proficient in using Gerrit, but they should know that it isn't an obscure tool: it's a Google-sponsored project handling millions of lines of code, internally at Google and externally. It's like calling Golang an obscure language when all you ever did is Java or TypeScript.
It’s silly to assume that someone isn’t competent just because you know about a tool that they don’t know about. The inverse is almost certainly also true.
Is there some kind of Google-centrism at work here? Most devs don’t work at Google or contribute to Google projects, so there is no reason for them to know anything about Gerrit.
> The main thing I like about Github's PRs is that it's a system I'm already familiar with and have a login/account for. It's tedious going to contribute to a project to find I have to sign up for and learn another system.
codeberg supports logging in with GitHub accounts, and the PR interface is exactly the same
you have nothing new to learn!
> having used gerrit 10 years ago there's nothing about github's PRs that I like more, today.
I love patch-stack review systems. I understand why they're not more popular: they can be a bit harder to understand and more work to craft, but it's just a wonderful experience once you get them. Making my reviews work in Phabricator made my patchsets in general so much better, and making my patchsets better has improved my communication skills.
>a well-formed CI system with many developed actions and free runners.
It feels to me like people have become way too reliant on this (in particular, forcing things into CI that could easily be done locally) and too trusting of those runners (ISTR some reports of malware).
>In addition, it is best to use code navigation simply in a web browser.
I've always found their navigation quite clunky and glitchy.
> a well-formed CI system
Man :| no. I genuinely understand the convenience of using Actions, but it's a horrible product.
Maybe I have low standards, given I've never touched what GitLab or CircleCI have to offer, but compared to my past experiences with Buildbot, Jenkins, and Travis, it's miles ahead of those in my opinion.
Am I missing a truly better alternative, or are CI systems simply all kind of a PITA?
My issue with GitHub CI is that it doesn't run your code in a container. You just have a github-runner-1 user, and you need to manually check out the repository, do your build, and clean up after you're done with it. Very dirty and unpredictable. That's for a self-hosted runner.
> My issue with Github CI is that it doesn't run your code in a container.
Is this not what you want?
https://docs.github.com/en/actions/how-tos/write-workflows/c...
> You just have github-runner-1 user and you need to manually check out repository, do your build and clean up after you're done with it. Very dirty and unpredictable. That's for self-hosted runner.
Yeah, checking out every time is a slight papercut I guess, but it gives you control, as sometimes you don't need to check out anything or want a shallow/full clone. I guess if it checked out for you then there would be other papercuts.
I use their runners, so I never need to do any cleanup and get a fresh slate every time.
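For reference, a minimal sketch of a job running inside a container via that documented `container:` key (image and script names are illustrative):

    jobs:
      build:
        runs-on: ubuntu-latest
        container:
          image: debian:bookworm
        steps:
          - uses: actions/checkout@v4
          # everything below runs inside the container, not on the host VM
          - run: ./build.sh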
Curious what some better options are. I feel it's competing with Jenkins and CircleCI, and it's not that bad.
I'd rather solve advent of code in brainfuck than have to debug their CI workflows ever again.
Surely you just need the workflow to not have embedded logic but call out to a task manager so you can do the same locally?
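A minimal sketch of that shape, where the workflow is only glue and the logic lives in a script you can run locally (file names are illustrative):

    name: CI
    on: [push, pull_request]
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # all real logic lives in the repo and runs the same on a laptop
          - run: ./ci/run-tests.sh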
The big issue with GitHub is that they have never denied feeding AI with private repositories (GitLab, for example, denied it when asked). This fact alone makes many users bitter, even in organizations not using private repos per se.
> Having a sponsorship system is also great
They have zero fees for individuals too which is amazing. Thanks to it I gained my first sponsor when one of my projects was posted here. Made me wish sponsorships could pay the bills.
> In addition, it is best to use code navigation simply in a web browser
How do you define "code navigation"? It might've gotten a bit easier with automatic highlighting of selected symbols, but in return the source code viewer got way too laggy and, for a couple of years now, it has had this weird bug with misplaced cursors when code is scrolled horizontally. I actually find myself using the "raw" button more and more often, or cloning the repo even for some quick ad-hoc lookups.
Edit: not to mention the blame view that actively fights with browser's built in search functionality.
> In addition, it is best to use code navigation simply in a web browser.
IMHO the vanilla Github UI sucks for code browsing since it's incredibly slow, and the search is also useless (the integrated web-vscode works much better - e.g. press '.' inside a Github project).
> as well as a well-formed CI system with many developed actions and free runners
The only good thing about the Github CI system are the free runners (including free Mac runners), for everything else it's objectively worse than the alternatives (like Gitlab CI).
Would you say Github has any significant advantages over Gitlab in this regard? I always found them to be on par, with incremental advantages on either side.
One of my favourite GitHub features is the ability to do a code search over the whole of GitHub; I'm not sure GitLab had the same back when I used to use it.
> a pioneering (but no longer new) PR system
Having used Forgejo with AGit now, IMO the PR experience on GitHub is not great when trying to contribute to a new project. It's just unnecessarily convoluted.
Underrated feature is the code search. Everyone starts out thinking they’ll just slap elastic search or similar in front of the code but it’s more nuanced than that. GitHub built a bespoke code search engine and published a detailed blog post about it afterwards.
Embrace, extend, extinguish.
That's not a Victorinox you're looking at, it's a cheap poorly made enshittified clone using a decades old playbook (e-e-e).
The focus on "Sponsorship buttons" and features instead of fixes is just a waste of my time.
I have sympathy for some of the GitHub complaints. otoh just went to try to signup for Codeberg and it's down ... 95% uptime over the last 2 weeks?
One can always host Forgejo themselves if the service level has to be kept under control. With GitHub that's not even an option.
I would even argue that moving everything from one single point of failure to another is not the brightest move.
> With Github that’s not even an option.
Github does offer a self hosted product: GitHub Enterprise Server
There have been complaints about it on Reddit as well. I registered an account recently and to me the annoying thing is the constant "making sure you are not a bot" check. For now I see no reason to migrate, but I do admit Forgejo looks very interesting to self-host.
https://tangled.org/ is building on ATProto
1. use git or jj
2. pull-request like data lives on the network
3. They have a UI, but anyone can also build one and the ecosystem is shared
I've been considering Gerrit for git-codereview, and tangled will be interesting when private data / repos are a thing. Not trying to have multiple git hosts while I wait
I, too, am extremely interested in development on Tangled, but I miss two features from GitHub - universal search and Releases. the web frontend of Tangled is so fast that I am still getting used to the speed, and jj-first features like stacked PRs are just awesome. kinda reminds me of how Linux patch submitting works.
I moved (from self-hosted GitLab) to Forgejo recently, and for my needs it's a lot better, with a lot less hassle. It also seems a lot more performant (again, probably because I don't need a lot of the advanced features of GitLab).
> but I do admit Forgejo looks very interesting to self-host.
I've been self-hosting it for a few years now and can definitely recommend. It has been very reliable. I even have a runner running. Full tutorial at https://huijzer.xyz/posts/55/installing-forgejo-with-a-separ....
I mean, they're battling with DDoS all the time. I follow their account on Mastodon, and they're pretty open about it.
I believe the correct question is "Why are they getting DDoSed this much if they are not something important?"
For anyone who wants to follow: https://social.anoxinon.de/@Codeberg
Even their status page is under attack. Sorry for my French, but WTF?
Crazy. Who would have an incentive to spend resources on DDoS'ing Codeberg? The only party I can think of would be Github. I know that the normalization of ruthlessness and winner-takes-all mentality made crime mandatory for large parts of the economy, but still cannot wrap my mind around it.
Not just them. For example, Qt's self-hosted cgit got DDoSed just two weeks ago. No idea why random open source projects are getting attacked.
> in the past 48 hours, code.qt.io has been under a persistent DDoS attack. The attackers utilize a highly distributed network of IP addresses, attempting to obstruct services and network bandwidth.
https://lists.qt-project.org/pipermail/development/2025-Nove...
Probably some little script kiddie fucks who think they are elite mega haxors and use their mommie's credit card to pay one of the ddos services readily accessible.
Big tech would be far more interested in slurping data than DDoS'ing them.
An issue with comments, linked to a PR with review comments, the commit stack implementing the feature, and further commits addressing comments is probably valuable data to train a coding agent.
Serving all that data is not just a matter of cloning the repo. It means hitting their (public, documented) API endpoints, which are likely more costly to run.
And if they rate limit the scrapers, the unscrupulous bunch will start spreading requests across the whole internet.
DDoS attacks are crazy cheap now; it could be a random person doing it for the lulz, or just as a test or demo (though I suspect Codeberg isn't a big enough target to be impressive there).
Is it because the S in IoT stands for security? I'm asking genuinely. Where are these requests coming from?
>The only party I can think of would be Github.
I think it's not malice, but stupidity. IoT made even a script kiddie capable of running a huge botnet capable of DDoSing anything but CloudFlare.
> Who would have an incentive to spend resources
That's not how threat analysis works. That's a conspiracy theory. You need to consider the difficulty of achieving it.
Otherwise I could start speculating which large NAS provider is trying to DDoS me, when in fact it's a script kiddie.
As for who would have the most incentives? Unscrupulous AI scrapers. Every unprotected site experiences a flood of AI scrapers/bots.
I think the goal is unclear, but the effect will be that Codeberg is perceived as less of a real, stable alternative. Breaking in hadn't crossed my mind, but that would have the same effect, maybe even more damaging. Now, if that was the intended effect, I hope I won't have to believe it.
Story time:
I remember that back in the day I had a domain name for a pretty hot keyword with a great, organic position in Google rankings. Then one day it suddenly got a serious boost of black-hat SEO, with a bazillion links from all kinds of unrelated websites. My domain got penalized and dropped off the front page.
Actually I think that's roughly how threat analysis works though.
For threat analysis, you need to know how hard you are to break in, what the incentives are, and who your potential adversaries are.
For each potential adversary, you list the risk strategy; that's threat analysis 101.
E.g. you have a locked door, some valuables, and your opponent is a state-level actor. Risk strategy: ignore; no door you can afford will be able to stop a state-level actor.
I concur the question, "Who would have an incentive to spend resources on DDoS'ing Codeberg?" is a bit convoluted in mixing incentive and resources. But it's still, exactly, threat analysis, just not very useful threat analysis.
It's easier for MS to buy Codeberg and shut it down than to spend time and money DDoSing things.
How do you buy an e.V.?
You goes to BYD dealership???
I said e.V., not EV. Codeberg is an e.V., i.e. a "registered association" in Germany. I am not actually sure if you could technically buy an e.V., but I am 100% certain that all of the Codeberg e.V. members would not take kindly to an attempt at a hostile takeover from Microsoft. So no, buying Codeberg is not easier than DDoSing them.
They can't buy the org, but they can buy its members,
which is basically the same thing.
Part of the problem is that Codeberg/Gitea's API endpoints are well documented, and there are bots that scrape for Gitea instances. It's similar to running SSH on port 22 or hosting popular PHP forum software: there are always automated attacks by different entities, simply because they recognize the API.
That's rough ... it is a bad, bad world out there.
Try exposing a passwordless SSH server to the outside to see what happens. It'll be probed immediately, non-stop.
These days, none of the servers I run have public SSH ports anymore. This is also why I don't expose home servers to the internet. I don't want that chaos at my doorstep.
Yeah, no need for public SSH. Or, if you do, pick a random port and use fail2ban, or better, just whitelist the one IP you are using for the duration of that session.
To avoid needing SSH at all, ship your logs and metrics out and set up secure autodeploy; then you rarely need to be in. Or use k8s :)
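A sketch of the whitelist approach with ufw (address and port are placeholders):

    # allow SSH only from one known address
    sudo ufw allow from 203.0.113.7 to any port 22 proto tcp
    # drop SSH from everyone else
    sudo ufw deny 22/tcp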
Whitelisting a single IP (preferably a static one) sounds plausible.
Kubernetes for personal infrastructure is akin to getting an aircraft carrier for fishing trips.
For simple systems, snapshots and backups are good enough. If you're managing a thousand-machine fleet, then things are of course different.
I manage both, so I don't yearn to use big-stack software on my small hosts. :D
This is just FUD, there is nothing dangerous in having an SSH server open to the internet that only allows key authentication. Sure, scanners will keep pinging it, but nobody is ever going to burn an ssh 0day on your home server.
> This is just FUD.
No, it's just opsec.
> Sure, scanners will keep pinging it, but nobody is ever going to burn an ssh 0day on your home server.
I wouldn't be so sure about it, considering the things I have seen.
I'd better be safe than sorry. You can expose your SSH if you prefer to do so. Just don't connect your server to my network.
"opsec" includes well defined things like threat modeling, risk factors, and such. "Things I have seen" and vague "better safe than sorry" is not part of that.
There are two golden rules of opsec:
1. Never tell everything you know and have seen.
2.
For what I do, you can refer to my profile.
Yeah, I have been thinking about hosting a small internet-facing service on my home server, but I'm just not willing to take the risk. I'd do it on a separate internet connection, but not on my main one.
You can always use a small Hetzner server (or a free Oracle Cloud one if you are in a pinch) and install Tailscale on all of your servers to create a P2P yet invisible network between your hosts. You do need to protect the internet-facing one properly, and set ACLs at the Tailscale level if you're storing anything personal on that network, though.
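A minimal sketch of that setup, assuming Tailscale's standard install script and CLI:

    # on each host: install the client and join the tailnet
    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up
    # then bind private services to the tailnet interface instead of 0.0.0.0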
I would probably just ssh into the Hetzner box and not connect it to my tailnet.
Would Tailscale or Cloudflare do the trick? Let them handle connections to the server.
This can be fixed by just using a random SSH port.
All my services are always exposed for convenience, but never on a standard port (except HTTP).
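For reference, moving sshd off the default port is a small config change (the port number is arbitrary):

    # /etc/ssh/sshd_config
    Port 39422
    PasswordAuthentication no
    # apply with: sudo systemctl restart sshd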
It reduces the noise, yes, but doesn't stop a determined attacker.
After managing a fleet for a long time, I'd never do that. Tailscale or any other VPN is mandatory for me to be able to access "login" ports.
Just a reminder: Codeberg is for open source projects only, and maybe some dotfiles and such. It's on their front page and in their ToS.
GitHub uptime isn't perfect either. You will notice these outages from time to time if your employer is using it for more than just "store some git repos", e.g. using GHA for builds and deploys, packages etc.
What? It says it's up for 98.56% for the last 2 weeks.
That's probably the average across their services. When Codeberg Translate shines with 99.58%, it's an entry that just pads the aggregate and hides the "92.42% Codeberg.org" reality.
Because they are Codeberg, I'm betting they have a philosophical aversion to using a cloud-based DDoS protection service like Cloudflare. Sadly, the problem is that no one has come up with any other type of solution that actually works.
An additional note on Codeberg, which I think is great as a project: I got curious about what infrastructure they are running on and how reliable it would be for larger corporate repos.
Nov 22, 2025 https://blog.codeberg.org/letter-from-codeberg-onwards-and-u...
Quotes from their website:
Infrastructure status [...] We are running on 3 servers, one Gigabyte and 2 Dell servers (R730 and R740).
Here's their current hardware: https://codeberg.org/Codeberg-Infrastructure/meta/src/branch...
[...] Although aged, the performance (and even energy efficiency) is often not much worse than with new hardware that we could afford. In the interest of saving embodied carbon emissions from hardware manufacturing, we believe that used hardware is the more sustainable path.
[...] We are investigating how broken Apple laptops could be repurposed into CI runners. After all, automated CI usage doesn't depend on the same factors that human beings depend on when using a computer (functioning screen, speakers, keyboard, battery, etc.). If you own a broken M1/M2 device or know someone who does, and believe that it is not worth a conventional repair, we would be happy to receive your hardware donation and give it a try!
[...] While it usually holds up nicely, we see sudden drop in performance every few days. It can usually be "fixed" with a simple restart of Forgejo to clear the backlog of queries.
Gives off both early-Google and hackerspace vibes, which may or may not be a good thing.
https://status.codeberg.eu/status/codeberg
Their reliability is not great unfortunately. Currently their 24h uptime is 89% for the main site. They are partially degraded right now.
The 14 day uptime is 98% but I think that’s actually because some of their auxiliary systems have great uptime, the main site is never that great it seems.
This isn't a great time for them: https://social.anoxinon.de/@Codeberg/115652289949965925
To be fair, Codeberg isn’t for corporate repos, it’s for FLOSS projects. Take a look at their Terms of Use. They don’t aim to be a commercial provider, rather the opposite.
Oh wow, I had a larger cluster than that when I was 20, more than half a decade ago. Considering that the costs appear to be so low, maybe I should also spin up a few free services, since at the moment I pay $600+ in power costs alone for idle hardware on my personal cluster. If anyone has any ideas, feel free to email me at: news.ycombinator.com.reassure132@passmail.net
The main function of GitHub is really just advertising, or at least broadcasting, your work. I would use GitHub issues, stars, etc. as an (imperfect) gauge of the quality of a library. This is not because of GitHub's features, just that it's the biggest and most well known. And yes, I know buying stars is a thing, which is why it's part of the evaluation and not the whole ballgame.
Now that zig is fairly well known and trusted, it makes sense that this is less of a concern for them when migrating away.
They also made the disastrous update to the dashboard feed which made the frontpage pretty much useless.
Their most recent update replaces all this with a list of recently updated PRs and issues. I've been leaning on it heavily since it came out. One of the few recent changes that really feels like a clear improvement.
oh wow. I just had to press the "Try the new experience!" button about ten times for it to finally load the new experience, but I like it
Haven't used the dashboard in years. What's on it now might be more useful. The homepage for me should be set to Notifications.
At any rate, the feed is still available and you can reach it via browser autocomplete. I open GitHub by typing "not" in my URL bar and landing on the notifications page.
Seeing the decline of GitHub in Actions is technically correct, but Actions was always broken. We tried getting self-hosted runners to work super early before there was a proper ephemeral mode (just an officially unsupported race-condition-y --once flag). It sucked. That code can't produce a consistent status code, constantly failed to connect to its scheduler with obscure Azure error codes and had so many races with accepting and timing out jobs. Runners wouldn't get new jobs, jobs would sit there for an hour and then time out, runners would just die and need to be re-provisioned (we used ephemeral VMs in a GCP instance group). This is all because Actions is actually Azure DevOps Pipelines rebranded.
Compared to then this product is downright mature now. And also, there always were people at GitHub who delivered crappy products outside the core that most people working on FOSS got to see. Enterprise Cloud has a ton of sharp edges and things that make you ask "WHY" out loud. Notifications with SAML enabled are broken in countless ways (I have 0 out of 12 notifications right now), newly onboarded users are encouraged to click a "request copilot" button that sends emails to admins you can't disable, policy toggles that used to do one thing get split up and not defaulted properly. The last two in particular are just dark pattern hacks to get people to use Copilot. In an enterprise product.
I haven't used GHES, but I imagine it's worse.
LLMs are useful, but AI is itself a marketing term that has begun to lose its luster. It’s rapidly becoming an annoying or trendy label, not a cutting edge one.
I guarantee that in ~24 months, most AI features will still remain in some form or another on most apps, but the marketing language of AI-first will have evaporated entirely.
> AI is itself a marketing term that has begun to lose its luster. It’s rapidly becoming an annoying or trendy label, not a cutting edge one.
Where have you been the last 15 years? However, I agree with your prediction. Coke making AI advertisements may have had cachet a couple of years ago, but now it would be a doofus move.
Have you watched broadcast TV lately? Every single advert is AI generated. Pay attention and you'll see the telltale signs: stitched-together 3-second clips with continuity problems, every shot drawn from a fixed set of compositions, etc. It's just less noticeable to the average viewer than that Coke ad.
I don’t remember AI being used as a widespread marketing term until 2-3 years ago. Before that it was just more of a vague tech thing you’d sometimes see, but now every single app seems to have reframed their business to be about AI agents.
There have been at least 3 waves of AI before the LLM generation: the 70s, 80s, and late 90s.
https://en.wikipedia.org/wiki/AI_winter
Early 2010s had a lot of neural networks AI stuff going on and it certainly became a minor hype cycle as well though that kind of resulted in the current LLM wave.
There was also a small chatbot bubble around 2014-2016 (Microsoft Tay kinda blew it out of the water, and it never recovered), though companies did seem a bit skittish about using the term 'AI' at that point.
Yes I know that, but those were all largely confined to technology companies and academia. This recent wave seems to affect everything.
To be fair this has more to do with Github Actions than Github, which from the beginning was never really going to rival any professional solution.
The people at Zig should use proper CI tools and not something that a big service provider is offering as an afterthought.
Our CI workflow literally just invokes a plain old shell script (which is runnable outside CI). We really don't need an overcomplicated professional CI/CD solution.
One of the nice things about switching to Forgejo Actions is that the runner is lightweight, fast, and reliable - none of which I can say for the GitHub Actions runner. But even then, it's still more bloated than we'd ideally like; we don't need all the complexity of the YAML workflow syntax and Node.js-based actions. It'd also be cool for the CI system to integrate with https://codeberg.org/mlugg/robust-jobserver which the Zig compiler and build system will soon start speaking.
So if anything, we're likely to just roll our own runner in the future and make it talk to the Forgejo Actions endpoints.
Which professional solution do you prefer?
I'm using Jenkins, which I know is controversial here, but it has been rock solid for me for years.
And there exist a lot of specialized solutions out there, where the business model is purely CI.
What is wrong with GitHub Actions other than the outages? I've never hit an issue myself.
Is anything broken on the pure Git side of Github? From this, it's clear that actions and runners are becoming unusable. But are repositories still safe?
The outages break `git push`. I'm not a fan of the AI adoption within the UI, and the sidebars when browsing code usually get in the way. Using GitHub as a dumb git backend isn't a great option either; look at the Linux kernel's PRs, it's almost all spam. Why on earth can't PRs be disabled?
https://github.com/torvalds/linux/pulls?q=is%3Apr+is%3Aclose...
One thing that's really nice about codeberg is how fast the pages load. Browsing GitHub often feels very sluggish. Obviously there's a difference in scale there, but I hope codeberg can keep being fast.
Indeed. Github is a nightmare when I'm working on an unreliable 4G connection too (e.g. on a train in the UK). Half the page will load.
Night and day compared to something like Linear.
That is surprising. It is the opposite for me.
    $ time curl -L 'https://codeberg.org/'
    real 0m3.063s
    user 0m0.060s
    sys 0m0.044s

    $ time curl -L 'https://github.com/'
    real 0m1.357s
    user 0m0.077s
    sys 0m0.096s

A better benchmark is done through the web browser inspector (network tab or performance tab). In the network tab I got (cache disabled):
GitHub:

    158 requests
    15.56 MB (11.28 MB transferred)
    Finish in 8.39s
    DOM loaded in 2.46s
    Load in 6.95s

Codeberg:

    9 requests
    1.94 MB (533.85 KB transferred)
    Finish in 3.58s
    DOM loaded in 3.21s
    Load in 3.31s

That depends on location, and GitHub pages generally take a while to execute all the JavaScript needed for a usable page even after the HTML is fetched, while pages on Codeberg require much less JavaScript to be usable and are quite usable even without JavaScript.
Here are my results, for what it's worth:

    $ time curl -o /dev/null -s -L 'https://codeberg.org'
    real 0m0.907s
    user 0m0.027s
    sys 0m0.009s

    $ time curl -o /dev/null -s -L 'https://github.com/'
    real 0m0.514s
    user 0m0.028s
    sys 0m0.016s

Sure, it depends on your internet connection. But for Codeberg I see a blank page for 3-4 seconds until it shows something. On a big repo like Zig the delay is even worse.
On Github any page loads gradually and you don't see a blank page even initially.
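For anyone repeating the comparison, curl's built-in timing variables give a cleaner shell-side number than `time` (which also counts process startup); a sketch:

    curl -o /dev/null -s -w 'dns=%{time_namelookup} ttfb=%{time_starttransfer} total=%{time_total}\n' -L 'https://codeberg.org/'
    curl -o /dev/null -s -w 'dns=%{time_namelookup} ttfb=%{time_starttransfer} total=%{time_total}\n' -L 'https://github.com/'

It still measures only the HTML fetch, not the JavaScript execution discussed above.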
GitHub frontpage is very quick indeed, but browsing repos can sometimes have load times over a full second for me. Especially when it's less popular repos less likely to be in a cache.
Google workspace will have me do the same. No, I don't want to 'generate an image' I just want to use my own, thank you. They give their AI prime billing everywhere to the detriment of the products and the users.
previously discussed here: https://news.ycombinator.com/item?id=46064571
Migrating the main Zig repository from GitHub to Codeberg - 883 comments
Didn't know about codeberg and can't even access it... Is it https://codeberg.org/ ??
That is correct. It is down quite a bit. https://status.codeberg.org/status/codeberg
92% uptime? What do you do the other 8% of the time? Do you just invoke git push in a loop and leave your computer on?
You keep working since Git is decentralized.
You can also run a Forgejo instance (the software that powers Codeberg) locally - it's just a single binary that takes a minute to set up - and set up a local mirror of your Codeberg repo with code, issues, etc., so you have access to your issues, wiki, etc. until Codeberg is back up (though you'll have to sync them manually later).
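The code half of that mirror needs nothing beyond plain git; a minimal sketch (repo URL is illustrative):

    # one-time: create a bare mirror of the Codeberg repo
    git clone --mirror https://codeberg.org/example/project.git
    # refresh all refs whenever the remote is reachable
    cd project.git && git remote update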
I hope Codeberg is able to scale up to this surge in interest, but
> it is just a single binary that takes a minute to setup - and setup a local mirror of your Codeberg repo with code, issues, etc so you have access to your issues, wiki, etc
is really cool! Having a local mirror also presumably gives you the means to build tools on top, to group and navigate and view them as best works for you, which could make that side of the process so much easier.
> you'll have to update them manually later
What does the manually part mean here? Just that you'll have to remember to do a `forgejo fetch` (or whatever equivalent) to sync it up?
As discussed elsewhere in this thread: They're under DDoS, and have been very public about this fact.
I totally agree, Microsoft is ruining everything with AI; all Microsoft products had been in decline for years even before the LLM era, and now they are on an even steeper decline.
It makes me sad to see that GitHub is now going through the same shit, and people are using other random half-assed alternatives; it's not easy to keep track of your favourite open-source projects across many source forges. We need someone to buy GitHub from Microsoft and remove all the crap they have added to it.
Or create an overview that keeps track of projects across multiple source control providers, using a consistent interface.
> it's not easy to keep track of your favourite open-source projects across many source forges.
Most public forge instances and web presences for open source projects have RSS feeds.
niche language does something, cool story
Yep, all my new stuff is on Gitlab.
maybe I’m out of the loop, but what is the “obsession” with AI that’s ruining it? GitHub still works for me like it always has. How are other people using GitHub?
Maintain on codeberg, mirror to GH. Tell everyone to contribute on CB
done.
One problem is that GH gives you no way to disable PRs. And even if you write in BIG BOLD LETTERS that PRs should go to Codeberg, not GH, people get upset and make a fuss over their "ignored" PRs, and it ends up creating unnecessary headaches for you over and over.
You can do it the @torvalds/linux way and have a bot auto-answer/close PRs. And, to be honest, it's probably better to ignore people making a fuss over PRs.
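For reference, a hypothetical sketch of such an auto-close bot as an Actions workflow in the mirror repo; it relies on the `gh` CLI preinstalled on GitHub-hosted runners, and the message text is illustrative:

    name: close-mirror-prs
    on:
      pull_request_target:
        types: [opened]
    permissions:
      pull-requests: write
    jobs:
      close:
        runs-on: ubuntu-latest
        steps:
          # close the incoming PR and point the author at the real forge
          - run: gh pr close "$PR" --comment "This repository is a read-only mirror; please submit changes on Codeberg."
            env:
              GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
              GH_REPO: ${{ github.repository }}
              PR: ${{ github.event.pull_request.number }}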
This seems workable to me. Github rose to prominence on the back of oss. What oss giveth oss can take away.
Maybe this is a nice chance to ask: would you move from GitLab to GitHub? I would say no, but some people in my org are proposing it, seemingly just because of the integration it has with AI tools; my experience has been worse on GitHub than with GitLab.
For me they're both about equally shitty, but with Github you get a nice commit calendar to show off to recruiters - so Github wins IMO.
A lot of these forced "AI" integrations are essentially Clippy on steroids. A more careful approach focusing on use cases the technology can really support would be much preferred.
I think once Codeberg becomes federated, it will likely attract a lot of people.
Right now github is great for discovering and tracking projects, reflecting growth via the star and fork system (although a bit broken in the last few years).
If a federated layer is applied to these github alternatives, you could have an account in Codeberg, and be able to track lots of projects wherever people want to host them. Right now, I see a lot of forgejo servers, but I don't want to register in all of them.
+1 - I also see a huge opportunity for forgejo to become a new stackoverflow if they add federation
The primary issue with SO was that it was disconnected from the actual communities maintaining the project. A federated solution would be able to have the same network effects while handing ownership to the original community (rather than a separate SO branch of the community)
This resonates with me. Last week I got stuck on a bug where GitHub Actions was pulling ARMv7 Docker images when I specifically requested ARMv8. Absolutely impossible to reproduce locally, either.
Am I in the minority when I actually like those AI features on GitHub? The ability to interrogate any open source codebase is __amazing__, this feature alone has saved me days of work/research. The AI code reviews are nothing to write home about, but occasionally catch stuff that I would've missed, a net benefit for me. I don't really get all the outrage... Sure, having an "Ask AI" Clippy-like thing in your face everywhere gets old quick, but at least on GitHub I find it non-obtrusive and often actually useful.
...you can just clone the repository and do that interrogation locally with the AI tool of your choice.
Every single application or webpage having its own AI integration is seriously one of the dumbest ideas in computing history (instead of separating applications and AI tools and let them work together via standardized interfaces).
I don't get it: why did they allow the GitHub bot to modify and merge pull requests automatically? Yeah, I agree that MS is ruining everything with AI, but this problem is avoidable if they turn off the bot's auto-merge feature, or turn the bot off completely. The reason for moving to a lesser-known Git provider sounds more like a marketing stunt.
> I don't get it, why did they allow GitHub bot to modify and merge pull request automatically
They didn't; that's poor wording on The Register's part. The pull request was closed for inactivity by the bot.
What are you referring to? I may be missing a line from the article but it seems mostly focused around a lingering GitHub Actions bug and the direction of GitHub.
> The reason they move to a lesser known Git provider sounds more like a marketing stunt.
We had technical problems that GitHub had no interest in solving, and lots of small frustrations with the platform built up over years.
Jumping from one enshittified profit-driven platform to another profit-driven platform would just mean we'd set ourselves up for another enshittification -> migration cycle later down the line.
No stunt here.
Why does every HN thread read like a churlish blogger review of the latest installment of <popular-scifi-franchise>?
Github is great. It barely changes at all and yet it's still too much for this originalist crowd.
I hate these constant drama posts, but I am all for seeing competition. I think it's good to have a couple of top-tier companies offering the same service, and especially with git, it's been... lacklustre outside of Github, I'd say. Bitbucket was totally nice, but Atlassian and Jira and meh... Github has (mostly) steered clear of cross-product promotions until the CoPilot era washed all over us, and I wonder for how long they can continue to thrive off the power of brand-awareness.
Same effect at play watching all the top-tier AI corps under heavy competitive fire still, trying hard to keep the audience attached while battling to stay on top of (or keep up with) the competition. This mainly (for now) benefits the user. If OpenAI were to trailblaze on their own, we'd all be paying through the roof for even the most basic GPT by now.
> I think it's good to have a couple of top-tier companies offering the same service
"top-tier" is not a term I would use to describe Microsoft
I like the AI changes. I can change files from the UI, and it will fill in the commit message for me. That's awesome.
Last week the reason for the move was MS tools being used by the baddies. Today AI is the baddie du jour. To use a great quote "either do or don't, but I got places to be".
The original post was specifically about technical grievances, “MS tools being used by the baddies” was mentioned only in passing.
https://ziglang.org/news/migrating-from-github-to-codeberg/
> Putting aside GitHub’s relationship with ICE
That was the extent of it. Six words.
Furthermore, this submission is an independent post, not from Zig, reporting on the original and adding more context.
> To use a great quote "either do or don't, but I got places to be".
What exactly is your complaint? The move had already been completed at the time of the original Zig post. They did do it.
There’s no incongruence between posts. The nature of your discontent or how it could possibly affect you isn’t clear in the slightest.
> I got places to be
Like, reading and posting on Hacker News?
Haven't noticed any AI problems or annoyances on GH.
More and more projects are moving to Codeberg, and I'm wondering; at what point will a critical mass be reached? Or will we end up with a fragmented ecosystem?
Oh no, our decentralized VCS will be… decentralized!
Seriously though the big problem to solve will be squatters, when there are three logical places for a module to be hosted. That could create issues if you want to migrate.
I would rather have this happen after a contender to git has surfaced - something, for instance, with more project tracking built in, so migrations were simpler.
> Seriously though the big problem to solve will be squatters, when there are three logical places for a module to be hosted
I suspect Codeberg, which is focused on free software, will frown on them. They already disallow mirroring.
> They already disallow mirroring.
In which direction? (I'd check myself but they're down...). That doesn't sound very open to me.
I was slightly wrong. You can manually mirror things, but they have removed a feature that allowed one to automatically mirror repositories hosted elsewhere. It was originally intended as an ease of migration tool, but ended up consuming too many resources.
From their FAQ:
> Why can't I mirror repositories from other code-hosting websites?
> Mirrors that pull content from other code hosting services were problematic for Codeberg. They ended up consuming a vast amount of resources (traffic, disk space) over time, as users that were experimenting with Codeberg would not delete those mirrors when leaving.
> A detailed explanation can be found in this blog post.[1]
[1]: https://blog.codeberg.org/mirror-repos-easily-created-consum...
> fragmented ecosystem
This sounds a bit like an oxymoron. More diversity will only help the ecosystem IMHO.
You say fragmented I say decentralized.
I say "I'm not making yet another account to report this bug". Tangled is trying to solve that problem but we'll see.
That's the beauty of email-based approaches. You can just clone, do your changes and `git send-email`. Done.
I think it would've been far easier to build a decent GUI around that flow, with some email integration + a patch preview tool, rather than adding activitypub, but oh well.
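The flow in question, as a minimal sketch (list address is hypothetical; SMTP configuration omitted):

    # mail the last commit as a patch to the project's list
    git send-email -1 --to='~maintainer/project-devel@lists.example.org'
    # equivalently: git format-patch -1, then send the resulting file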
> I think it would've been far easier to build a decent GUI around that flow, with some email integration + a patch preview tool, rather than adding activitypub, but oh well.
Check out Sourcehut (https://sourcehut.org/). It uses a mailing list-based workflow so contributing code or bug reports is relatively effortless and doesn't require a Sourcehut account.
I literally logged into codeberg using my GitHub account. It's two clicks of the mouse to do this.
[flagged]
Wait, how is Tangled VC controlled? As far as I Know, it's actually decentralised properly on atproto, with barely any bluesky dependencies?
Is it not backed by a limited liability company registered in Finland? Haven't they acquired pre-seed funding from Antler, a VC company?
So how many bugs did you file on sourceforge when GitHub hadn’t quite killed it off?
I used to submit quite a few back in the day. How many projects are still actively maintained on Sourceforge? The last time I needed to go there was to get the GPC (General Polygon Clipper) library with the last modification in 2014.
Maybe I wasn't quite clear. As an open-source author, bug reports are what makes open-source feel like a job. This is because Github has created a sense of entitlement that an open-source project is supposed to take bug reports. That its authors are its 'maintainers' and are expected to fix them.
No. You are the person with an issue. You have all the means to fix the issue -- the source code has been shared with you. Now go ahead and fix your bug yourself. Then share the source code with your users as per its license.
Notice how I don't even care much for 'pull requests'. Another detrimental notion started with Github -- that the authors of an open-source project are expected to review change requests and merge them.
Guy, open-source licenses do not require you to share the derived code with upstream. They require you to share it with your users. I, as the original author, mostly don't care as the original code I wrote works for me.
Yes, sending fixes back upstream is a courtesy and a way to thank the original authors. However it is neither required, nor one must expect that the fixes will be accepted or even looked at at all.
All those different 'git forges' use git as version control system and the same issue and PR workflows. There is no fragmentation, unless you consider one git url being different from another git url 'fragmentation' ;)
Hopefully one of the efforts to build distributed pull requests will take off, so that all the forges other than github can band together and interoperate.
That would be the single best thing that they could do, it would make moving off of github a gain in capabilities.
I prefer a plethora of code hosting sites to one massive hub controlled by a single company. We can see how bad it is when there is a monopoly or quasi-monopoly.
Lack of investment, more like. There are a ton of simple and obvious bugs that have persisted since well before the AI craze, e.g. this annoying bug from 2021: https://github.com/orgs/community/discussions/6874
This one is almost a one-line change (technically they need an extra flag in the YAML but that's hardly difficult): https://github.com/orgs/community/discussions/12882#discussi...
That said, I still think Github is fine, and you can't argue with free CI - especially on Windows/Mac. If they ever stop that I'll definitely consider Codeberg. Or if Codeberg gets support for stacked PRs (i.e. dependencies between PRs), then I'm there! So frustrating that Github doesn't support such an obvious workflow.
Not spending on maintenance is bad.
Not spending on maintenance and spending gobs on something many people don’t want is far worse. It says we have the money, we just don’t give a fuck.
The evidence of AI failure is all these low-hanging-fruit maintenance fixes that users are begging Microsoft for and that the AI agents are not fixing. AI was going to 10x engineers or something, right? Why isn't GitHub getting better with all this AI help?
Isn't this SOP of Microsoft since forever? Tons of papercuts which really hurt, and tons of features nobody wants?
I think this is the natural outcome of "chasing points" mechanic inside Microsoft.
> So frustrating that Github doesn't support such an obvious workflow.
It kind of does.
I used this a lot in several jobs to work on dependent tickets in advance. Just make another branch on top of the previous one (a PR into the other PR's branch).
People could review the child PR before the parent was merged. It requires some less-than-trivial git knowledge to manage effectively, but nothing extraordinary. Any solution for stacked PRs based on git would require the same (or a custom tool).
I think I'm on their side on this one. From git perspective, it works just as I expect. Something else probably belongs to JIRA or project management instead.
That feels like the opposite of what I think stacked PRs are? Like someone will open PR #1 for one feature, and then PR #2 into the PR #1 branch, but it doesn't make sense without knowing the context of PR #1 so that gets reviewed first - and then when that PR gets merged, the second one gets automatically closed by GitHub?
PR#1: dough
PR#2: toppings
You first send PR#1, then PR#2 on top of the first one.
The diff for PR#1 will show dough stuff. The diff for PR#2 will show toppings in relation to dough.
People can review them asynchronously. If you merge PR#1, PR#2 will automatically target main (that's where dough went) now.
In this arrangement, I use to cross-mention the PRs by number (a link will exist in both). I also like to keep the second one draft, but that depends on the team practices.
I don't understand why you would close the second PR when the first gets merged. It should lose the dependency automagically, which is exactly what happens if you branch correctly.
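The branch layout behind that example, as a quick sketch:

    git switch -c dough main        # PR#1: dough -> main
    # ...commit dough work, open PR#1...
    git switch -c toppings dough    # PR#2: toppings -> dough
    # ...commit toppings work, open PR#2 targeting the dough branch...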
Look, how is number to go up without constant AI bullshit? Won't somebody think of the shareholders!
MS in particular _really_ seems to be sacrificing itself on the altar of Roko's Basilisk; they appear totally incapable of doing _anything_ that isn't AI-branded anymore.
Wait I thought they left because Github software engineers are "monkeys".
In other news today, Bun, which is one of the biggest projects written in Zig, joined Anthropic, the company behind Claude Code, and has nothing but kind words to say about AI. If Zig becomes ever more hostile to AI, I wonder if there may be some "friction" there.
Why would zig care that a project written in zig is used for AI?
Usually programming languages need a killer project to sell themselves, instead of being something only language nerds play with; Bun was one such project.
The article is very hard to read, with ads on one side and links in every other sentence. I could not even figure out where Zig has gone... TL;DR anyone?
Edit: Scrolling the comments, I see something called Codeberg, but why am I getting connection refused?
Another edit: Oh, because Codeberg is down. I had to look at another thread on the front page to find that out...
Zig is distributed under the MIT License. MS is completely within their rights to clone the git repository from Codeberg and do whatever they want with the source code, including feeding it to their AI algorithms. Moving to Codeberg doesn't really fix that. I get that some people want to restrict what people can do with source code (including using it for capitalist purposes or indeed AI/machine learning). But the whole point of many open source licenses (and especially the MIT license) is actually the opposite: allowing people to do whatever they want with the source code.
The Zig attitude towards AI usage is a bit odd in my view. I don't think it's that widely shared. But good for them if they feel strongly about that.
I'm kind of intrigued by Codeberg. I had never heard of it until a few days ago, and it seems it's based in Berlin, where I live. I don't think I would want to use it for commercial projects, but it looks fine for open source things. Though I do have questions about the funding model: running all this on donations seems like it could have some issues long term for more serious projects. Moving OSS communities around can be kind of disruptive. And it probably rules out commercial usage.
This whole "GitHub is evil" anti-capitalist stance is IMHO a bit out of place. I'm fine with diversity and having more players in the market, though; that's a good thing. But many of the replacements are also for-profit companies, which is probably why many people are a bit disillusioned with e.g. GitLab. Codeberg seems structured to be more resilient against that.
Otherwise, Github remains good value and I'm getting a lot of value out of for profit AI companies providing me with stuff that was clearly trained on the body of work stored inside of it. I'm even paying for that. I think it's cool that this is now possible.
> Zig is distributed under the MIT License. MS is completely with in their rights to clone the git repository from Codeberg and do whatever with the source code. Including feeding it to their AI algorithms.
MIT license requires attribution, which AI algorithms don’t provide AFAIK. So either (a) it’s fair use and MS can do that regardless of the license or (b) MS can’t do that. In any case, yeah, that’s not the issue Zig folks have with GitHub.
> Zig is distributed under the MIT License. MS is completely with in their rights to clone the git repository from Codeberg and do whatever with the source code. Including feeding it to their AI algorithms. Moving it to Codeberg doesn't really fix that. I get that some people want to restrict what people can do with source code (including using it for capitalist purposes or indeed ai/machine learning). But the whole point of many open source licenses (and especially the MIT license) is actually the opposite: allowing people to do whatever they want with the source code.
MS training AIs on Zig isn't their complaint here. They're saying that Github has become a worse service because MS aren't working on the fundamentals any more and just chasing the AI dream, and trying to get AI to write code for it is having bad results.