Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

github.com

101 points by bsgeraci 10 hours ago


I'm a software engineer who keeps getting pulled into DevOps no matter how hard I try to escape it. I recently moved into a Lead DevOps Engineer role writing tooling to automate a lot of the pain away. On my own time outside of work, I built Artifact Keeper — a self-hosted artifact registry that supports 45+ package formats. Security scanning, SSO, replication, WASM plugins — it's all in the MIT-licensed release. No enterprise tier. No feature gates. No surprise invoices.

Your package managers — pip, npm, docker, cargo, helm, go, all of them — talk directly to it using their native protocols. Security scanning with Trivy, Grype, and OpenSCAP is built in, with a policy engine that can quarantine bad artifacts before they hit your builds. And if you need a format it doesn't support yet, there's a WASM plugin system so you can add your own without forking the backend.
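To give a rough sense of the plugin surface, here's the kind of interface a format plugin implements. This is an illustrative sketch, not the project's actual plugin ABI:

  // Illustrative only: a hypothetical shape for a format plugin,
  // compiled to WASM and loaded by the host at runtime.
  pub struct ArtifactMeta {
      pub name: String,
      pub version: String,
      pub dependencies: Vec<String>,
  }

  pub trait FormatPlugin {
      // Identifier the registry routes requests by, e.g. "conda" or "vcpkg".
      fn format_name(&self) -> &'static str;
      // Pull name/version/dependency metadata out of an uploaded file.
      fn parse_metadata(&self, artifact: &[u8]) -> Result<ArtifactMeta, String>;
      // Produce the index document the package manager expects to fetch.
      fn render_index(&self, artifacts: &[ArtifactMeta]) -> Vec<u8>;
  }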

Why I built it:

Part of what pulled me into computers in the first place was open source. I grew up poor in New Orleans, and the only hardware I had access to in the early 2000s was some Compaq Pentium IIs my dad brought home when his work was tossing them out. I put Linux on them, and it ran circles around Windows 2000 and Millennium on that low-end hardware. That experience taught me that the best software is open for everyone to see and use, and actually runs well on whatever you've got.

Fast forward to today, and I see the same pattern everywhere: GitLab, JFrog, Harbor, and others ship a limited "community" edition and then hide the features teams actually need behind some paywall. I get it — paychecks have to come from somewhere. But I wanted to prove that a fully-featured artifact registry could exist as genuinely open-source software. Every feature. No exceptions.

The specific features came from real pain points. Artifactory's search is painfully slow — that's why I integrated Meilisearch. Security scanning that doesn't require a separate enterprise license was another big one. And I wanted replication that didn't need a central coordinator — so I built a peer mesh where any node can replicate to any other node. I haven't deployed this at work yet — right now I'm running it at home for my personal projects — but I'd love to see it tested at scale, and that's a big part of why I'm sharing it here.
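To make the "no central coordinator" part concrete: every node keeps its own peer list and pushes new artifacts to those peers in chunks, so any node can seed any other. Here's a deliberately simplified sketch of that fan-out (the endpoint path, chunk size, and types are made up for illustration, not the real wire protocol):

  // Simplified, hypothetical sketch of peer-to-peer push replication.
  // There is no central coordinator deciding who replicates to whom;
  // each node runs this against its own peer list.
  use anyhow::Result;

  struct Peer {
      base_url: String, // e.g. "https://node-b.internal:30080"
  }

  async fn replicate_to_peers(peers: &[Peer], artifact_id: &str, bytes: &[u8]) -> Result<()> {
      const CHUNK_SIZE: usize = 8 * 1024 * 1024; // illustrative chunk size
      let client = reqwest::Client::new();
      for peer in peers {
          for (index, chunk) in bytes.chunks(CHUNK_SIZE).enumerate() {
              client
                  .put(format!("{}/replicate/{}/chunks/{}", peer.base_url, artifact_id, index))
                  .body(chunk.to_vec())
                  .send()
                  .await?
                  .error_for_status()?;
          }
      }
      Ok(())
  }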

The AI story (I'm going to be honest about this):

I built this in about three weeks using Claude Code. I know a lot of you will say this is probably vibe coding garbage — but if that's the case, it's an impressive pile of vibe coding garbage. Go look at the codebase. The backend is ~80% Rust with 429 unit tests, 33 PostgreSQL migrations, a layered architecture, and a full CI/CD pipeline with E2E tests, stress testing, and failure injection.

AI didn't make the design decisions for me. I still had to design the WASM plugin system, figure out how the scanning engines complement each other, and architect the mesh replication. Years of domain knowledge drove the design — AI just let me build it way faster. I'm floored at what these tools make possible for a tinkerer and security nerd like me.

Tech stack: Rust on Axum, PostgreSQL 16, Meilisearch, Trivy + Grype + OpenSCAP, Wasmtime WASM plugins (hot-reloadable), mesh replication with chunked transfers. Frontend is Next.js 15 plus native Swift (iOS/macOS) and Kotlin (Android) apps. OpenAPI 3.1 spec with auto-generated TypeScript and Rust SDKs.
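Hot-reloading of the plugins is conceptually simple: watch the plugin file, recompile the module when it changes, and swap it in for new requests. A bare-bones sketch with Wasmtime (the crate is real, but this is not the project's actual loader, and the exported "format_count" function is made up):

  use wasmtime::{Engine, Instance, Module, Store};

  // Recompile the module from disk; call this again from a filesystem
  // watcher to hot-swap the plugin without restarting the server.
  fn load_plugin(engine: &Engine, path: &str) -> anyhow::Result<Module> {
      Module::from_file(engine, path)
  }

  // Instantiate the module and call a (hypothetical) exported function.
  fn call_plugin(engine: &Engine, module: &Module) -> anyhow::Result<i32> {
      let mut store = Store::new(engine, ());
      let instance = Instance::new(&mut store, module, &[])?;
      let f = instance.get_typed_func::<(), i32>(&mut store, "format_count")?;
      Ok(f.call(&mut store, ())?)
  }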

Try it:

  git clone https://github.com/artifact-keeper/artifact-keeper.git
  cd artifact-keeper
  docker compose up -d
Then visit http://localhost:30080

Live demo: https://demo.artifactkeeper.com
Docs: https://artifactkeeper.com/docs/

I'd love any feedback — what you think of the approach, what you'd want to see, what you hate about Artifactory or Nexus that you wish someone would just fix. It doesn't have to be a PR. Open an issue, start a discussion, or just tell me here.

https://github.com/artifact-keeper

antonyh - 7 hours ago

I appreciate the honesty about using Claude and the time it took to build this, and it shows how things can look when guided by someone who knows what they are doing.

On the other hand, it also shows that it took three weeks, so why should I use this instead of building a custom toolchain myself, optimised for what I actually need and use, trimming the 45+ formats down to the 5 or so that matter to my project? It raises the question - is 'enterprise' software doomed in favour of a proliferation of custom-built services where everybody has something unique, or is the real value in the 'support' packages and SLAs? Will devs adopt this and put 'Artifact Keeper' on their CV, or will they put 'built an artifact toolchain with Claude'?

But then again, kudos to you for building something that can (and probably should) eat the lunch of the enterprise-grade tools that are simply unaffordable to small businesses, individual contractors, and underfunded teams. Truth be told, I'm not going to build my own, so this is certainly something I want to put in a sandbox and try out. It's also inspirational, and may finally convince me to give Claude a fair go if it's capable of being guided to create high-quality output.

stroebs - 8 hours ago

I’m a fairly heavy user of the JFrog platform with Enterprise+, Xray, their new Curation license, and my org is spending in excess of $500k/year on Artifact storage. Not including my time babysitting it. I’d love to see the end of it, and I hope you manage to build a community around this.

Part of the reason we pay the big license fee is so we have someone to turn to when it inevitably breaks because we’ve used it in a way nobody has before. In Jan last year we were using 30TB of artifact storage in S3. That’s 140TB today.

Where do you get your CVE data? Would built artifacts have their CVEs updated after the fact? Do you have blocking policies on artifacts based on CVEs, licenses, artifact age, etc?

no_circuit - 2 hours ago

Impressive-looking project generated with AI help. I have similar goals of building an artifact system myself.

I think the approach of multi-format, multi-UI, and a new (to you) programming language isn't optimal even with AI help. Any mistake made in the API design or internal architecture will cost time and money, since everything will need to be refactored and retested.

The approach I'm trying to take for my own projects is to create a polished vertical slice and then ask the AI to replicate it for other formats / vertical slices. Are there any immediate use cases that even justify building and maintaining a UI?

So a few comments on the code:

- the feature list claims rate limiting, but the code seems unused other than in unit tests... if so, why wasn't this detected as dead code?

- should probably follow the Google/Buf style guides for protos and their directory structure

- besides protos, we probably need to rely more on the OpenAPI spec for code generation to save on AI costs; it looks like the OpenAPI spec was only used as task input for the AI?

- if the AI isn't writing a Postgres replacement for us, why have it write anything to do with auth either? Perhaps provide setup instructions for something like Keycloak or Ory instead?

nullocator - an hour ago

I see that this supports WASM plugins, which is a neat feature. Have you considered adding support for WASM plugins stored as OCI images, potentially in the registry itself? I looked at the documentation and it didn't seem like this was an option.

the_harpia_io - 2 hours ago

The Trivy + Grype combo is interesting - in my experience they catch different things, especially on container scanning vs dependencies. You see them disagree much on severity?

Re: the vibe coding angle - the thing I keep running into is that standard scanners are tuned for human-written code patterns. Claude code is structurally different. More verbose, weirdly sparse on the explicit error handling that would normally trigger SAST rules. Auth code especially - it looks textbook correct and passes static analysis fine, but edge cases are where it falls apart. Token validation that works great except for malformed inputs, auth checks that miss specific header combinations, that kind of thing.

The policy engine sounds flexible enough that people could add custom rules for AI-specific patterns? That'd be the killer feature tbh.

kamma4434 - 7 hours ago

I have been looking for ways to only use local packages for our software builds. I am looking for something that can act as a local cache for Java and NPM packages. The idea would be that developers can only use packages belonging to the allowed set for development, and there is a vetting process where packages are added to the allowed set (or removed).

I have been playing with the idea of using a single git repository to host them: Java packages as an Ivy repository, and JavaScript packages as simply the contents of node_modules.

Does anybody do something similar?

figmert - 6 hours ago

I've been wanting something like this that isn't Artifactory (I've run it at previous companies; it's not a great experience), so I had been thinking of doing it myself, but never bothered. One idea I had was to write a proxy that essentially translates the various package manager endpoints into OCI registry calls, so that everything gets stored on any OCI backend. My thinking was that this way you could in theory use any OCI backend (including readily available, battle-tested self-hosted applications), and the proxy would never need its own state, thus making it (hopefully) easier to run.
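To illustrate the kind of mapping I mean (a toy sketch, nothing I've actually built; the digest lookup is the part a real proxy would have to maintain):

  // Toy illustration: map a package-manager download onto an OCI blob path,
  // so any OCI registry can serve as the storage backend. The proxy would
  // still need to resolve package+version to a content digest somewhere.
  fn npm_tarball_to_oci_blob(oci_registry: &str, package: &str, digest: &str) -> String {
      // OCI distribution spec: GET /v2/<name>/blobs/<digest>
      format!("{}/v2/npm/{}/blobs/{}", oci_registry, package, digest)
  }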

Now that you've implemented it, was there a reason you didn't go for an approach like that, so there would be less to worry about for someone hosting something like this?

visualphoenix - 4 hours ago

Can this do a 302 redirect to S3? One neat feature of Artifactory Edge is that asset downloads can skip hitting the edge peer and go straight to S3.

It would be cool if this could also support the existing Artifactory S3 backend format, so you could just point it at your existing Artifactory S3 bucket and migrate your DB over.

Congrats on launching!

cadamsdotcom - 2 hours ago

Mad props for building with Claude Code while still doing thoughtful design, and for using tests to take yourself out of the loop while architecting the important bits yourself.

These tools can’t architect clean solutions that cut out massive chunks of code, and they can’t talk to users and decide whether what they’re building makes sense. For that, we need a human touch.

But coding agents grant insane leverage if they’re just told when they got it wrong and given a chance to get it right.

jurgenburgen - 5 hours ago

> Security scanning, SSO, replication, WASM plugins — it's all in the MIT-licensed release. No enterprise tier. No feature gates. No surprise invoices.

I think it's cool that the OSS version has everything, but I hope you're considering adding an actual enterprise tier for paid support, because in my experience that's the killer feature large enterprises care about.

If your OSS service becomes mission-critical (which an artifact repository usually is), a large org will have to invest in a team that can operate and own it anyway.

If throwing some money at the vendor takes away some of the responsibility (= less time spent by in-house team on ops) then paying for an enterprise support SLA is a feature, not a bug.

It would be great to see more competition in this space, even though my current team isn't working on this problem!

seabass-salmon - 2 hours ago

Long-term Nexus custodian here. Last year's licence rugpull by Sonatype had me thinking the same. I particularly loathe their new front-page "malware" warning saying you have to contact them to find out what it is. Sure.

I've read the main README, so excuse me if these are covered already, but key features and/or opportunities:

- Backend support for Azure (Nexus has this under Pro, though the community edition at least supports S3)
- A clear, navigable S3 structure that a human could sort through if needed, like the on-disk backend of Nexus 2 used to have, not Nexus' current organisation/obfuscation (which would be understandable but for...)
- Maintenance routines that actually work (Nexus' are a joke, with very limited cleanup features and a task set that leaves ever-growing detritus)
- Automatically taking the latest from upstreams is a big problem in the npm world; this would be a perfect fit for introducing staging concepts and a window on upstream (proxied) repos
- RESTful APIs and deep links to artifacts for ease of integration
- We end up proxying other sources of files in a web proxy since there's no easy "pass through" via Nexus where we don't want to copy the current files into our DB or S3 but just want to pass the latest to the consumer; a direct proxy feature with URL remapping would be cool

Things I'd have to play around with to understand what it does currently:

- Whether it has proper proxy and group support; composition is completely essential
- Whether the caching there is sensible (Nexus does a poor job when bad states get cached, though it's a hard problem)
- Efficient (Maven) metadata generation (Nexus is abysmally slow)
- Whether RBAC is clear over the repo structures (Nexus does OK here, except everything is repo-level AND the initial setup is very painful)
- P2 consumption looks to be a supported format, but P2 hosting I think was nerfed after Nexus v2.11 and some clients still use that
- RPMs are added ("yum" in Nexus), but as with repo hierarchies I'd need to be assured they can be nested and will correctly produce merged repomd.xml and the like so they function properly

Other comments:

- Having the security scanning in an open-source tool would be amazing
- It would be very hard to get clients to trust this without either a community and review process or a company (that "can be sued") behind it. I know it's very early days, but it's a bit chicken-and-egg: if I can't use this with clients, I wouldn't use it for anything. Not that I am a valuable customer by myself, but I influence clients' decisions, and they then need that support

jamesvnz - 5 hours ago

Nice work. I'm building the same thing right now, partly because we need this and don't have the budget for Artifactory etc., and mainly to test out largely hands-free, agentic development.

imcritic - 3 hours ago

After reading the title, I had a glimmer of hope.

burakemir - 8 hours ago

Thanks for sharing.
