Show HN: I built a tiny LLM to demystify how language models work

github.com

591 points by armanified 12 hours ago


Built a ~9M param LLM from scratch to understand how they actually work. Vanilla transformer, 60K synthetic conversations, ~130 lines of PyTorch. Trains in 5 min on a free Colab T4. The fish thinks the meaning of life is food.

Fork it and swap the personality for your own character.
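
For anyone curious what "from scratch" boils down to at inference time, the autoregressive sampling loop is only a few lines. Here is a hedged sketch in plain Python: the "model" below is a stand-in bigram lookup table with a made-up fish vocabulary, not the project's actual transformer, which conditions on the whole context window.

```python
import math
import random

random.seed(0)

# Stand-in "model": maps only the last token to logits over a tiny
# hypothetical vocab. A real transformer scores the full context.
vocab = ["<s>", "blub", "food", "shiny", "."]
logits_table = {
    "<s>":   [0.0, 2.0, 1.0, 1.0, -2.0],
    "blub":  [0.0, 0.5, 2.0, 0.5,  1.0],
    "food":  [0.0, 0.5, 0.5, 0.5,  2.0],
    "shiny": [0.0, 1.0, 1.5, 0.5,  1.0],
    ".":     [0.0, 1.0, 1.0, 1.0, -1.0],
}

def softmax(xs, temperature=1.0):
    """Turn logits into a probability distribution."""
    exps = [math.exp(x / temperature) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def generate(max_tokens=10, temperature=0.8):
    """Sample tokens one at a time, feeding each back in as context."""
    out = ["<s>"]
    for _ in range(max_tokens):
        probs = softmax(logits_table[out[-1]], temperature)
        out.append(random.choices(vocab, weights=probs)[0])
        if out[-1] == ".":  # stop at end-of-sentence
            break
    return " ".join(out[1:])

print(generate())
```

The same loop drives models of any size; only the function producing the logits changes.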

fg137 - 2 hours ago

How does this compare to Andrej Karpathy's microgpt (https://karpathy.github.io/2026/02/12/microgpt/) or minGPT (https://github.com/karpathy/minGPT)?

totetsu - 3 hours ago

https://bbycroft.net/llm has a 3D visualization of the layers of a tiny example LLM that does a very good job of showing what is going on (https://news.ycombinator.com/item?id=38505211)

algoth1 - an hour ago

This really makes me wonder whether it would be feasible to train an LLM exclusively on toki pona (https://en.wikipedia.org/wiki/Toki_Pona)

ordinarily - 10 hours ago

It's genuinely a great introduction to LLMs. I built my own a while ago, based on Milton's Paradise Lost: https://www.wvrk.org/works/milton

mudkipdev - 7 hours ago

This is probably a consequence of the training data being fully lowercase:

You> hello
Guppy> hi. did you bring micro pellets.

You> HELLO
Guppy> i don't know what it means but it's mine.
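
A quick way to see why all-lowercase training data causes this: if the vocabulary is built only from lowercase text, uppercase characters never get an entry and fall back to an unknown token. A minimal sketch in plain Python (the corpus and the character-level scheme here are illustrative assumptions, not the project's actual tokenizer):

```python
# Build a character-level vocab from an all-lowercase corpus,
# then encode inputs; characters never seen in training map to
# a reserved <unk> id.
corpus = "hello. did you bring micro pellets."

vocab = {ch: i for i, ch in enumerate(sorted(set(corpus)))}
UNK = len(vocab)  # id reserved for unknown characters

def encode(text):
    return [vocab.get(ch, UNK) for ch in text]

print(encode("hello"))  # every character is in-vocab
print(encode("HELLO"))  # every character maps to the <unk> id
```

From the model's point of view, "HELLO" is just five copies of the same meaningless token, which matches Guppy's "i don't know what it means" reply.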

hackerman70000 - 4 hours ago

Finally an LLM that's honest about its world model. "The meaning of life is food" is arguably less wrong than what you get from models 10,000x larger

drincanngao - an hour ago

I was going to suggest implementing RoPE to fix the context limit, but realized that would make it anatomically incorrect.

zwaps - 7 hours ago

I like the idea; my only reservation is that the examples are reproduced from the training data set.

How does it handle unknown queries?

Duplicake - 2 hours ago

I love this! Seems like it can't understand uppercase letters though

bblb - 4 hours ago

Would it be possible to train an LLM only through chat messages, without any other data or input?

If Guppy doesn't know regular expressions yet, could I teach them to it just by conversation? It's a fish, so it probably wouldn't understand much of my blabbing, but it would be interesting to give it a try.

Or is there some hard architectural limit in current LLMs that means training has to be done offline with a fairly large training set?

amelius - an hour ago

> A 9M model can't conditionally follow instructions

How many parameters would you need for that?

cbdevidal - 9 hours ago

> you're my favorite big shape. my mouth are happy when you're here.

Laughed loudly :-D

fawabc - 2 hours ago

how did you generate the synthetic data?

ankitsanghi - 7 hours ago

Love it! I think it's important to understand how the tools we use (and will only increasingly use) work under the hood.

ben8bit - 4 hours ago

This is really great! I've been wanting to do something similar for a while.

brcmthrowaway - 7 hours ago

Why are there so many dead comments from new accounts?

ananandreas - 2 hours ago

Great and simple way to bridge the gap between LLMs and users coming into the field!

kaipereira - 7 hours ago

This is so cool! I'd love to see a write-up on how you made it and what you referenced, because designing neural networks always feels like a maze ;)

kubrador - 7 hours ago

how's it handle longer context, or does it start hallucinating after like 2 sentences? curious what the ceiling is with just 9M params

gnarlouse - 9 hours ago

I... wow, you made an LLM that can actually tell jokes?

gdzie-jest-sol - 4 hours ago

* How was the dataset created? I downloaded it, but it is compressed in a binary format.

* How was the training done, in the cloud or on your own dev machine?

* How do you create a GGUF?

NyxVox - 9 hours ago

Hm, I can actually try the training on my GPU. One of the things I want to try next. Maybe a bit more complex than a fish :)

rclkrtrzckr - 6 hours ago

I could fork it and create TrumpLM. Not a big leap, I suppose.

SilentM68 - 10 hours ago

Would have been funny if it were called "DORY", given the fish's memory-recall issues vs. LLMs' similar recall issues :)

cpldcpu - 5 hours ago

Love it! Great idea for the dataset.

monksy - 6 hours ago

Is this a reference from the Bobiverse?

AndrewKemendo - 11 hours ago

I love these kinds of educational implementations.

I want to really praise the (unintentional?) nod to Nagel: by limiting its capabilities to the representation of a fish, the user is immediately able to understand the constraints. It can only talk like a fish because it's very simple.

Especially compared to public models, that's a really simple correspondence to grok intuitively (small LLM → only as verbose as a fish, larger LLM → more verbose), so kudos to the author for making that simple and fun.

nullbyte808 - 11 hours ago

Adorable! Maybe a personality that speaks in emojis?

Elengal - 3 hours ago

Cool

oyebenny - 7 hours ago

Neat!


dinkumthinkum - 8 hours ago

I think this is a nice project because it is end to end and serves its goal well. Good job! It's a good example of how someone might do something similar for a specific purpose. There are other visualizers that explain different aspects of LLMs, but this is a good applied example.

martmulx - 9 hours ago

How much training data did you end up needing for the fish personality to feel coherent? Curious what the minimum viable dataset looks like for something like this.

Propelloni - 4 hours ago

Great work! I still think that [1] does a better job of helping us understand how GPTs and LLMs work, but yours is funnier.

Then, some criticism. I probably don't get it, but I think the HN headline does your project a disservice. Your project does not demystify anything (see below), and it diverges from your project's claim, too. Furthermore, I think you claim too much on your GitHub: "This project exists to show that training your own language model is not magic", and then you just post a few command-line statements to execute. Yeah, running a mail server is not magic either, just apt-get install exim4. So, code. Looking at train_guppylm.ipynb and, oh, it's PyTorch again. I'm better off reading [2] if I'm looking into that (I know, it is a published book, but I maintain my point).

So, in short, it helps neither the initiated nor the uninitiated. The initiated need more detail for it to be useful, the uninitiated more context for it to be understood. Still a fun project, even if oversold.

[1] https://spreadsheets-are-all-you-need.ai/

[2] https://github.com/rasbt/LLMs-from-scratch

areys - 17 minutes ago

The constraint-driven approach here is what makes it actually useful as a learning tool. When you're working with ~130 lines of PyTorch, you can't hide behind abstractions — every design choice has to be explicit and intentional.

Curious: did implementing attention from scratch change how you think about the "key/query/value" intuition that gets used in most explanations? That's usually where the hand-waving happens in tutorials.
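
For readers following this sub-thread, the "soft dictionary lookup" reading of key/query/value can be shown in a few lines. The sketch below is a single-head scaled dot-product attention with a causal mask in plain Python; the Q, K, V matrices are arbitrary toy numbers, not weights from the project.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def causal_attention(Q, K, V):
    """Single-head scaled dot-product attention with a causal mask.

    Each position's query is compared against the keys of positions
    up to and including itself; the softmaxed scores blend that
    prefix's values, i.e. a differentiable, soft dictionary lookup.
    """
    d = len(Q[0])
    out = []
    for i, q in enumerate(Q):
        # Similarity scores against keys at positions 0..i only.
        scores = [sum(qj * kj for qj, kj in zip(q, K[t])) / math.sqrt(d)
                  for t in range(i + 1)]
        weights = softmax(scores)
        # Weighted sum of the visible values.
        out.append([sum(w * V[t][j] for t, w in enumerate(weights))
                    for j in range(len(V[0]))])
    return out

# Toy example: 3 positions, 2-dim heads (numbers are arbitrary).
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

out = causal_attention(Q, K, V)
# Position 0 can only attend to itself, so its output equals V[0].
print(out[0])  # → [1.0, 2.0]
```

The causal mask is what makes the model autoregressive: position i only ever "looks up" values from positions 0..i, which is also why position 0's output is exactly V[0].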