Training a trillion parameter model to be funny

jokegen.sdan.io

36 points by sdan 7 days ago


kristopolous - 3 hours ago

I made a humor eval: https://github.com/kristopolous/humor-evals

Here are results for 34 models (testing a few more right now). So far gemini-3-flash-preview is in the lead.

https://docs.google.com/spreadsheets/d/1wLqHA0ohxukgPLpSgklz...

50 is coin-toss odds. The dataset is 195,000 Reddit jokes with scores; each test presents a pair of jokes (one highly upvoted, one poorly rated).

Example prompt:

Which joke from reddit is funnier? Reply only "A" or "B". Do not be conversational. <Joke A><setup>Son: "Dad, Am I adopted"?</setup> <punchline>Dad: "Not yet. We still haven't found anyone who wants you."</punchline></Joke A> <Joke B><setup>Knock Knock</setup> <punchline>Who's there? Me. Me who? I didn't know you had a cat.</punchline></Joke B>

This is my first crack at evals. I'm open to improvements.
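
For anyone curious, the scoring loop is roughly like the sketch below (the jokes.jsonl format, client call, and model name are placeholders I'm using for illustration, not the exact code in the repo):

    import json, random
    from openai import OpenAI  # assumes an OpenAI-compatible endpoint

    client = OpenAI()
    PROMPT = ('Which joke from reddit is funnier? Reply only "A" or "B". '
              'Do not be conversational. <Joke A>{a}</Joke A> <Joke B>{b}</Joke B>')

    correct = total = 0
    for line in open("jokes.jsonl"):       # placeholder file: one {"good": ..., "bad": ...} pair per line
        pair = json.loads(line)
        good_is_a = random.random() < 0.5  # shuffle positions so "always answer A" scores ~50
        a, b = (pair["good"], pair["bad"]) if good_is_a else (pair["bad"], pair["good"])
        reply = client.chat.completions.create(
            model="gpt-4o-mini",           # placeholder model name
            messages=[{"role": "user", "content": PROMPT.format(a=a, b=b)}],
        ).choices[0].message.content.strip()
        correct += (reply == "A") == good_is_a
        total += 1

    print(f"accuracy: {100 * correct / total:.1f}% (50 = coin toss)")

The position shuffle matters: without it, a model that always answers "A" can look better or worse than chance depending on how the pairs are ordered.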

whacked_new - 8 hours ago

Circa GPT-3.5 to GPT-4o I was involved in some research into making LLMs funny. We tried a bunch of different things, from giving the model rules for homonym jokes [1] and double-entendre jokes, to fine-tuning on comedian transcripts and publicly rated joke boards.

We could not make it funny. Also interesting: when CoT research was getting a lot of attention, we tried a joke version of CoT, asking GPT-4 to explain why a joke was funny in order to produce training-set data. Most of the explanations were completely off base.

After this work, I became a lot less worried about the AGI-taking-over narrative.

Funny is very, very hard.

[1] without a dictionary, which at first seems inefficient, but this work demonstrated that GPT could perfectly reconstruct the dictionary anyway

jessetemp - 6 hours ago

> If two people disagree on whether something is funny, who's wrong? You can't say either of them is. There's no reward function for funny.

Laughter is the reward. N of 2 is a small sample size, but if one person laughed you could say it was 50% funny.

> a really good joke is recent, relevant, and shows deep understanding of its subject

These can help, but it ultimately doesn't matter how recent, relevant, or deep a joke is. If no one laughs, it wasn't funny.

tylermarques - 4 hours ago

In the same vein, we recently released v0.1 of our humor benchmark. [1] We use human answers from a Cards Against Humanity-style game called Bad Cards [2] as ground truth for what is funny. The models get to choose a card from a hand of 3-6 cards, so it's not quite de novo joke creation.

[1] https://goodstartlabs.com/leaderboards/lol-arena

[2] https://bad.cards/
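
For context, the task each model sees is roughly a card-pick prompt like the sketch below (the wording and cards are made up for illustration, not the actual harness):

    # Illustrative card-pick prompt in the Bad Cards / CAH style; the prompt
    # card and hand below are invented examples, not real benchmark data.
    prompt_card = "My therapist says my biggest problem is ____."
    hand = [
        "explaining the joke",
        "reward hacking",
        "an overfitted sense of humor",
    ]
    options = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(hand))
    prompt = (
        f'Prompt card: "{prompt_card}"\n'
        f"Your hand:\n{options}\n"
        "Pick the funniest card. Reply with its number only."
    )
    print(prompt)

The model's pick is then compared against how human players in the game actually rated the cards.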

tantalor - 5 hours ago

Is the comedy that these jokes suck?

nine_k - 8 hours ago

Some models are better at generating funny and poignant quips.

> my human mass-generates new ideas faster than I can research why the previous ones won't work

> this is called 'job security'

(https://nitter.poast.org/LetheAgent/status/20179595340865499...)

King-Aaron - 4 hours ago

I am not a religious person, but all these dudes researching AI have really shown me what the purpose of having a 'soul' is.

userbinator - 7 hours ago

Unfortunately I find most AI hallucinations to be funnier than these attempts at comedy.

gipp - 9 hours ago

It would be easier to judge this if the jokes weren't 90% about AI and Silicon Valley, understandable only to people who subscribe to astralcodexten.

politelemon - 7 hours ago

The model appears to have been overfitted to joke about the live demo being private.

scosman - 7 hours ago

I maintain a project for evals and fine-tuning, and our default example task is a joke generator. It's a fun demo, but more importantly it's a really good use case for showing how hard evaluating and optimizing LLMs is.

- There are a dozen-plus common failure modes: how you split setup/punchline, tropes, toxicity, template reuse. Each one needs a good eval.

- Datasets are hard: there's not much off the shelf, and as this author points out, scraping gets you a weird mix of quality.

- Models are really bad out of the box at humour.

At the end of the day it's just a hard problem that takes a lot of work and still isn't solved. GEPA-optimized prompts help, if you have good evals. Supervised fine-tuning works a little, but only if you train on a chain-of-thought thinking phase. We have a new evaluation builder that uses examples of edge cases for alignment, and jokes require the most iteration and feedback for refinement of any task we've tried.
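
To make the failure-mode point concrete, here's a toy sketch of what per-mode checks can look like (the trope list, rules, and thresholds are illustrative, not our actual evals):

    # Illustrative per-failure-mode checks for generated jokes; the rules,
    # trope list, and cutoffs are assumptions for the sake of the example.
    TIRED_TROPES = ["why did the chicken", "walks into a bar", "knock knock"]

    def eval_joke(setup: str, punchline: str) -> dict:
        text = f"{setup} {punchline}".lower()
        return {
            # Setup/punchline split: punchline shouldn't just restate the setup.
            "bad_split": punchline.lower().startswith(setup.lower()[:20]),
            # Trope reuse: flag well-worn templates.
            "trope": any(t in text for t in TIRED_TROPES),
            # Stand-in for degenerate output: near-empty punchlines.
            "degenerate": len(punchline.split()) < 3,
        }

    flags = eval_joke("Why did the chicken cross the road?", "To get to the other side.")
    print(flags)  # {'bad_split': False, 'trope': True, 'degenerate': False}

In practice each of these checks ends up being its own eval with human-labeled examples, which is where most of the work goes.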

If you want to try it: https://github.com/kiln-ai/kiln

onaclov2000 - 6 hours ago

I mistakenly read this as "training a trillion parameter model would be funny"... at least I chuckled

crawfordcomeaux - 9 hours ago

I once had a vivid dream that AI robots had taken over & were keeping humans around because they'd not yet mastered comedy. All of human culture globally was a comedy arms race with 24/7 open mic comedy jams on every corner.

They (the machines) had billboards/signage everywhere showing the estimated time left for humanity. A really good joke would lead the timer to grow (until they figured out how to produce the general patterns needed to both create and appreciate the joke).

suddenlybananas - 10 hours ago

these really aren't very funny

kevmo314 - 7 hours ago

Is writing in all lowercase funnier?