Let's Take Esoteric Programming Languages Seriously
feelingof.com
85 points by strombolini 5 days ago
OT but I couldn't stop laughing at the very first sentence of the transcript:
> One of the biggest goals of this show — our raisin detour, if you will...
This is as far as I got too; it sent me off on a tangent about whether this is a common misuse or whether it was caused by something else, like auto-captioning.
This is the description of the episode, not a transcript. And 'raisin detour' is fairly obviously a joke.
I find it amusing, yet sad, that some here expect a podcast to exclusively be a source of information where every second delivers bite-sized facts. What about entertainment? What about engaging with a topic for hours and eventually learning something that's not a fact, but a new perspective?
I am halfway through; it's been mostly banter so far. The criticisms they offer of the paper and ChatGPT apply just as well to their own podcast, which provides a semi-interesting meta-analysis but has not offered much in the way of knowledge, entertainment, or perspective. It is fairly insufferable if you don't share their sense of humor and interest in being random.
Forgive my ignorance about AI, but has anyone tried a "nondeterministic" language that somehow uses learning to approximate the answer? I'm not talking about the current cycle where you train your model on zillions of inputs, tune it, and release it. I mean a language where you tell it what a valid output looks like, deploy it, and let it learn as it runs.
Ex: my car's heater doesn't work the moment you turn it on. So if I enter the car, one of my first tasks is to turn the blower down to 0 until the motor warms up. A learning language could be used here, given free rein over all the (non-safety-critical) controls, and told that its job is to minimize the number of "corrections" made by the user. Eventually its reward would come from initializing the fan blower to 0, but it might take 100 cycles to learn this. Rather than train it on a GPU, the language could express the reward and allow it to learn over time, even though its output would be "wrong" quite often.
That's an esoteric language I'd like to see.
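A minimal sketch in Python of the kind of online, reward-driven loop described above; the blower levels, the epsilon value, and the observe_drive hook are all hypothetical, just to make the idea concrete:

    import random

    # Hypothetical sketch: the "program" only states the reward
    # (minimize user corrections); the value is learned across drives,
    # not trained offline on a GPU.
    FAN_LEVELS = list(range(8))              # possible initial blower settings
    cost = {lvl: 0.0 for lvl in FAN_LEVELS}  # estimated corrections per setting
    seen = {lvl: 0 for lvl in FAN_LEVELS}
    EPSILON = 0.1                            # occasionally explore another setting

    def choose_initial_fan():
        if random.random() < EPSILON:
            return random.choice(FAN_LEVELS)
        return min(cost, key=cost.get)       # lowest expected corrections so far

    def observe_drive(level, corrections_made):
        # Incremental average of how often the user corrected this setting.
        seen[level] += 1
        cost[level] += (corrections_made - cost[level]) / seen[level]

After enough cold-morning cycles, choose_initial_fan() would settle on 0 for this car, with no offline training step, which is roughly the behaviour described above.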
Wouldn't this be an optimization problem, that is to say, something that z3 should be able to do [1], [2]?
I was about to suggest probabilistic programming, e.g., PyMC [3], as well, but it looks like you want the optimization to occur autonomously after you've specified the problem - which is different from the program drawing insights from organically accumulated data.
[1] https://github.com/Z3Prover/z3?tab=readme-ov-file
[2] https://microsoft.github.io/z3guide/programming/Z3%20Python%...
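For what it's worth, z3's Python bindings do expose an Optimize object. Here is a toy sketch of the declarative route, where you state constraints and a cost and let the solver minimize it; the cost model is invented purely for illustration and is not the online-learning setup described above:

    from z3 import Int, Optimize, sat

    fan = Int('fan')
    penalty = Int('penalty')

    opt = Optimize()
    opt.add(fan >= 0, fan <= 7)       # valid blower settings
    opt.add(penalty == fan * 10)      # assumed cost: corrections grow with initial speed
    opt.minimize(penalty)

    if opt.check() == sat:
        print(opt.model()[fan])       # prints 0, the minimizing setting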
Fractran is great for emulating quantum computers on classical hardware.
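For readers who haven't met it: a Fractran program is just an ordered list of fractions, and execution repeatedly multiplies the current integer by the first fraction that yields an integer, halting when none does. A minimal interpreter sketch in Python:

    from fractions import Fraction

    def run_fractran(program, n, max_steps=10_000):
        # Repeatedly apply the first fraction that keeps n an integer; halt otherwise.
        fracs = [Fraction(p, q) for p, q in program]
        for _ in range(max_steps):
            for f in fracs:
                if (n * f).denominator == 1:
                    n = int(n * f)
                    break
            else:
                break
        return n

    # The one-fraction program [3/2] turns 2^a * 3^b into 3^(a+b),
    # i.e. it adds in the exponents.
    print(run_fractran([(3, 2)], 2**3 * 3**4))   # 2187 == 3**7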
Yes. This is a very good podcast. Give it a chance.
I'm sorry, it's a really inefficient format. I don't want to sit and listen for two hours to what would most likely be half an hour of content if read. Just write down what you have to say already!
I guess you could do double-speed, but I find that somehow stressful.
Edit: I just read the paper. It took me 21 minutes. It's not long, only 11 pages.
For me, podcasts are useful for learning while I drive. They are also useful for refreshing my recollection.
Finally, they are useful for synthesis: a podcast can talk about tenuously related topics that would not usually be appropriate for an academic paper; use analogies, metaphors, and similes; and simply go off topic and discuss other interesting ideas that turn out to be more applicable than the formal subject.
But again that’s for me, not someone else.
I don't particularly like the podcast format either, but it's not inherently less efficient. You can potentially do other tasks while listening to one which would be difficult while reading. I personally find it difficult to concentrate on the content of the podcast when I do this (I don't take in information well from auditory sources), but others don't (and some actually find it hard to remember things they read).
I sympathize, but just happened to listen to this episode over several days. The discussion actually adds a lot to the paper, and they seem very qualified to critique it. One of the guests(?) has written several esolangs. There must be a way to generate a transcript.
Slight spoiler: they have lots of criticisms of the paper.
That's Lu, one of the regular hosts now. All very bright and interesting people, different from each other. I think only Jimmy has a formal CS education, but he'll talk as much about philosophy sometimes.
Also, the show notes link to a paper they talk about that they like much better.
I really enjoy listening to people talk about things. I get the same enjoyment out of talk radio and any news radio that is editorialized. I enjoy lots of shows on the various NPR member stations.
This format isn't inefficient; you're just judging it based on a goal different from the one it has.
Could you explain what you like about it? I feel like I'm missing something. I've listened to half an hour now and there have been maybe five minutes of substance; the rest is self-references and jarring editing.
If I listen to a podcast I want to learn something, gain a new perspective, listen to a well-moderated conversation or at least laugh.
This podcast does none of those things. Literally doing nothing and letting my thoughts wander is more interesting than listening to this.
I agree with this. This is a remarkably bad podcast, and also a pretty bad paper to focus on. Since the podcast was quite bad, I just read the paper, and it was about nothing at all.
Like, it's basically a blog post that muses about a couple of examples pulled at random from the esolang wiki and has literally no point, besides a prescriptive one. Formatted as a paper, which I admit takes some skill.