Why I Chose Common Lisp
blog.djhaskin.com | 322 points by djha-skin 20 hours ago
SBCL is a great choice! It's a surprisingly dynamic system (so are other CL implementations).
A while ago, I did some private work for someone, using SBCL, and sent her the native binary I built, then forgot about the whole thing.
The client came back with new requirements much later, after the project had concluded, and by then the source code had already been lost.
I vaguely remembered how I did things, so I spawned a REPL from the old binary, went into the relevant CL package, wrote a new function that overrides the old behavior, and the client's problem was solved.
I did all of that without the original source code. It was a small thing to fix, but I can't imagine making the same fix so swiftly with any other tech stack once the source code is lost. I was deeply impressed.
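In SBCL terms, that kind of session looks roughly like this. It's only a sketch: the package and function names below are made up, and it assumes the saved executable still drops into a REPL.

    ;; at the REPL of the old executable, no source files needed
    (in-package :client-app)           ; hypothetical package name
    (defun compute-report (orders)     ; hypothetical function being overridden
      "New definition; replaces the old one in the running image."
      (remove-if #'voided-p orders))   ; voided-p assumed to exist already in the image
    ;; optionally freeze the fix into a fresh binary
    (sb-ext:save-lisp-and-die "client-app-fixed" :executable t)

Redefining a function with defun at the REPL immediately replaces the old definition for all existing callers, which is what makes this kind of archaeology possible.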
That's pretty cool, but also, get version control
Version control won't help if the repo itself has been lost to the sands of time
Very true.
Get backups
Which the client might be responsible for because the contractor might not be allowed to retain the IP
You're "allowed". Just keep it to yourself.
You often will not be, explicitly by contract. Honestly in this case it’s easier to keep things clean.
If you do some work for someone and decide to keep the source you wrote, so you can review it later in the privacy of your own thoughts, the only way you can get "caught" is if you release it in some way that's recognizable.
It has been ages since I worked with Common Lisp, so I wonder: Would it be possible / make sense to have version control built into the Lisp image?
Possible: yes. A Lisp image can hold arbitrary data, including source code, versions, etc. Specifically, arbitrary data can be attached to symbols.
Make sense: no, unless you have very special requirements. It would be very different from common practice. Images are still much more fragile than files outside, they are not designed to last across implementation releases.
(Note: some implementations might hold full code of every function in memory, that's not unreasonable.)
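Concretely, the mechanism is just property lists. A minimal sketch (the keys and values here are made up, not any established convention):

    ;; hang ad-hoc version/source metadata off a symbol's plist;
    ;; it gets carried along in any saved image
    (defun greet (name)
      (format nil "Hello, ~a!" name))

    (setf (get 'greet :source-version) "1.2.3"
          (get 'greet :source-form)    '(defun greet (name)
                                          (format nil "Hello, ~a!" name)))

    (get 'greet :source-version)   ; => "1.2.3"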
That has been done in the past and was never widely supported. But the landscape is now slightly different. One approach might be to use ASDF (the most popular systems management tool), which then would need to be upgraded to use versions (and ideally have it integrated into a repository mechanism -> Git or similar... Also what are versions in a decentralized world?). ASDF operations then would need to be able to deal with version numbers.
A Lisp image would then know/track which versions of what code it has loaded/compiled/... Version information would be stored with the other source files.
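For reference, ASDF already records a bare version string per system, which such a scheme could extend. A sketch of what exists today (system name and version strings below are made up):

    ;; my-system.asd
    (asdf:defsystem "my-system"
      :version "1.2.0"
      :depends-on ((:version "alexandria" "1.4"))  ; minimum-version constraint
      :components ((:file "package")
                   (:file "main")))

    ;; a running image can report what it has loaded:
    (asdf:component-version (asdf:find-system "my-system"))  ; => "1.2.0"

What it does not do is track patches or map loaded code back to repository revisions, which is the part that would need building.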
> That has been done in the past
Could you share some examples of how the old systems did this?
Can't say what Interlisp did, but here is how the MIT Lisp Machine, and specifically its commercial variant Genera, handled it.
Files are versioned. The Lisp machine file system has versions (same for DEC VMS and a few other file systems). Nowadays the file version is attached to the name&type. Thus editing a file and saving it creates a new file with the same name&type and an updated version number. I can edit/load/compile/open/... a file by specifying its version number. If there is no version number specified, the newest file version is used.
A site shared a namespace server with a system registry and several file servers.
Code, documentation, etc. is defined as a system, similar to what ASDF does. A system has major and minor version numbers and other attributes (like being experimental or being released). Each time one does a full compile on a system, its major version gets updated and it tracks which file versions it used for that compilation. Whenever one creates a patch (a change of the sources) to a system, the minor version gets updated. After 12 full compiles and 30 patches to the latest version we have system FOOBAR 12.30. This data is recorded in the system directory, thus outside of an image (called a World in Genera) in the file system.
Now I can use that system FOOBAR in another Lisp and "Load System FOOBAR :version Newest" -> This will load system FOOBAR 12.30 into its runtime. Then new patches may appear in the system. I can then in a Lisp say "Load Patches FOOBAR" and it will look into the system directory for new patches to the current loaded major version and load those patches one by one. This changes the running Lisp. It then knows that it has loaded, say, FOOBAR 12.45, what files it has loaded and in which files the respective code (functions, ...) are located.
If I want to ship a system to the customer or someone else, I might tag it as released and create a distribution (with sources, documentation or not). The distribution is a file (also possibly a tape, a bunch of floppy disks, a CDROM, or similar). A distribution can contain one or more systems and its files (in several versions). On another Lisp I can later restore the distribution or parts of it. The restored systems then can be loaded for a specific version. For example I can load a released system version and then load patches for it.
A saved image knows which versions of what systems it includes. It also knows where each function's source is located. It may also have corresponding documentation versions loaded.
Later a source versioning system (called VC) was added, but I haven't used that.
This is basically how Smalltalks work. In addition to the application, an image will generally include a development environment and all the sources to your application. I think Lisp machine Lisps were a lot closer to this too.
If you dig into the old Xerox docs, some of them refer to Lisp as an inspiration.
Also, Mesa / Cedar had a similar approach, but based on dynamic libraries, which Niklaus Wirth took great advantage of in how Oberon and derived systems worked.
Incidentally, due to Rob Pike's admiration for Oberon, not only did Inferno get the ACME editor from Plan 9 (itself already influenced by how Oberon worked), but the whole Limbo experience on Inferno is quite similar as well.
Unfortunately we still lack this kind of full stack OS/REPL experience on modern systems.
I worked on a system implemented in Smalltalk for a few years, and the truly integrated development process possible in a ST is a high I've been chasing ever since. And this was in a setting that had so many things going against it: I started working on it in 2016, and the last update of the runtime environment was from '99. We had a custom HTTP client, and home-brew bindings to OpenSSL that I cooked up with some mad science Python that parsed C headers to generate Smalltalk code. I even had to fix a bug where Winsock fds rolled over if you opened more than 8k sockets, because the runtime FFI bindings assumed a 16 bit wide return value (I assume the code dates back to the Win 3.11 days).
By all rights, it should have been _awful_. But in reality we were able to iterate quickly and actually make stuff, because the technical foundations of the system were rock solid, the team was good, and because the truly integrated Smalltalk development environment enables a super-tight development cycle. A joke I have with a former colleague from those days is how all modern languages are still fundamentally no different from the punch card days: You take your sources off to the batch processing center, and the results come out at the other end, while Smalltalk is future technology from the 80s.
Yeah we get glimpses of it across JVM, .NET, PowerShell, Fish and so on, but hardly as it used to be.
We do have Pharo, Allegro and such, but their spotlight opportunity is gone.
Agree, but I think in this case it would not have helped. It was not that one version was lost, but all of them. Had they had VC, the repo would probably have been deleted as well.
This is something I really like about Common Lisp, but after ~30 years of progress, I am a bit disappointed with improvements in the field of interactive development.
The process you describe is possible in JavaScript and many other languages as well. Updating a JAR is trivial, but doing it in a running system is uncomfortable.
The first thing to start with is versioning at the function, class, package, or module level. Combine that with versioning of data, and you've got my interest :)
Oh man — this has come the closest to getting me to want to really learn lisp — especially because the author is an avid fan of vim
That is a very interesting journey — mine was exactly the opposite: after many years with Common Lisp, I moved to Clojure and wouldn't even think of going back. I find it intriguing that the author would want to move in the other direction, especially as concurrency was mentioned (one of the main reasons why I initially looked at Clojure).
I wonder what it was about babashka that didn't work for the author. When I need a quick script with batteries included, I use babashka and it works great.
I had already written large, nontrivial apps (linked in article) which required more libraries than babashka was built with, including ones I had written myself as well as others. I therefore needed to run native-image on my own codebase, as it was not runnable from within babashka (at the time? I don't know if it is now).
Running native-image on an already established, not-written-for-it codebase is a nightmare. I just tried again some months ago on the linked code bases. native-image wouldn't budge. Kept getting hung up on I don't even know what, the errors were way too opaque or the app just misbehaved in weird ways.
Ok, that explains why babashka wasn't suitable. I still wonder, though, about the requirement to have an executable.
I still remember many years of reading comp.lang.lisp, where the #1 complaint of newcomers was that "common lisp cannot produce a native executable". I remember being somewhat amused by this, because apparently nobody expected the same thing from other languages, like, say, Python. But apparently things have changed over the years and now CL implementations can produce bundled executables while Clojure can't — how the tables have turned :-)
I think various Lisp implementations have their own way to do it, e.g. save-lisp-and-die on SBCL.
But if what you mean by "executable" is "a small, compact executable like the ones built by C/C++/Pascal, without an extra Lisp runtime attached", perhaps you'd better look at something else, well, like C.
You just need to use a lisp implementation that has a tree-shaker if you really care about binary size for some reason
Gerbil scheme gives you the tiny bins and fully static binaries. No need to cargo cult portions of quicklisp with the binaries.
Chicken Scheme makes very nice, tidy binary executables. "Hello World" is just 27K on Ubuntu.
There is already confusion. Things are different and the same words (executable, native, image, ...) mean slightly different things.
In the CL world it is usually relatively easy to create an executable. For example in SBCL I would just run SBCL and then save an image and say that it should be an executable. That's basically it. The resulting executable
* is already native compiled, since SBCL always compiles native -> all code is AOT compiled to machine code
* includes all code and data, thus there is nothing special to do, to change code for it -> the code runs without changes
* includes all the SBCL tools (compiler, repl, disassembler, code loader, ...) thus it can be used to develop with it -> the code can be still "dynamic" for further changes
* it starts fast
Thus I don't need a special VM or special tool to create an executable and/or AOT-compiled code. It's built into SBCL.
The first drawback: the resulting executable is as large as the original SBCL was plus any additional code.
But for many use cases that's what we want: a fast starting Lisp, which includes everything precompiled.
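The whole "save an image as an executable" step really is a one-liner. A sketch (main here is whatever entry function you have; omit :toplevel to keep the REPL as the entry point):

    ;; from a running SBCL with your code already loaded:
    (sb-ext:save-lisp-and-die "myapp"
                              :executable t           ; prepend the SBCL runtime
                              :toplevel #'main        ; start here instead of the REPL
                              :save-runtime-options t)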
Now it gets messy:
In the real world (TM) things might be more complicated:
* we want the executable to be smaller
* we want to get rid of debug information
* we need to include libraries written in other languages
* we want faster / more efficient execution at runtime
* we need to deliver the Lisp code&data as a shared library
* we need an executable with tuned garbage collector or without GC
* the delivery structure can be more complex (-> macOS application bundles for multiple architectures)
* we want to deliver for platforms which provide restrictions (-> iOS/Apple for example doesn't let us include a native code compiler in the executable, if we want to ship it via the Appstore)
* we want the code&data be delivered for an embedded application
That's in the CL world usually called delivery -> creating a delivered application that can be shipped to the customer (whoever that is).
This was (and is) typically what commercial CL implementations (nowadays Allegro CL and LispWorks) have extensive tooling for. A delivered LispWorks application may start at around 7MB size, depending on the platform. But there are also special capabilities of ECL (Embeddable Common Lisp). Additionally there were (and still are) specialized CL implementations, embedded in applications or which are used as a special purpose compiler. For example some of the PTC Creo CAD systems use their own CL implementation (based on an ancestor implementation of ECL), run several million lines of Lisp code and expose it to the user as an extension language.
> I moved to Clojure and wouldn't even think of going back.
It's always amusing to watch programmers arguing for superiority of their favorite language(s) over others, often bashing language features without clearly understanding their purpose.
And it is especially amusing to watch Lispers trying to argue with each other. "I chose Scheme", "I picked Clojure", "I moved to CL"... etc.
Bruh, I move from one Lisp to another based on my current needs. It's harder for me to choose new shoes than to switch between Lisp dialects. Once you learn any Lisp to a sufficient level, the mental overhead between Lisps becomes almost negligible - it feels like you're practically operating the same language.
Sure, each Lisp is unique and they all have different flavors, but seriously, shouldn't we celebrate the diversity and be happy that we have at least one Lisp for every platform?
@jwr This isn't meant as criticism of your comment. I'm not arguing at all with what you wrote; Clojure is my favorite Lisp flavor as well. I'm just taking a sentence out of context and using it as a cue for my rant - don't be mad, we're cool.
Looks like vim-slime is essential to how you work with CL + vim. I've only used vim for not even 2 years, and came across vim-slime 6 months ago when working in ruby and wanting to quickly 'send' code from editor (neovim) to rails console. 2 months ago I launched a startup and for hours every day had to swat/fix repercussions of bugs that weren't apparent pre-launch (as well as doing via the console things users needed but weren't able to do because the feature to do it hadn't been built yet). It was daunting. I don't know how I'd have managed without vim + vim-slime. Probably a lot of copy/pasting from vscode. Vim + vim-slime was at least a 2x productivity improvement, and >2x increase in developer happiness.
Another huge benefit of vim and vim-slime is it is immediately valuable when you use/learn any new language. So long as the language has a REPL/console/interpreter that can be opened from the terminal or terminal emulator in any form (e.g. CL, ruby, python, bash etc etc etc) then vim + vim-slime will be a brilliant ~IDE. (Possibly the only thing I haven't been able to do but wanted to is 'send' code from neovim to the javascript console in chrome, which would be pretty awesome!)
A side note: I found doom-emacs very similar to vim, only needed ~10 or so new keyboard shortcuts to be productive in emacs. (I still much prefer vim, but I'm not so down on emacs).
> I don't know how I'd have managed without vim + vim-slime.
This is interesting -- I've worked with people who swear by common lisp, emacs, and SLIME.
I'm happiest with ruby and vim, but I have not tried vim-slime (nor even heard of it before, so thank you!).
But FWIW, my strategy for running larger bits of ad hoc code on the ruby/rails console is to:
1. Add the code to a persistent local file (e.g. "ops_console_tools.rb")
2. scp the file up to the target machine where I am running the irb/pry console
3. In the console, run `load '/PATH/TO/ops_console_tools.rb'`
4. Run the new code: `Ops::User::CustomReport.run(90.days.ago..)`
To keep things a bit more sane, all of the ad hoc ruby code is in modules, e.g. `Ops::User`. And it helps to include some code to clear constant definitions, which would otherwise complain to STDERR if you update and reload the file multiple times.
None of this is as awesome as SLIME of course, but it's pretty tolerable with a bit of readline-style up-arrow and command-history conveniences.
Disclaimer: Of course, running ad hoc code in prod is frowned upon. But we're extolling the virtues of CL in this thread, so I'll confess to breaking best practices in environments where it's permissible! Also this process gives you syntax highlighting while editing without requiring config on target host, and you can include the file in version control for greater formality.
>Looks like vim-slime is essential to how you work with CL
slime has some issues for me (obviously I'm not OP) and I am not convinced lisp and vim are a good pair. lem is getting pretty good and improving by the day; I find it much better to work with than vim when it comes to lisp, and vim is my primary editor.
I have been using Clojure and before that Racket using only vim and vim-surround for almost a decade now.
I am sure I have left some productivity on the table by not investing in workflows like cider etc., but I have gotten a decent workflow using just vanilla tmux, a repl pane and vim-surround
The % matcher in vim does so much heavy lifting, I've never felt limited by a lack of slurp and barf
I actually wrote my own tiny plugin to send snippets to the repl using nc, and I'm still happy enough tearing the clojure repl up and down and copying stuff in by hand, because dealing with repl state can be a pain. Even though I have at times had repls open for months, there is a freedom in just tearing it all down and up again.
Clojure itself has plenty of functions to load in files or snippets to help as well
How do you learn vim-slime? I have used vim before, so I have basic skills there, but I get lost and run out of time when I try to figure out how the slime model works and how to create a lisp project.
Is there a tutorial you followed or a video you found useful? What was your starting fund of knowledge?
Great question - it's shockingly easy to learn. Just three steps: 1. Install it (via vim-plug, lazy.nvim, or whatever vim plugin manager you're using). 2. Configure it. Depending on your terminal the instructions are a little different, but it should only take a few moments due to the brilliant instructions found here: https://github.com/jpalardy/vim-slime/tree/main?tab=readme-o... I use kitty so I add two lines to kitty.conf and it's all ready to go. But it will depend on your terminal/terminal emulator. The instructions in the readme should have you covered.
Then 3. use it. This is shockingly easy, open two panes in your terminal with neovim on one side and REPL/interpreter on the other. For example I have neovim with my ruby file on the left pane and a rails console on the right (but on the right could be SBCL, python interpreter, or any other interpreter). In neovim, move the cursor to the line you want to run and press ctrl + c twice in quick succession. It will 'send' that line to the interpreter on the right pane and run that line!
Note: The first time you do this you may be asked which pane vim-slime should 'send' the code to, with the numbers displayed over the panes. For example in kitty I'm usually sending to pane 2, so I press: 2 enter enter. If it was pane 5, I'd press 5 enter enter etc.
If the line of code is immediately followed by another line (or lines), it will run that/those as well (for example, a multi-line Active Record query). It will do the same if there are one or more lines immediately above the current line. This takes a tiny bit of getting used to, as you may unintentionally run lines immediately above/below the line for a short while.
That's all there is to it!
A few tips
- As explained above, ctrl + c ctrl + c will run the line under the cursor. But you can also select and run any code you want by selecting it with vim's visual mode and ctrl + c ctrl + c to run that selected code. For example, if you want to run part of a line, select it in visual mode and ctrl + c ctrl + c and it will run! Same for say a few hundred lines of code: select it all in visual mode (e.g. v ctrl + f ctrl + f then j or k to get to the exact line), then ctrl + c ctrl + c will run everything you selected.
- Rails specific: The rails console has a pager set to 'on' by default (this would necessitate back and forth between panes in order to press 'q' to quit out of the pager). So I turn it off by adding one line (IRB.conf[:USE_PAGER] = false) to ~/.irbrc, or to an .irbrc in the project directory.
Let me know if you have any questions/troubles.
Yep, I'm using the `slimv` plugin for vim and the `swank` server in a running `sbcl` instance in a second terminal tab. Since I'm on macOS, I could build a keyboard shortcut in vim that automates opening the 2nd terminal tab with the “Lisp machine + swank” when I say “connect” in vim. slimv/swank practically make vim an IDE for Lisp.
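For anyone replicating that setup, the swank side is just a couple of forms in the second terminal. A sketch, assuming Quicklisp is installed (4005 is the default swank port, which slimv also expects by default, as far as I know):

    ;; in the sbcl running in the other terminal tab:
    (ql:quickload :swank)
    (swank:create-server :port 4005 :dont-close t)
    ;; now connect from vim/slimv and evaluate forms against this image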
More than a decade ago, I didn't understand the actual value of Lisp, but I remember this song well.
I always preferred this one: https://youtu.be/5-OjTPj7K54?si=TYuCLJ9_2WvRHcHM
Every programming language should have a music video!
Heh, every function in every Lisp needs its own song (:
Here's one for clojure.core/lazy-cat
https://suno.com/song/d012fa48-d5c1-46f4-8561-9a031cfb8925
Makes me sad that Clojure has mapcat and lazy-cat functions, but they've never made any effort to create mapdog and lazy-dog variants. I firmly believe that is the unique and only factor that has prevented Clojure from becoming a mainstream language at the top of the RedMonk chart.
As a counterpoint to the author's use of vim-slime (not to say I don't believe the author's comment of "I'm Okay, I Promise," but rather to communicate to others who are facing a similar choice):
I am a lifelong vim user (since elementary school in the early '90s), and I developed common lisp using vim for over a decade. I still use vim for nearly everything, but emacs as my Lisp IDE. Before evil-mode, I used the mouse and menus for 90% of what I did, and it was still an improvement over the best vim had to offer at the time (vim-slime existed back then, but would crash or hang regularly).
The author's vim setup is fairly good, but Emacs/slime is still better. They stopped using emacs because of RSI, but their vim setup defaults to "v a ( C-c C-c" to accomplish something that is "C-c C-c" in emacs/slime. They have altered it to be "v a ( <space> g", which raises the question of "why not remap keys in emacs?"
I am looking at CL myself, though my needs are more hobby than anything, and I want to convince myself I can find it useful for my own work in certain things (e.g. I want to maybe use Bike to run C# code in an SBCL REPL.)
The feedback loop one gets with it is insanely fast, even faster than Python (certainly there are exceptions even with Python ...) That's a blessing and a curse for me - the tighter the feedback loop, the harder it is for me to get out of a problem I am stuck on. :) But, so far, I feel like I can write a thing and not worry about running it ... type a thing, and quickly get it running in a live REPL. If a mistake happens, I can fix the issue right there, instead of just getting a long stack trace. For what it is worth though, I have been doing this in Emacs for a long time (well, for small functions), but didn't think much of it until now.
I have been working with Clojure for 5+ years now. For CLI applications babashka has worked quite well for us.
Would love to know more about the problems you faced.
In my experience whenever I faced such issues - it has been because I am not using it well.
For CLOS kind of things I have found that the https://github.com/camsaul/methodical library works quite well, and the performance is better than the default multimethods in the core clojure implementation.
> spent long, hard hours banging my head against native-image and it just wasn't working out.
it would be nice to know what exactly isn't working out and what the problems with native-image were.
Coz I think clojure is as close to perfect, imho, as a language can get without selling out.
Graalvm native image for Clojure is a solved problem. Just add this library to the project and add a flag to native image command line.
https://github.com/clj-easy/graal-build-time
This initializes Clojure classes at build time and it mostly works for pure Clojure code.
Doing complicated things (e.g. depending on a native library, etc.) requires some tweaking. For example, a few packages may need to be declared as initialized at build time or run time, depending on what they are doing. And any unresolved classes need to be added to reflection-config.json.
All these are easily discoverable if one talks to people in the Clojurian slack channels. Clojure is a small community, so it helps to be part of it, because there are not a lot of open materials on the Web.
> solved problem
except...
> mostly works
> requires some tweaking
> discoverable if...
I know nothing about Clojure but from your caveats I think I can see why he spent hours banging his head against a wall.
when engineers say it's a solved problem, they mean it in the same way as a mathematician saying a theorem is trivially proved.
> theorem is trivially proved
Reminded me of Prof. Knuth trolling in "The Art Of Computer Programming" with an exercise for the reader to prove Fermat's Last Theorem. (:
Look at the hoops OP had to jump through to get SBCL working on Windows. I think Graal would compare favourably with that.
Probably, but that doesn't mean Graal is good; it just means they're both bad.
Compare it to something like Go or Rust where there are no hoops and they're both well supported on Windows and Mac. I haven't actually used it but I believe Zig has very good cross platform support too.
if I recall correctly Rust support for windows still has issues, namely there are a number of Windows specific APIs that are either not well supported or aren’t supported at all.
I could be mistaken or recalling outdated information of course, but that is what I remember from the last time I looked into it
What did you mean? Can't the user just download Portacle and use it?
Problem solved.
Right at the top of the article, the author outlines the requirement was that it must be usable within vim.
Not an issue for Common Lisp, you can use whatever you like, but interacting with a REPL gives you superpowers.
Not really. OP needs to build executables. It is documented here: https://blog.djhaskin.com/blog/release-common-lisp-on-your-f...
I actually just dusted off my old Clojure stuff to see if it was a "solved problem", and it isn't.
I grant that it might be described thus if I had started out with that stack, but trying to retrofit an older code base with it is, I have found, next to impossible. You have to code around the myriad gotchas as you go or you're never going to identify all those landmines going back over it after the fact. The errors and bad behaviors are too difficult to identify, even for the `native-image` tooling.
No. I have looked at your code. You did not use the mentioned https://github.com/clj-easy/graal-build-time
If you don't do what everybody is doing to solve a problem, then of course it is not a "solved problem" for you.
No, you don't need to code specifically for native-image. What are the landmines that you need to code around? Since you have not successfully compiled native-image by following common practices, you obviously don't know.
Related, Janet: https://janet-lang.org/
I especially like its github readme and the FAQ there, provides a good amount of context about the project: https://github.com/janet-lang/janet
It's great, but I found the library ecosystem lacking for my particular use cases. The joy web framework, in particular, seems to have stalled in time.
It's mentioned in the article:
> If I had heard about Janet when starting this hunt, I might have stopped there and not gone on to CL. Nice syntax, small, fast executables, C FFI, a fun intro book. It checks all my boxes.
Wondering whether a dialect like Jank [1] may be worth a shot?
Its author is quitting his job to work on it full time: https://jank-lang.org/blog/2025-01-10-i-quit-my-job/
I don't understand the "Requirements Met" section, that reasoning applies to almost any programming language. You chose Common Lisp because there's a JSON library?
A lot of these intermediary languages are not trivial to parse safely and are a vector for exploits. It's not something you can really do on your own unless you're just supporting a specific subset for your application. Even then, you really need to know what you're doing.
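For what it's worth, that's exactly why "there's a JSON library" is a real requirement: with a maintained parser the whole thing collapses to a call or two instead of a hand-rolled state machine. A sketch using yason, one of several CL JSON libraries (the data here is made up):

    (ql:quickload :yason)

    ;; parse: by default yason returns a hash table with string keys
    (defparameter *obj*
      (yason:parse "{\"name\": \"Alice\", \"admin\": false}"))

    (gethash "name" *obj*)                   ; => "Alice"

    ;; encode the hash table back out as JSON text
    (yason:encode *obj* *standard-output*)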
There's a section "hunt for new Lisp". It isn't explicitly stated in the requirements maybe because it can be inferred from there that being a Lisp is also one.
Yeah I thought he would go with Rust or Go after seeing those requirements.
Clearly there was another implicit requirement - maybe it had to be a niche language?
Probably had to be a Lisp, considering the OP was coming from Clojure. Rust and Go fail that (unwritten) requirement.
The author says they went about learning CL the wrong way. I wonder if there is a standard "community approved" way of learning the language?
Gentle Introduction to Symbolic Computation is a great book; I learned a lot. The 1990 version, available for free online, has aged well, but you need to look elsewhere for getting set up with emacs and slime or whatever environment you want.
Not standard, but hopefully worth mentioning: the thing that's clicked best for me is the docs on https://ciel-lang.org/ ("batteries included" Common Lisp image). The examples for how to use its curated libraries match how I try to integrate a new language into my toolbox.
It hit the front page a while ago too: https://news.ycombinator.com/item?id=41401415
I began learning Common Lisp (CL) from the Common Lisp HyperSpec (CLHS): <https://www.lispworks.com/documentation/HyperSpec/Front/Cont...>. When I began learning CL about two decades ago, I did not know of any other easily available source, so CLHS was my only source back then and I think it has served me well.
A popular recommendation these days is Practical Common Lisp (by Peter Seibel): <https://gigamonkeys.com/book/>.
> I wrote this blog post because I noticed that there have been more newcomers on the Common Lisp Discord
Even CL people are using Discord now? People really do seem to love to converge to a single place.
My go-to community for Common Lisp has always been and will likely always be the #commonlisp channel on Libera IRC. The community formerly existed on the #lisp channel (if I remember correctly) of Freenode for several decades. It migrated to Libera after a controversial change in Freenode management in May 2021. Webchat link for #commonlisp: https://web.libera.chat/#commonlisp
Further, the on-topic #commonlisp channel on Libera comes with a cozy off-topic channel named #lispcafe for general chit-chat about any imaginable topic. Webchat link for #lispcafe: https://web.libera.chat/#lispcafe
Unfortunately, all the interesting stuff disappears when the server closes. I know it can be remedied, but unfortunately it's not standard. Especially for CL this is crappy, as a lot of things will still be valid and working 10 years from now, but the Discord server will be long gone.
> Even CL people are using Discord now? People really do seem to love to converge to a single place.
Except Discord servers cannot be described as "a single place" even, as they're all isolated from each other.
Instead of spread out across multiple open IRC networks/channels, developer communities converged into silo'd, closed/proprietary Discord servers. It's a shame.
#lisp and its offtopic channel #lispcafe at irc://libera.chat are far better.
Also, comp.list at Usenet.
Did you mean comp.lang.lisp ?
Because that's the usual convention, like comp.lang.c, .python, etc.
Cos shipping things is not a concern
Common Lisp is a great choice for many purposes. (And, if you're doing a startup, a fringe language like CL is a good way to find and attract some of the best hackers, and avoid all the Leetcode grunts.)
Just comments on the Scheme/Racket parts...
> I looked at Scheme, but that community seemed to still be fractured over the r6rs/r7rs mess.
Things were fractured before R6RS, with very little portable ecosystem code, and then R6RS didn't solve that, but AFAIK, people got back up on that horse, started over, and have been embracing R7RS.
> Also, there wasn't a big enough ecosystem to suit my liking.
There's a reasonably-sized ecosystem, but three things:
1. Unless things have changed recently, much of the good stuff is still in the package system for a particular implementation (like Racket, "https://pkgs.racket-lang.org/", or Chicken, "http://eggs.call-cc.org/5/").
2. It's nothing the size of Python or JavaScript.
3. Don't believe any claims of "batteries included"; you often have to roll your own basic packages for real-world work. (But this can actually be a blessing, even for startups that have to move fast, depending on what you have to do, and how capable you're willing to rise to be.)
> I'd already tried Racket in school and didn't like it. The runtime was a bit slow and bloated for my tastes.
Although CS professors are some of the greatest friends of Scheme (having designed and made much of it), CS professors can also be the worst enemies of real-world use of Scheme.
Before Racket, most people who had heard of Scheme knew it only from school, in problem set homework for the dense SICP course, or from whatever pet intro CS textbook their professor wrote. That's a good way to never want to try Scheme again. (And it got worse as many CS departments became optimized Leetcode->FAANG hiring funnels.) Then people never saw Scheme used, nor even described by, real-world programmers, for real-world things.
So Racket (PLT Scheme) comes along, and half of the handful of people thinking of Scheme as a language for real things gravitate to Racket, because they are doing some real-world things. And overall it's one of the best programming languages out there.
But still, although the Racket CS professors include some great programmers, Racket is determined by CS professors, who do PL and CSE research, and write and teach textbooks. So, Racket's use in intro CS classes seems to perpetuate the tradition of CS professors ensuring that students will never again want to touch Scheme after they get a passing grade for the class.
Racket's Chez Scheme backend, which has been the default for several years, is faster than python and ruby, and raco distribute gets you a smallish package with just your program, portable to the major OSes. Smallish = 40mb, which is probably comparable to other dynamic langs that don't come preinstalled.
I find the "I tried it years ago and it wasn't as nice then, so I'm not gonna look at it again" attitude quite off-putting. It's the same reason Java still gets so much hate more than a decade after Java 8.
I agree that CL and Clojure probably have more real-world shipped products. However, if you are considering a newer and smaller ecosystem like Janet... you owe it to yourself to look at Racket with fresh eyes.
With the great tooling Common Lisp commercial systems inherit from Lisp Machine and Interlisp-D days, it is kind of sad seeing vim being the option.
Also, unless we're talking about Allegro or LispWorks with their own additions, the Java ecosystem tends to have more choice of mature libraries; I don't think ASDF is as spoiled for choice as Maven Central.
But to each their own.
There's Emacs and Slime too OFC. Also, Quicklisp/Ultralisp aren't ASDF, you are pretty outdated...
As someone that knows Emacs since XEmacs glory days, I am surely not outdated regarding Emacs, Slime, and what they still miss from commercial Common Lisp experience.
As for Quicklisp, I may not be up to the latest and greatest, yet I doubt they are at the same level as Maven Central.
I went on a similar journey a couple of years ago and ended up on Gerbil Scheme instead.
If you're looking to write CLI utilities in Clojure, babashka really is awesome. It doesn't meet the author's standalone binary requirement, but it's got great startup time and comes batteries included with all sorts of helpful tools.
There's also nbb for Node. The REPL is instantaneous - it's almost annoying; sometimes instead of fixing some lopsided state in the REPL, I'd gravitate towards restarting it. And of course there are tons of libraries; well, some of them may not be of the highest grade, but for every mediocre choice there's usually multiple excellent ones. Also it takes some time to get used to everything being async; nevertheless, nbb is a great option for scripts involving e.g. web-scraping or testing with Playwright.
To the extent that Julia is a Lisp (which requires some squinting and handwaving), I wonder how it stacks up against these requirements. With my limited knowledge:
1. Standalone Executables: The biggest current obstacle right away! But I believe this (as in compilation to standalone, small executables) is coming with the next version (Julia 1.12) in an early form, so maybe stabilized and reliable within this year? There does seem to be a lot of momentum in this direction.
2. Vim Workflow: vim-slime works well to my knowledge, and the overall support (eg. treesitter, LSP, etc.) is pretty good, even if VS Code is the "main" supported editor.
3. Windows/Mac/Linux Support: mostly Tier 1 support [https://julialang.org/downloads/#supported_platforms]
4. Larger Imperative Ecosystem: FFI with both C and Python are pretty standard and commonly used.
5. Runtime Speed: Crazy fast as well
6. Multithreading: Base language support is already pretty good, and there's OhMyThreads.jl [1] and data chunking libraries and many other supporting libraries around multithreading.
7. Strong Community: I'd expect Julia and CL communities to be on the same order of magnitude? Complete assumption though, in both directions. Web presence is mostly on the Discourse [2] and Slack, and the JuliaCons are pretty well attended.
8. Ecosystem: Since package management is mentioned, I'll shout out the built-in Pkg package manager, the seamless virtual environment support, and the generally quite good versioning in the ecosystem, all of which add up to a really good experience. As for particular libraries, JSON is the only one I know the answer to: JSON3.jl is a solid, standard choice. I don't know if SQLite.jl [3] would be the recommended option for SQLite or something else, HTTP.jl does the job for HTTP requests but I believe isn't particularly fast or sophisticated, and I could believe there's a subcommunity within Julia that uses "functional data structures" but I wouldn't even know where to look. But, for the ex-Clojurian, may I present Transducers.jl [4] as worth a look?
[1] https://juliafolds2.github.io/OhMyThreads.jl/stable/ [2] https://discourse.julialang.org/ [3] https://github.com/JuliaDatabases/SQLite.jl [4] https://github.com/JuliaFolds2/Transducers.jl
> 6. Multithreading: Base language support is already pretty good, and there's OhMyThreads.jl [1] and data chunking libraries and many other supporting libraries around multithreading.
Agree on the rest, but multithreading in Julia is a let down.
The manual (https://docs.julialang.org/en/v1/manual/parallel-computing/) claims it's composable, but that's only true if you stay within Julia with the paradigm of Tasks. As soon as you interface with C/C++, you get a SegFault, as the Julia runtime expects to adopt [1] all threads calling into it. This is not always viable.
Julia should offer the option of C-style pthreads (or the Windows equivalent) and let others build abstractions on top of them.
[1] And that option was only added recently.
I haven't had to work on the intersection between FFI and multithreading, so I can't comment on that. I was more commenting on the ease-of-use of normal multithreading within a Julia program, and the availability of primitives around it. I've found it much easier to take an existing working program and parallelize it in Julia, than in most other languages I've worked with. (Granted, a lot of this is simply that Julia is a newer language and hence gets to design these into the language from the get-go instead of adding the parts piecemeal over years.)
> I've found it much easier to take an existing working program and parallelize it in Julia, than in most other languages I've worked with.
Unfortunately, this isn't possible when interfacing with proprietary, closed-source binaries. Especially when the C FFI is defined by standard, makes no mention of threading, and each implementation has different quirks.
Multithreading seems to work just fine with OpenBLAS. It is also sometimes possible to wrap the underlying state machines from C/C++ code and then make them multithreaded on the Julia side.
None of that contradicts what I wrote. Note that it's calling into Julia from C/C++ that presents problems, the opposite is fine.
Last I checked, Julia actually compiled rather slowly, making development a lot less fluent than a Lisp.
Beyond that, wrapping C code from Julia is neither nicer nor worse than from CL. Wrapping C code is basically done everywhere except for a few outlier languages (Go comes to mind. It IS possible, but it means using cgo, which is its own world).
I liked Julia well enough, but the compile times were slow enough to be painful for me. All the best though to Julia :)
> Wrapping C code is basically done everywhere except for a few outlier languages
Agreed, I only included this because the author mentioned it explicitly as a requirement. From a "allows plugging into some other, large-community imperative language, like Clojure does with Java" perspective - in terms of library access - the combination of having well-polished interfaces to both C and Python is pretty powerful though.
> the compile times were slow enough to be painful for me
I think this, and the developer experience in general (eg. linting, IDE support, etc.), has been the biggest reason the excitement for Julia dampened over time, despite it being a wonderful language in theory. It's been getting better, from "painful" to just "somewhat annoying" for me, but not quickly enough to turn it around (IMHO).
Mfiano wrote about this. https://mfiano.net/posts/2022-09-04-from-common-lisp-to-juli... . (By the last report, mfiano came back to CL.)
Refutation: https://gist.github.com/digikar99/24decb414ddfa15a220b27f674...
food for thought and feel free to chime-in: https://gist.github.com/vindarel/15f4021baad4d22d334cb5ce2be... Common Lisp VS Julia
The "feedback and answer" below the gist already covers a bunch of things I wanted to mention. So I'll skip those and only talk about the rest:
> You can't make it a CLI script, because it compiles the whole code WITH dependencies every time
The "with dependencies" part is mostly untrue nowadays, with constantly better pre-compiled caches for dependencies with every release. The overall issue of compile times getting in the way of interactive development still has some truth to it, but much less than the comment implies.
> https://viralinstruction.com/posts/badjulia/
1. The subheadings in the ToC are mostly based on comparisons with the best-in-class: for eg. "Julia can't easily integrate into other languages [compared to C]", "Weak static analysis [compared to Rust]".
2. Seeing this actually gave me hope about Julia's progress, based on how many of these issues have been largely mitigated or have been actively worked on in the last three years since the post.
3. As a side note, the author of the post is still an active user of and contributor to Julia, so I think this kinda falls under the "There are only two kinds of languages: the ones people complain about and the ones nobody uses" banner. As in, the complaints are there because they like and actively use the language enough to want it to be the best in every way.
> Even though Julia 1.6 was a lot faster than 1.5, it still took too long.
I agree - I think pre-1.9 Julia experience sucked, and overselling the language in that period hurt its reception a lot. (I've mentioned elsewhere in the thread that the developer experience is still one of the weaker points of Julia.)
> (in CL a hello world weighs ±20MB):
In Julia 1.12, with the experimental --trim feature, a hello world is <1MB. Still too early to tell how that'll translate to real programs though.
> false ? 1 : 2 : 3 : 4
This is hilarious and awful at the same time. There's no beating CL in this - I've learnt that every language with syntax unfortunately develops "wat"s like this over time when well-intended syntax features interact.
> A few months ago, I tried to write a program that should receive UDP packets over IPv6 multicast.
> It didn't work. I never figured it out. This works in Java and Python.
> This might be unfair or untrue, but I get the feeling that it doesn't work because no one has seriously tried to use the language this way.
I don't think it's either of those: it seems like networking was and remains a weak area in Julia. For example, though the language itself is blazingly fast, there have been a bunch of reports about how HTTP traffic performance is several times slower than ostensibly slower languages like Ruby. The reason is probably what the quote says too; there just isn't as much of a userbase or demand for this side of things.
> my packages seem to really like breaking when I try to load them about once a month
There's no source for this one, and no info on what "breaking" means or what the packages do, so I can only say this isn't a common experience. It's very easy to "pin" dependencies to preserve a reproducible state of dependencies and replicate it as well, which is greatly useful in a language used for science.
> I migrated from Lisp to Julia for the ecosystem. It hasn't been worth it from my point of view. I'll migrate back to Lisp eventually. [on a post] about lisp-stats
I'm not very surprised, given the lisp-stats context - it seems to be a common assumption/misconception, because Julia gets compared to Python and R often, that it's a data science and stats focused language. It's great for greenfield stats work, pleasant in many ways compared to those two, but the ecosystem is not particularly focused specifically on it. I'd suggest choosing Julia for the ecosystem if you're doing science science - quantum computing, diffeq modeling, numerical optimization, many others - but on the data science side, what Julia offers is consistent syntax, performance without C reliance, while retaining niceties like DataFrames and Tidier that other languages offer.
Ah, it's this time of the year when we get to fantasize about cool platforms and languages before succumbing back to python, typescript or feeding the relentless AI monster in a major cloud provider.
Typescript is great! Except the JavaScript below it which is terrible, but the static analysis and constraints I can express in typescript are amazing.
It’s actually surprising that there’s no typing discussion in this thread.
Because we are not allowed to program in another language that 'most' programmers can't understand immediately.
We are cursed to use the lowest common denominator of choices of programming.
Common Lisp is great until it isn’t.
Does anyone use Clojure CLR? How is the startup time for that?
[dead]
tl;dr: OP was using a Lisp and they were looking for a different Lisp.
Probably the only reason why anyone would ever pick Common Lisp for a new project in 2025.
Why? Not everyone is resume grifting. It is fast, solid and has excellent developer workflows. Lots of stable (oh no, no updates for 10 years because it just works!) libraries. With CLOG it is a nice secret weapon, with people wondering how you managed to move that fast. At least in our experience, but we make products, so we don't have to explain what they are made of or why.
> no updates for 10 years because it just works!
Maybe I’m crazy, but that is what I like from Lisp
Lmao, my thoughts exactly.
Q: why I chose common lisp
A: I was looking for a lisp to begin with
This is almost like satire. But I found it rather funny. It presents a question and provides an answer which is contradictory at first but still makes sense.
> Q: why I chose common lisp
> A: I was looking for a lisp to begin with
> This is almost like a satire
How? It's no different than "Why I chose Arch Linux? I was looking for a Linux to begin with."
To even think that's satire is to completely miss the point.
In their defense I can understand going into this article expecting it to be "why I chose Common Lisp [over other programming languages]", rather than "why I chose Common Lisp [over other Lisp dialects]" -- I think most of the HN audience are not people who are often using a Lisp dialect, so their question going in would be "why not use the programming languages I'm more familiar with?"
Meanwhile, "why I chose Arch Linux" is more likely to be interpreted as "...over other types of Linux" because most HN people are already familiar with the decision of "which Linux distro will I use?" But if you gave that headline to someone who doesn't have much familiarity with Linux they'd probably expect the article to address why they chose it over Windows or Mac.
(I don't think "why I chose Common Lisp over other Lisp dialects" is an absurd premise for an article like the person you're replying to does, but I can at least kinda see where they're coming from)
Jank
He cannot use Emacs and then goes to ... Vim?!?! Nothing against Vim or Emacs, I love both, but they had their time, which is long gone. I have been using Linux and OSS technologies since '95 and would never have imagined advocating an MS product, but just use VS Code. It's awesome. VS Code managed to surpass, in a fraction of the time, the quality and number of extensions/plugins that took Emacs decades to build up.
VS Code support for Common Lisp is lacking. The Alive extension is relatively recent and a solo effort, and thus has significant bugs and is not as feature-packed as the Vim/Emacs alternatives. For example, it doesn't provide structural editing. Its interaction with the sbcl cache seemingly broke my project a few times.
Lots of people work with Vim and Emacs day to day, what makes them "long gone" in your opinion?
I haven't used emacs so I won't speak to that. But a GUI editor (be it Sublime, Notepad++, VSCode, JetBrains, whatever) does everything vim does and is far easier and more pleasant to use. I think that using vim instead of a GUI editor is kind of like using a hand saw instead of power tools - you can do it, but you're willingly giving up a better option for a worse one. Vim made sense in a day when computers were based around text terminals, but we don't live in that day any more and it doesn't make sense to use tools that are limited by that paradigm any more.
For serious work, a GUI editor (Sublime is my choice) beats the pants off vim. The only situation I use a terminal editor is when I'm editing config files on servers, and vim sucks at that too - nano is far superior for quick and dirty edits to files. I simply do not think there's a use case where vim makes sense any more.
Both Vim and Emacs have GUIs. Emacs can even render your PDFs and webpages, you can have svg icons displayed while browsing directories.
Not GP, but I've always found it weird how many people are obsessed with vi/vim and/or Emacs. I get some of the extensibility appeal of Emacs if you're a Lisp fan, but fundamentally I don't understand the appeal of "programming your brain" just to edit code. 90% of my code editing time is spent reading and thinking, not writing or modifying. Memorizing and minimizing (e.g. VimGolf) editor syntax seems like a massive waste of time and cognitive function to me. Modern IDEs have you up and running instantly, and their refactoring tools are really amazing.
I feel like there's been a boom in "editor hipsterism" in the last 10 - 15 years, while everyone has forgotten the variety of novel editors that were made in the 80s and 90s (I've forgotten them, too, I just remember seeing ads and reviews in magazines as a young programmer).
For context, I do have a basic understanding of vim because I run it on servers, but my knowledge doesn't go far beyond search and replace.
Emacs provides far more than just editing. Helps a lot with reading and VC (magit). Just magit would IMHO justify Emacs.
I like vim because the keybindings are familiar everywhere. For small server stuff I use vim, for most coding I use Doom Emacs (vim keybindings), and for Java I use Intellij with vim keybindings.
I mostly use Emacs because of org mode. It's way better than anything else trying to fill this hole. Otherwise I'd probably just use VSCode. But I don't want to add yet another editor to my regular use.
To each their own. With Vim, Unix is my IDE. I don't know about the recent interest in these editors that you mention. I've been using vi/Vim for the past 30 years. I take it to every project and job. My fingers already know what to do. I've watched colleagues who I started working with 20 years ago as they've retooled on the latest hotness every 4-5 years. Visual Studio, Netbeans, Eclipse, Jetbrains, VS Code, etc. It doesn't take long to learn to use a new IDE, but they are definitely shorter term investments.
I can do more or less the same thing most folks can with an IDE; I just use external tools. I wouldn't claim that Vim is somehow superior. It's just what I use. Every now and then, I noodle a bit on a personal editor that is to ed what Vim is to vi. At some point, I'll migrate to it.
I think there is a bit of a different philosophy that the editor folks have. I can't speak for them, but I can speak for me. I like to feel closer to the code base. I like to have more of it in my head. The best analogy I've found is that using an editor like Vim or Emacs is closer to driving with a manual transmission and with tight steering controls, compared to driving with an automatic transmission with partial self-driving features found in modern cars. There is definitely something to be said about things like adaptive cruise control, lane keeping assist, GPS navigation, etc. But, if you talk to a manual transmission enthusiast, there is a thrill of feeling closer to the road and being more engaged. Both folks arrive at the destination in the same amount of time. But, if you ask each about their experience, they will have much different views of the drive organized in their head.
To each their own.
And yet I get down-voted for expressing a well-reasoned opinion against vim and Emacs.
> I've been using vi/Vim for the past 30 years. I take it to every project and job.
I've rarely used an IDE that didn't allow custom key bindings, often with the ability to select a set from a drop-down list to match other IDEs. I've been using mostly the same keyboard shortcuts across IDEs for over 20 years.
> if you talk to a manual transmission enthusiast, there is a thrill of feeling closer to the road and being more engaged
Funny you should say that, because I regularly enrage these types by pointing out that if they can't stay engaged as a driver with an automatic transmission, then the problem is with them, not the car. This is a quasi-religious ritual with these people, and a very low-effort way to get a sense of superiority over others (i.e. every driver on the road before ~1970 had experience with a manual transmission and literally anyone can learn in a few hours. It's not a skill to be proud of).
> and a very low-effort way to get a sense of superiority over others... literally anyone can learn in a few hours.
I agree that it is a skill that is easy to learn. The same is true of IDEs. This isn't about skill or superiority, but comfort. Some folks are more comfortable being closer to the machine or the road, as it were. Others are more comfortable having some automation between them and the machine. I think it's better to consider this a matter of personal preference.
The IDE adds a layer of abstraction, and abstraction can be leaky. If you are comfortable with the abstraction, and with the opinionated choices the IDE makes, that's fine. If you are not, that's also fine. All that I ask when I'm bootstrapping a project with a team is that projects be arranged such that they are IDE / editor agnostic. Use standard build configuration / build tools that have appropriate plugins for IDEs, and can also be run in the terminal / command-line. Then, the individual developer can choose to use whichever editor or IDE that developer is comfortable using.
I prefer to use the mouse as little as possible; I feel more productive when I can stay on the home row of the keyboard. That's primarily it for me. Hotkeys are more direct, more exact, and easier to memorize than mouse motions.
It helps that Vim bindings are adopted in many places, so learning and using them carries over to browsing and even to managing windows (Vimium and AeroSpace, respectively).
Secondarily, while I don't think using the terminal is generally better than a GUI, I tend to work in the terminal anyway, so keeping text editing there makes sense.
> I've always found it weird how many people are obsessed with vi/vim and/or Emacs.
Because you've never truly done it. As someone who has seen all three sides, I can tell you this: I have never, even once, even for a second, regretted my time invested in learning Vim and Emacs. Vim is hands-down the best mental model for navigating through text - I use it everywhere: in my editor, in my terminal, in my browser; heck, I use it system-wide, in my window manager. It's immensely empowering to be able to control things without losing context - your fingertips are in control of everything, and you don't even need to shift your hand to reach the mouse or arrow keys. It also liberates you from learning a myriad of key combinations for every single app; you no longer have to learn, remember, and perform some weird dactylar dance where you sometimes can't even reach the keys without looking down at your keyboard, not to mention the ergonomics.
And then Emacs. OMG, Emacs is so amazing, you just have no idea. The things you can do in Emacs are hard to describe in words - you just need to see it.
> 90% of my code editing time is spent reading and thinking, not writing or modifying
I spend most of my time taking notes. Emacs is the best tool for that. Matter of fact, I find Emacs is the best tool for any kind of text manipulation. I don't even type anything longer than three words in any other app anymore. I'm typing this exact comment in Emacs right now. Why wouldn't I? I have all the tools I need at my disposal: spellchecking, dictionaries, translation, etymology and definition lookup, access to various LLMs (ChatGPT, Claude, Ollama, Perplexity, and others), and search engines. Here's a real, practical example: I type a search query once and it sends requests to Google, Wikipedia, GitHub, YouTube, etc. I can then pick up the YouTube URL, open the video, and control its playback while typing - pause, mute, resume, speed it up. All that with the emphasis on the main task at hand - taking notes - done without leaving the window where the notes are being typed and without having to switch focus, so your mind remains "in the zone". I'm telling you, that's some black-magic fuckery for staying productive and happy. It's enormously fun when you have complete control over the things happening on your screen.
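For illustration, here is a minimal Emacs Lisp sketch of that kind of one-query, many-engines command. The function name and engine list are made up for this example; the setup described above is likely more elaborate (e.g. built on a package like engine-mode).

    ;; Illustrative sketch only: send one query to several search engines.
    (require 'url-util)  ; for url-hexify-string

    (defvar my/search-engines
      '("https://www.google.com/search?q=%s"
        "https://en.wikipedia.org/wiki/Special:Search?search=%s"
        "https://github.com/search?q=%s"
        "https://www.youtube.com/results?search_query=%s")
      "URL templates; %s is replaced by the URL-encoded query.")

    (defun my/search-everywhere (query)
      "Open QUERY in every engine listed in `my/search-engines'."
      (interactive "sSearch everywhere: ")
      (dolist (template my/search-engines)
        (browse-url (format template (url-hexify-string query)))))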
> I've always found it weird
There's nothing truly "weird" about it. If you are a computer programmer, you do want to be in control of the computing happening on your computer. What's weird is the opposite: when computer programmers become merely "users", when they are told "you're holding it wrong" and "users don't know what they want". I, for one, do exactly what I want - I want the shit on my computer to work, and to work on my terms, not anyone else's.
With Vim you run refactoring tools as external tools.
A massive waste of time? With Vim you can do things in seconds that would take minutes, if not hours, with an IDE.
Check out:
- entr to run commands on modifying files/directories
- plain Makefiles to run your code: git://bitreich.org/english_knight
- LSP and similar tools for your language
> A massive waste of time?
I feel like you only read half of that sentence.
> entr to run commands on modifying files/directories
Alt-Tab to the command console that I always have running.
> plain Makefiles to run your code
I have no idea what the advantage is here. F5 to run my code, including scripted deployment.
> LSP and similar tools for your language
I don't know what this means.
Kind of weird to compare a sluggish, bug-ridden JavaScript application to Vim, no?
Same with Emacs, now that they've spent some time on performance.
VS Code sits in a weird limbo. It's not an IDE, and it's not an excellent editor. The plugins are usually rudimentary, but there are a lot of them. There's no community; instead, there's one of the nastiest corporations on the planet faking one.
> they had their time which is long gone
Haha, yeah, sure, but of course, no! Similar shit has been said so many times since the 1990s. Yet both Vim and Emacs still have vibrant communities and dedicated conferences, and they get mentioned almost every week here on HN and every day on Reddit.
Emacs, in experienced hands, absolutely knocks everything out of the park; it's just hands-down the best tool, with unmatched text-manipulation capabilities. Anyone who says otherwise is simply unaware of what you can do in Emacs.
Can anyone in the grand community of VSCode users claim to have a workflow that involves:
- Reading a PDF whose colors match the current color scheme? A scheme that automatically adjusts the colors based on the time of day (because Emacs has built-in solar and lunar calendars)?
- Where you annotate said PDF in your notes, can jump from the notes to places in the PDF and vice versa, and can scroll the PDF without even having to switch windows, because you're in the middle of typing?
- Where you can open a video and control its playback - pausing and resuming it in place, directly from your editor, whilst typing?
- Where you also extract subtitles and copy some chunks of text into your notes? Where you can run an LLM to summarize said transcript for your notes?
- Where you can resume video playback at some position in the transcript? Where you can watch the video and have chunks of the transcript text automatically highlighted, karaoke style?
- Where you can simply type 'RFC-xxx' and, despite that being a plain-text entry, Emacs intelligently recognizes what it is and lets you browse the RFC in place, without even googling for it? Or similarly have plain text like 'myorg/foo#34' and browse that pull request, and even perform the review, with diffs and everything?
- Speaking of googling, can you type a search query only once and let it run through different places, finding things in Google, YouTube, Wikipedia, DuckDuckGo, GitHub, your browser's history and personal emails? Or any other places, since it's highly configurable?
- Do you use translation, dictionaries, thesaurus, etymology, and definition lookup for any words and phrases in the midst of typing? I have bound "auto-correct previous typo" to a double tap of the comma key - it's super convenient (a sketch of such a binding follows this list). Can you do something like that in VS Code easily?
- Do you edit code comments and docstrings in the code, treating them as Markdown, with all the syntax highlighting, preview, and other perks?
- Do you have embedded LaTeX formulas directly in your notes?
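As a concrete illustration of the comma double-tap binding mentioned in the list above - a sketch only, assuming the key-chord package; the actual setup may well differ:

    ;; Sketch: double-tap "," to auto-correct the previous typo.
    ;; Assumes the key-chord package is installed; flyspell ships with Emacs.
    (require 'key-chord)
    (key-chord-mode 1)
    (key-chord-define-global ",," #'flyspell-auto-correct-previous-word)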
And that's just a tiny fraction of the things I personally do in Emacs - it's just the tip of the iceberg. There are tons of other interesting and highly pragmatic Emacs packages people use for various kinds of tasks. Speaking of packages: my config contains over 300 different Emacs packages, and I can still restart and load it in under a second. Can you imagine any VS Code user having even half that many plugins installed? Would that still be a "workable" environment?
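For what it's worth, loading a config with hundreds of packages that quickly usually comes down to deferred loading rather than raw speed. A minimal use-package sketch of the idea (my own illustration, not necessarily the config described above):

    ;; Packages declared with :commands / :mode / :hook are only loaded
    ;; when first used, so a large config can still start in well under a second.
    (use-package magit
      :commands (magit-status))

    (use-package pdf-tools
      :mode ("\\.pdf\\'" . pdf-view-mode))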
Thanks, but no thanks.
I don't like VS Code extensions advertising to me every 5 seconds, auto-downgrading the free versions of extensions, auto-installing aux tools every 5 seconds, or having a 400 MB RSS Chromium runtime (remember "Eight Megabytes And Constantly Swapping"? VS Code is much worse, and it's also just a plain text editor); nerfing the .NET debugger and breaking hot reload on purpose in VSCodium; telemetry... it's so noisy all the time. You are using this? On purpose?!
VS Code is basically the same idea as Emacs, just the MVP variant, with a lot of questionable technology choices (JavaScript? Electron? Then emulating terminal cells anyway and manually copying cell contents? Uhhh. What is this? Retrofuturism?) and the usual Microsoft embrace-extend-extinguish tactics (nerfing Pylance, funny license terms saying some extensions may only be used in their VS Code, etc.).
So if you didn't like Emacs, you probably wouldn't like VS Code either.
Also, if you use anything BUT Emacs for Lisp development, what do you use that doesn't have a jarring break between you and the Lisp image? Vim seems weird for that use case :)
Emacs is very, very good for Lisp development.
On the other hand, VS Code for Lisp is very flaky, and VS Code regularly breaks your Lisp projects. Did you try it?
Because of your comment I tried VS Code again, and now about 20 extensions (one of them "Alive", a Lisp extension for VS Code) complain about a now-missing
"Dev container: Docker from Docker Compose"
(keep in mind they worked before and I didn't change anything in VS Code - I hadn't even run it for 8 months or so). When I try to fix that by clicking on the message in the extension manager, the message immediately disappears from all 20 extensions in the manager (WTF?) and I get:

    ./logs/20250112T181356/window1/exthost/ms-vscode-remote.remote-containers/remoteContainers-2025-01-12T17-13-58.234Z.log:
    >>>> Executing external compose provider "/home/dannym/.guix-home/profile/bin/podman-compose". Please see podman-compose(1) for how to disable this message. <<<<
    a239310d8b933dc85cc7671d2c90a75580fc57a309905298170eac4e7618d0c1
    Error: statfs /var/run/docker.sock: no such file or directory
    Error: no container with name or ID "serverdevcontainer_app_1" found: no such container
... because it's using Podman (I didn't configure that - VS Code did it on its own, incompletely. It also assumes this means a Docker/Podman service running as root has to be a thing, instead of rootless Podman. The funny thing is, I use Podman extensively. I don't want to know how bad it would be if I HADN'T already set Podman up).
So it didn't actually fix anything, but it removed the error message. I see. And there's no REPL for the editor--so I can't actually find out details, let alone fix anything.
I had thought Emacs DX was bad, but I've revised my opinion now: compared to VS Code DX, Emacs DX is great. You can live with VS Code if you want to.
And note, VS Code was made after Emacs was. There's no excuse for this.
I think this now was about all the time that I want to waste on this thing, again.
How is this a problem in 2025? shakes head
> VS Code managed to surpass the quality and amount of extensions/plugins in a fraction of the time; Emacs took decades.
Yeah? Seems to me these VS Code extensions are written in crayon. Quality that bad would never make it into Emacs mainline. And it's not even strictly about that! I wonder who would write a developer tool in which the developer can't easily debug their own extensions (yes, I know about Ctrl-Shift-P). That flies about as well as a lead balloon.
For comparison, there's Emacs bufferenv, which does dev containerization like this, and it works just fine. Configuration: one line - the names of the container files one wants it to pick up. Also, if I wanted to debug what it did (which is rare), I could just evaluate any expression whatsoever in Emacs ("Alt-ESC : «expression»" anywhere).
PS: manually running "podman-compose up" in an example project as a regular user works just fine - it starts up the project and everything needed. So what are they overcomplicating here? Are pipes too hard?
PPS: I've read a blog article about making socket activation work for rootless Podman [1], but it's not really about VS Code. Instead, it talks about how one would set up "linger" so that the container stays up when I'm logged out. So that's not for dev containers (why would I possibly want that there? I'm not going to create Heisenbugs for myself :P).
[1] https://github.com/containers/podman/blob/main/docs/tutorial...