A Love Letter to FreeBSD (tara.sh)
423 points by rbanffy 2 days ago
26 years of FreeBSD and counting...
IIRC around '99 I got sick of Mandrake and Red Hat RPM dependency hell and found a FreeBSD 3 CD in a Walnut Creek book. Ports and BSD packages were a revelation, to say nothing of the documentation, which still sets it apart from the more haphazard Linux world.
The comment about using a good SERVER mobo like Supermicro is on point --- I managed many Supermicro FreeBSD colo rack servers for almost 15 years and those boards worked well with it.
Currently I run FreeBSD on several home machines, including old Mac minis repurposed as media machines throughout the house.
They run Kodi plus the Linux build of Brave, and with that I can stream pretty much anything, including live sports.
There's also OpenBSD on one firewall and pfSense (FreeBSD) on another.
> The comment about using a good SERVER mobo like Supermicro is on point --- I managed many Supermicro FreeBSD colo rack servers for almost 15 years and those boards worked well with it.
I completely agree.
Supermicro mobos with server-grade components, combined with aggressive cooling fans/heat sinks, running FreeBSD in an AAA data center resulted in two prod servers having uptimes of 3000+ days. This included dozens of app/jail/ports updates (pretty much everything other than the kernel).
Back when I was a sysadmin (roughly 2007-2010), the preference of a colleague (RIP AJG...), who ran a lot of things before my time at the org, was FreeBSD, and I quickly understood why. We ran Postgres on 6.x as the db for a large Jira instance, while Jira itself ran on Linux, iirc because I went with JRockit, which ran circles around any other JVM at the time. Those Postgres boxes had many years of uptime, locked away in a small colo facility, never failed, and outlived the org, which got merged and chopped up. FreeBSD was just so snappy, and it just kept going. At the same time I ran ZFS on FreeBSD as our main file store for NFS and whatnot -- snapshots, send/recv replication and all.
And it was all indeed on Supermicro server hardware.
And in parallel, while our routing kit was mostly Cisco, I put a transparent bridging firewall in front of the network running pfSense 1.2 or 1.3. It was one of those embedded boxes running a VIA C3 (Nehemiah), which had the VIA PadLock crypto engine that pfSense supported. Its AES-256 performance blew away our Xeons and the crypto accelerator cards in our midrange Cisco ISRs - cards costing more than that C3 box. It had a failsafe Ethernet passthrough for when power went down, and it ran FreeBSD. I've been using pfSense ever since, commercialisation / Netgate aside, force of habit.
And although for some things I lean towards OpenBSD today, FreeBSD delivers, and it has for nearly 20 years for me. And, as they say, it should for you, too.
> uptimes of 3000+ days
Oof, that sounds scary. I’ve come to view high uptime as dangerous… it’s a sign you haven’t rebooted the thing enough to know what even happens on reboot (will everything come back up? Is the system currently relying on a process that only happens to be running because someone started it manually? Etc)
Servers need to be rebooted regularly in order to know that rebooting won’t break things, IMO.
>> uptimes of 3000+ days
> Servers need to be rebooted regularly in order to know that rebooting won’t break things, IMO.
the only thing we have to fear is fear itself[0]
Worrying about critical processes that were started manually and will not be restarted if a server is rebooted carries the same risk as those same processes crashing while the server is operational. Best practice is to leverage the built-in support for "Managing Services in FreeBSD"[1] for deployment-specific critical processes (a minimal example follows the links below). Now, if there is a rogue person who fires up a daemon[2] manually instead of following the above, then there are bigger problems in the organization than what happens when a server is rebooted.
0 - https://www.gilderlehrman.org/history-resources/spotlight-pr...
1 - https://docs.freebsd.org/en/books/handbook/config/#configtun...
2 - https://docs.freebsd.org/en/books/handbook/basics/#basics-pr...
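To make that concrete, a minimal sketch of the handbook approach (mydaemon is a hypothetical service that ships an rc.d script): enable it in /etc/rc.conf so it survives reboots, and drive it with service(8).

  # /etc/rc.conf -- persisted across reboots
  mydaemon_enable="YES"

  # day-to-day management through the rc framework
  service mydaemon start
  service mydaemon status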
Depends how they are built. There are many embedded/real-time systems that expect this sort of reliability too of course.
I worked on systems that were allowed 8 hours of downtime per year -- but otherwise would have run forever unless a nuclear bomb went off or there was a power loss... Tandem. You could pull out CPUs while running.
So if we are talking about garbage Windows servers, sure. It's just a question of what is accepted by the customers/users.
> I worked on systems that were allowed 8 hours of downtime per year -- but otherwise would have run forever unless a nuclear bomb went off or there was a power loss... Tandem. You could pull out CPUs while running.
Tandem servers were legendary for their reliability. I knew h/w support engineers years ago who told me stories like yours, recounting being able to pull components (such as CPUs) without affecting system availability.
I still remember AJG vividly to this day. He also once told me he was a FreeBSD contributor.
My journey with FreeBSD began with version 4.5 or 4.6, running in VMware on Windows and using XDMCP for the desktop. It was super fast and ran at almost native speed. I tried Red Hat 9, and it was slow as a snail by comparison. For me, the choice was obvious. Later on I was running FreeBSD on my ThinkPad, and I still remember the days of coding on it with my professor's linear/non-linear optimisation library, sorting out the wlan driver and firmware to use the library wifi, and compiling Mozilla on my way home while the laptop was in my backpack. My personal record: I never messed up a single FreeBSD install, even when I was completely drunk.
Even later, I needed to monitor the CPU and memory usage of our performance/latency critical code. The POSIX API worked out of the box on FreeBSD and Solaris exactly as documented. Linux? Nope. I had to resort to parsing /proc myself, and what a mess it was. The structure was inconsistent, and even within the same kernel minor version the behaviour could change. Sometimes a process's CPU time included all its threads, and sometimes it didn't.
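For illustration (the exact calls are from memory, so treat this as a sketch): getrusage(2) is the flavour of portable POSIX interface I mean -- CPU time and peak RSS for the calling process, no /proc parsing required.

  #include <stdio.h>
  #include <sys/time.h>
  #include <sys/resource.h>

  int main(void) {
      struct rusage ru;
      if (getrusage(RUSAGE_SELF, &ru) != 0) {
          perror("getrusage");
          return 1;
      }
      /* user + system CPU time consumed by this process */
      printf("cpu: %ld.%06ld user, %ld.%06ld sys\n",
             (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
             (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
      /* peak resident set size (units are platform-dependent) */
      printf("max rss: %ld\n", ru.ru_maxrss);
      return 0;
  }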
To this day, I still tell people that FreeBSD (and the other BSDs) feels like a proper operating system, and GNU/Linux feels like a toy.
> My journey with FreeBSD began with version 4.5 or 4.6, running in VMware on Windows and using XDMCP for the desktop. It was super fast and ran at almost native speed.
Wow, this brings back some memories. I remember being on a gig that mandated locked-down Windows laptops, but VMware was authorized.
So I fired up FreeBSD inside VMware, running X with fluxbox[0] as the window manager. Even with multiple rxvt terminals and Firefox running, the memory used by VMware was less than MS Word with a single empty document!
All hail the mighty Wombats!
The "completely drunk" comment made me chuckle, too familiar... poor choices, but good times!
This is more about OpenBSD, but worth mentioning that nicm of tmux fame also worked with us in the same little office, in a strange little town.
AJG also made some contributions to Postgres, and wrote a beautiful, full-featured web editor for BIND DNS records, which, sadly, faded along with him and was eventually lost to time; his domain, tcpd.net, has since expired and been taken over.
It really is amazing how much success Linux has achieved given its relatively haphazard nature.
FreeBSD always has been, and always will be, my favorite OS.
It is so much more coherent and considered, as the post author points out. It is cohesive; whole.
> It really is amazing how much success Linux has achieved given its relatively haphazard nature.
That haphazard nature is probably part of the reason for its success, since it allowed many alternative ways of doing things to be tried out in parallel.
That was my impression from diving into The Design & Implementation of the FreeBSD Operating System. I really need to devote time to running it long term.
Really great book. Among other things, I think it's the best explanation of ZFS I've seen in print.
Linux has turned haphazardry into a strength. This is impressive.
I prefer FreeBSD.
I like the haphazardry but I think systemd veered too far into dadaism.
THIS. As bad as launchctl on Macs. Solution looking for a problem so it causes more problems -- like IPv6
> Solution looking for a problem
Two clear problems with the init system (https://en.wikipedia.org/wiki/Init) are
- it doesn’t handle parallel startup of services (sysadmins can tweak their init scripts to speed up booting, but init doesn’t provide any assistance)
- it does not work in a world where devices get attached to and detached from computers all the time (think of USB and Bluetooth devices, WiFi networks).
The second problem was solved in init systems in an evolutionary way, by having multiple daemons doing basically the same thing: listening for device attachments/detachments and handling them. Unifying that in a single daemon, IMO, is a good thing. If you accept that, making that single daemon the init process makes sense too, as it also gives you a solution for the first problem.
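To make the first point concrete, here is a minimal unit sketch (myapp is just a placeholder): because dependencies are declared rather than encoded in script ordering, the init system can start every unit whose dependencies are satisfied in parallel.

  # /etc/systemd/system/myapp.service (illustrative)
  [Unit]
  Description=Example application
  After=network-online.target
  Wants=network-online.target

  [Service]
  ExecStart=/usr/local/bin/myapp

  [Install]
  WantedBy=multi-user.target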
Yes, "a solution". We need a thing. Systemd is a thing. Therefore, we need systemd.
Not to get into a flame war, but 99% of my issues with systemd are that they didn't just replace init, but also NTP, DHCP, logging (this one is arguably necessary, but they made it complicated, especially if you want to send logs to a centralized remote location or use another utility to view logs), etc. It broke the fundamental historical concept of Unix: do one thing very well.
To make things worse, the opinionated nature of systemd's founder (Lennart Poettering) has meant many a sysadmin has had to fight with it in real-world usage (e.g. systemd-timesyncd's SNTP client not handling drift very well, or systemd-networkd not handling real-world DHCP fields). His responses -- "Don't use a computer with a clock that drifts" or "we're not supporting a non-standard field that the majority of DHCP servers use" -- just don't fly in the real world. The result was going to be ugly. It's not surprising that most distros ended up bundling chrony, etc.
> (this one is arguably necessary, but they made it complicated, especially if you want to send logs to a centralized remote location or use another utility to view logs)
It is not complicated at all. Recent enough versions of systemd support journal forwarding, but even without it, configuring rsyslog is extremely easy:
1. Install rsyslog
2. Create a file /etc/rsyslog.d/forwarding.conf containing:
$ActionForwardDefaultTemplate RSYSLOG_ForwardFormat
*.* @@${your-syslog-server-here}:514
3. Restart rsyslog
4. Profit.
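And if you want the journal itself to hand entries to that local syslog daemon, it is one more line (a sketch -- check journald.conf(5) for your systemd version):

  # /etc/systemd/journald.conf
  [Journal]
  ForwardToSyslog=yes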
You can't be serious thinking that IPv4 doesn't have problems
Of course not.
But IPv6 is not the solution to IPv4's issues at all.
IPv6 is something completely different, justified post facto with EMOTIONAL arguments, i.e. "You are stealing the last IPv4 address from the children!"
- Dual stack -- unnecessary and bloated
- Performance = 4x worse or more
- No NAT or private networks -- not in the same sense. People love to hate on NAT, but I do not want my toaster on the internet with a unique hardware serial number.
- Hardware tracking built into the protocol -- the mitigations offered are BS.
- Addresses are a cognitive block
- Forces people to use DNS (centralized), which acts as a censorship choke point.
All we needed was an extra prefix to select WHICH address space - i.e. '0' is the old internet in 0.0.0.0.10 --- backwards compatible, not dual stack, no privacy nightmare, etc.
I actually wrote a code project that implements this network as an overlay -- but it's not ready to share yet. Works though.
If I were to imagine myself in the room deciding on the IPv6 requirements, I expect the key one was 'track every person and every device everywhere, all the time', because if you are just trying to expand the address space then IPv6 is way, way overkill -- it's overkill even as future-proofing for the next 1000 years of all that privacy invading.
> All we needed was an extra prefix to select WHICH address space - i.e. '0' is the old internet in 0.0.0.0.10 --- backwards compatible, not dual stack, no privacy nightmare, etc.
That is what we have in IPv6. What you write sounds good/easy on paper, but when you look at how networks are really implemented you realize it is impossible to do that. Network packets have to obey the laws of bits and bytes, and there isn't any place to put that extra '0' in IPv4: no matter what, you have to create a new protocol, i.e. IPv6. They did write a standard for how to carry IPv4 addresses in IPv6 (IPv4-mapped addresses), but anyone who doesn't have IPv6 themselves can't use that, and so we must dual stack until everyone transitions.
Actually there is a place to put it... I didn't want to get into this but since you asked:
My prototype/thought experiment is called IPv40, a 40-bit extension to IPv4.
IPv40 addresses are carried over legacy networks using the IPv4 options field (option type 35).
Legacy routers ignore option 35 and route based on the 32-bit destination (effectively forcing traffic to "Space 0"). IPv40-aware routers parse option 35 to switch universes.
This works right now but as a software overlay not in hardware.
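To give a flavour of the encoding (a simplified sketch, not the exact layout the overlay uses): the option is just type/length plus the extra "space" octets that stretch the 32-bit addresses to 40 bits.

  #include <stdint.h>
  #include <stdio.h>

  /* Simplified sketch of an "IPv40" option: one possible 4-byte layout. */
  struct ipv40_opt {
      uint8_t type;      /* option type 35 */
      uint8_t len;       /* total option length in bytes */
      uint8_t src_space; /* extra high octet of the 40-bit source address */
      uint8_t dst_space; /* extra high octet of the 40-bit destination */
  };

  int main(void) {
      /* space 0 = the legacy internet; destination lives in space 10 */
      struct ipv40_opt opt = { 35, 4, 0, 10 };
      printf("option bytes: %02x %02x %02x %02x\n",
             (unsigned)opt.type, (unsigned)opt.len,
             (unsigned)opt.src_space, (unsigned)opt.dst_space);
      return 0;
  }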
Just my programming/thought experiment which was pretty fun.
When solutions are pushed top-down like IPv6, my spider sense tingles -- what problem is it solving? The answer is NOT 'to address the address-space limitations of IPv4'; that is the marketing, and if you challenge it you will be met with ad hominem attacks and emotional manipulation.