I love how easy it is to be a part of FreeBSD:
Open an account at https://bugs.freebsd.org/bugzilla/
Go to https://portscout.freebsd.org/ and find your outdated port (or a port without a maintainer ([email protected]))
Update the port's Makefile, open a bug report, attach your diff, and that's it... or additionally ask to take over maintainership of the port.
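In command form, that workflow is roughly this (port name and paths are examples; I'm assuming a git checkout of the ports tree):

```sh
# Sketch: bump a port to a new upstream version
git clone https://git.freebsd.org/ports.git /usr/ports
cd /usr/ports/www/some-port        # hypothetical port
vi Makefile                        # bump DISTVERSION, reset PORTREVISION
make makesum                       # refresh the distinfo checksums
make stage                         # build and stage the port
make check-plist                   # verify the packing list still matches
git diff > some-port.diff          # attach this diff to the Bugzilla PR
```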
I was introduced to FreeBSD (v3.3) in the late 90s by /user?id=gjvc. I bought the CD set and the FreeBSD Handbook in paperback format from The FreeBSD Mall.
I was too young to appreciate it back then, but now in my mid-40s I find myself harking back to those early days. It's a shame that some cloud providers like DigitalOcean and Hetzner have dropped native support for FreeBSD as a base operating system for their VPSes. I think this release will be the turning point for me getting back into FreeBSD after too many years away.
Thanks to the FreeBSD release team!
Yup.. FreeBSD was awesome because the FreeBSD Handbook has always been top-notch. It covers everything you need to install and administer FreeBSD plus many of its packages.
Talking about Hetzner: they'll write an image to a USB stick and put it in your server (at no cost). Since it's a real server, I don't need any "native" support from them. Otherwise, Oracle Cloud or Vultr.
But you are right, it's sad that Hetzner dropped the hands-off "webinstall" support.
Azure still offers them. Once I opened a ticket to complain that it is listed under "Linux custom images"; however, the documentation team decided my complaint was without reasoning(!).
My early FreeBSD moment was when I had a cable modem, and you were able to download one or two boot floppies. After you booted them up, you could install the entire OS from the network. No CD needed.
I assume it just downloaded everything straight from FTP servers.
https://tornadovps.com/ (formerly prgmr) has first-class FreeBSD support. I've been with them since 2011.
I run FreeBSD on a Hetzner cloud VPS. I don't remember exactly how I did it, but I think I uploaded the install medium through the server console. Wasn't too much hassle, IIRC.
The problem with FreeBSD is that it couldn't keep up. No containers or VMs (jails and their homegrown HV don't cut it), fewer drivers, etc. It's great for some server use cases but I just couldn't do my work in FreeBSD. I did like it though, the docs and community are great.
If it's a FreeBSD VPS you're after, I'd suggest you give UpCloud a chance. I'm currently running a few FreeBSD VPSs on UpCloud and I have not run into any issues. It's kinda great!
You can easily fool DigitalOcean into running FreeBSD with custom images \o/
From the release notes, it appears this may be the last release with i386 / 32-bit Intel x86 (as well as 32-bit armv6 and PowerPC) support.
“FreeBSD 15.0 is not expected to include support for 32-bit platforms other than armv7. The armv6, i386, and powerpc platforms are deprecated and will be removed. 64-bit systems will still be able to run older 32-bit binaries.“
Probably 14.3 will be the last release with i386. But yes, 14.x will be the last major branch with i386.
Surprised that armv7 will be keeping 32-bit support but not x86. I know ARM is huge, but its platform support is also more fragmented than an x86 box. Can anyone share some more info on this?
Also surprised they are cutting Power. That is one of the 4 platforms RHEL supports.
FreeBSD 14.0 Release Information - 38291436 - Nov 2023 (6 comments)
FreeBSD 14.0 has reached – RELEASE - 38219578 - Nov 2023 (93 comments)
FreeBSD 14.0-RC1 Now Available - 37881293 - Oct 2023 (17 comments)
FreeBSD 14.0-BETA2 Now Available - 37532706 - Sept 2023 (7 comments)
I think a lot of the work for serving 800Gbps of TLS encrypted traffic from Netflix landed on FreeBSD 14.
Can't wait to see if they do 1600Gbps.
IIRC the limit right now is per-socket CPU memory bandwidth and inter-socket bandwidth. There just isn't enough bandwidth available to treat dual-socket Xeon or EPYC systems as a single node, and the networks colocating their appliances aren't able to steer connections to the NICs in the same NUMA domain as the NVMe storage holding the data users want.
When will the torrents be created and released?
EDIT: Looks like they're up now!
> FreeBSD supports up to 1024 cores on the amd64 and arm64 platforms.
Sounds pretty future-proofed, unless I'm missing an x86 processor out there that exceeds this.
If you combine multiple CPU sockets, you could get there. EPYC, IIRC, supports up to 64 cores per chip, so if you build a machine with 16 sockets, you get 1024 cores. To my knowledge, no such machine exists today, but HPE offers (or used to, anyway) a machine with 32 Xeon chips, so its core count could well reach several hundred. (I may or may not be drooling at the thought.)
> up to 1024
Curious where this (rather large, yet still seemingly arbitrary) limit comes from.
"supports up to" doesn't have to mean "works well/optimally with".
I tried out FreeBSD and loved it, between the documentation, cohesion, and the ports system.
Unfortunately, I need Docker for work on a few different projects -- one for Supabase migrations, and another project that's orchestrated (in development too) via docker-compose.
Highly recommend it otherwise.
You could run Docker in a Linux VM, which is what Docker Desktop does anyway. FreeBSD has Bhyve for this.
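If you go that route, the vm-bhyve port makes the VM side fairly painless. A rough, from-memory sketch (NIC, dataset, and template names are examples, and the installer URL is a placeholder):

```sh
pkg install vm-bhyve
zfs create zroot/vm
sysrc vm_enable="YES" vm_dir="zfs:zroot/vm"
vm init
cp /usr/local/share/examples/vm-bhyve/* /zroot/vm/.templates/  # sample guest templates
vm switch create public
vm switch add public em0              # bridge to your NIC
vm iso <debian-installer-url>         # fetch an installer image
vm create -t debian -s 20G dockerhost
vm install dockerhost debian-12-netinst.iso
vm console dockerhost                 # finish the install, then set up Docker inside
```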
I've been running FreeBSD for homelab stuff for years now and the documentation is a huge pain point IMO. The handbook is okay, but beyond that it's pretty poor.
E.g. every single major upgrade in recent memory has shat the bed. There's always a new reason, at one point it's because I rolled past the 3AM deadline and the periodic scripts absolutely fucked freebsd-update. So this time around I thought it'd be nice to script the 13.x install so I'd have a nice repeatable process.
Except the documentation around unattended installs still references sysinstall (which was replaced eons ago) in some parts. After quite a lot of digging I realized the automation story is "roll your own ISO". Nothing that even comes close to kickstart or quickstart in Linux land (geee no wonder AWS adoption is fairly low).
So I dug into some stuff that would've made automated installs from a stock ISO easier, got a proof of concept working and fired off an email to one of the names on the current installer (which is still missing features from sysinstall!). And that's where the story ends. I'm ready to get off of this train, and were it not for ZFS I would've already bailed.
Don't get me started on the ports tree.
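For what it's worth, the supported automation hook is bsdinstall(8)'s scripted mode: an `installerconfig` file on the install media (so yes, still roll-your-own ISO). A rough sketch — the preamble sets installer variables, and everything after the `#!/bin/sh` runs chrooted into the new system:

```
# installerconfig (preamble)
PARTITIONS=ada0
DISTRIBUTIONS="kernel.txz base.txz"

#!/bin/sh
# post-install setup script
sysrc sshd_enable="YES"
sysrc ntpd_enable="YES"
pkg install -y sudo
```

It's nowhere near kickstart, but it does give you a repeatable install once you've rebuilt the ISO around it.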
I would not run FreeBSD in a production environment without a good reason. If you're already tied to docker that's a great reason to stay with Linux.
I want to love FreeBSD, but there are some things I wish were easier, like getting the pf firewall set up. When I install Debian with ufw I get a really nice starting ruleset that works well with IPv6, good ICMP filtering, etc. With FreeBSD I was confused for a while about how to get IPv6 working with (the very powerful) pf, for which you have to write a config file completely from scratch. I was left with a lot of suggestions and snippets while struggling to dig through the man pages and set all the complex rules for which types of ICMP messages to filter, etc. I wish there were an easier way to get going with the firewall: a good ready-made pf.conf file for a web server that works well with IPv6. Yes, the power and easy customizability of pf is great. But for many users who aren't network experts, some nice, accepted starting templates would be great.
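For what it's worth, here's the kind of minimal dual-stack web-server pf.conf I've seen suggested — treat it as a starting sketch, not gospel (the interface name and the "allow all ICMPv6" stance are assumptions you should revisit for your own setup):

```
# /etc/pf.conf -- minimal dual-stack web server (sketch)
ext_if = "vtnet0"              # adjust to your interface

set skip on lo0
block in log                   # default deny inbound
pass out keep state            # allow all outbound

# ICMP: echo and unreachables for v4; ICMPv6 is load-bearing, allow it
pass in inet proto icmp icmp-type { echoreq, unreach }
pass in inet6 proto ipv6-icmp

# Services
pass in on $ext_if proto tcp to port { 22, 80, 443 } keep state
```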
"Like getting the firewall pf set up"
pfSense 2.7.0 is FreeBSD 14 based already and 2.7.1 was released todayish. You could try tearing their scripts apart to see what's what but bear in mind that pfSense is designed to be a router/firewall not a host based firewall, which sounds like what you really want.
It sounds like you want ufw or firewalld for FreeBSD. No idea if it exists, and I am well past DIY - I had custom scripts for ipfw, ipchains and iptables on Linux and then gave up. I don't use FreeBSD on the desktop but if I did ...
or keep it simple:
You mention a web server. I suggest you keep the host firewall simple, this is in pseudo code:
allow ssh from LAN
allow monitoring_ip to monitoring_ports
drop blocklist_ips to ALL
allow https from ALL to webserver_ip
If you can, consider deploying multiple VLANs. This does raise the technical bar somewhat! Host firewalls are just as good for small setups. Decide on what your security requirements really are and work on from there. I will grant you that is quite tricky for the uninitiated but keep asking questions and ducking the inevitable "RTFM" style answers from entitled numbskulls and you will get there.
Good luck 8)
Well, there are some examples:
But yeah, that pf.conf could be expanded a lot, though there are many sources to cobble a conf together from. My conf is massive but 99.9% commented out, so I have my "template" for nearly everything, from mail to web to blacklistd etc.
It's been forever since I really played with firewalls, but I remember pf being much more thought-out than iptables.
I'm a fairly new FreeBSD user and this will be one of my first major upgrades. What should I be aware of when performing major upgrades? On Linux, I would avoid them and just start from a clean system. Curious what more experienced users think.
With care - and upgrading a test system or two first to re-familiarize myself - it's worked fine for me. Two tips:
- Understand `gpart bootcode` (or equivalents), and be really sure that your low-level bootcode gets upgraded.
- If you're running zfs, `zpool checkpoint` can give you a way to rewind the entire state of the pool to a prior point. Used with some care, it can be extremely useful. Or just reassuring, as your "Plan B".
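Concretely, for a GPT/ZFS system those two tips look something like this (device and pool names are examples):

```sh
# Refresh the boot blocks after the upgrade (freebsd-boot partition index 1 here)
gpart bootcode -p /boot/gptzfsboot -i 1 ada0
# Or, for UEFI, copy the new loader onto the ESP:
# cp /boot/loader.efi /boot/efi/efi/boot/bootx64.efi

# Take a pool-wide checkpoint before upgrading; rewind if it goes sideways
zpool checkpoint zroot
# ...later, to rewind: zpool export zroot && zpool import --rewind-to-checkpoint zroot
# or, when all is well: zpool checkpoint -d zroot
```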
I've had some issues with the opensmtpd port when upgrading from 13 to 14, which is probably due to the OpenSSL upgrade. Apart from that, the upgrade process tends to be pretty simple; the updater warns you of any potential problems. As with any upgrade, take a backup first so you can restore to 13 if you need to.
I just upgraded to FreeBSD 14 from 13 and it was fairly clean. It asks you to evaluate diffs for a few config file changes. The only issue I had was that sudo was uninstalled for some reason so I had to go back to my server console, go in as root and re-install sudo.
If you're talking about a home lab or workstation environment I would 100% back everything up and start from a clean system. Go back to FreeBSD 9 or 10, in-place upgrades have consistently bit me in the ass. Installer getting wedged, problems upgrading the boot loader, disk labels, freebsd-update getting wedged, you name it.
If you're talking about a production environment, ideally your machines are more-or-less immutable anyways in which case you wouldn't be upgrading in place anyhow.
Compared to RHEL and CentOS, FreeBSD upgrading is a breeze. Debian is easiest. I have done something like 100 different FreeBSD upgrades and most have been without any issues.
If you use jails, you have to upgrade them as well; there are guides for this.
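For thick jails, freebsd-update can target the jail's root directly — roughly (paths and releases are examples):

```sh
freebsd-update -b /usr/jail/www --currently-running 13.2-RELEASE -r 14.0-RELEASE upgrade
freebsd-update -b /usr/jail/www install   # re-run until it reports nothing left to do
service jail restart www
pkg -j www upgrade                        # rebuild packages against the new ABI
```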
Finally FreeBSD has fast WiFi?
"WiFi 6 support has been added to wpa (wpa_supplicant(8) and hostapd(8)). c1d255d3ffdb 3968b47cd974 bd452dcbede6"
> The iwlwifi(4) driver for Intel wireless interfaces has been updated to the latest version, supporting chipsets up to WiFi 6E AX411/AX211/AX210, and with preparations for upcoming BX and SC chipsets. (Sponsored by The FreeBSD Foundation)
Yes, but only if your card's driver does too. Mine uses iwm, which makes me sad:
>Currently, iwm only supports 802.11b and 802.11g modes. It will not associate to access points that are configured to operate only in 802.11n or 802.11ac modes.
Thankfully, 802.11a seems to work, so I can use my 5 GHz radio. But it's not fast.
Thanks to everyone who made FreeBSD possible! Cheers!!
Full release notes at:
*  https://www.freebsd.org/releases/14.0R/announce/
I hope we eventually get .NET support for FreeBSD
I have two linux VMs that run .NET processes that I'd love to put in a jail.
I have always been interested in FreeBSD, and it seems my ideal environment is easy enough to get going on *BSD (Xorg with a custom DWM), but I have never been able to pull it off; my machine wasn't able to boot FreeBSD for some reason.
RACK? No mention of RACK or BBR. I thought the kld was being enabled by default in this release cycle.
or is this "old news" and it was rolled into an older release?
See "Request for Testing: TCP RACK" at:
tcp_rack(4) has been available since FreeBSD 13.0, just not the default:
An article from 2021:
* 2021 Discussion: 28549370
If you really want BBR, you can build a custom kernel: https://www.linkedin.com/pulse/frebsd-13-tcp-bbr-congestion-...
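For RACK specifically, a custom kernel shouldn't be needed — the stack is a loadable module (built by default on stock 14.0, as far as I can tell; BBR is the one that historically needed the kernel rebuild):

```sh
# Load the RACK TCP stack and make it the default
sysrc -f /boot/loader.conf tcp_rack_load="YES"   # or, right now: kldload tcp_rack
sysctl net.inet.tcp.functions_available          # list the available stacks
sysctl net.inet.tcp.functions_default=rack       # "freebsd" is the stock default
```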
Congratulations! Here's a summary of the highlights from the release announcement :
- OpenSSH has been updated to version 9.5p1.
- OpenSSL has been updated to version 3.0.12, a major upgrade from OpenSSL 1.1.1t in FreeBSD 13.2-RELEASE.
- The bhyve hypervisor now supports TPM and GPU passthrough.
- FreeBSD supports up to 1024 cores on the amd64 and arm64 platforms.
- ZFS has been upgraded to OpenZFS release 2.2, providing significant performance improvements.
- It is now possible to perform background filesystem checks on UFS file systems running with journaled soft updates.
- Experimental ZFS images are now available for AWS and Azure.
- The default congestion control mechanism for TCP is now CUBIC.
> - ZFS has been upgraded to OpenZFS release 2.2, providing significant performance improvements.
Post-2.2 OpenZFS has RAID-Z expansion committed:
Also committed to FreeBSD -HEAD/development:
- The bhyve hypervisor now supports TPM and GPU passthrough
Super nice.. I'm really looking forward to more separation between OS installs, similar to Qubes.
- cperciva (also submitter of this post) now head of the releng team
A. Huge thanks for all involved in FreeBSD.
It's amazing how polished, supported and performant it is for the relative size of the team involved.
B. Please consider donating.
C. I have much love for FreeBSD, and as such, these are things I hope get addressed in the next major version (15.0):
- turning all internet facing services (except ssh) off, by default. OpenBSD does this.
- move all non-core things out of the base, like sendmail (now DMA, what a nice import from DFly btw)
- the base should only have one way to do things (don’t have 3 different firewalls in base like today)
- better defaults, https://vez.mrsk.me/freebsd-defaults.html
- something like io_uring (async sendfile is similar, but that's only for sendfile)
Thank you again for an amazing OS.
EDIT: I updated the first bullet of C for more clarity.
> - turning all services (except ssh) off, by default. OpenBSD does this.
I think people would be rightfully upset if syslogd, cron, and getty weren't started by default. moused and a mailer daemon I get not wanting to start. What else starts by default that you don't want?
> - the base should only have one way to do things (don’t have 3 different firewalls in base like today)
I dunno about ipf; but ipfw and pf don't have complete overlap --- I need to use both to run my network how I want to (pfsync has no equivalent in ipfw, ipfw pipe/queue/sched doesn't have an equivalent in pf)
Oh, look who has a fancy cable modem! Back in my day, we had to do it with 14.4 kbaud modems (really) after walking to school, uphill both ways in the snow (not really).
I would guess you had a 14.4 kbps modem operating on 9600 baud.
If you want to automate FreeBSD deployments on Hetzner Cloud you can try:
(Allows you to provision instances using either the hcloud utility or the web UI, with ssh key/user-data support)
They have had jails for 20+ years at this point. I would consider FreeBSD jails and containers on Linux equivalent at this point with overall the FreeBSD jails being better to manage if you do some scripting. The bhyve hypervisor, I haven't played with enough to form an opinion on.
Unless of course you're trying to sell to an enterprise whose core competency isn't computational, and which has limited capacity to manage new operating systems in the fleet. Lots of big orgs can only support so many things. It seems odd until you realize they're not running your service; they're running thousands of services across multiple geographic regions with hundreds of thousands of corporate users. And all the upgrade paths that go with this whole thing, and the external-facing integrations, etc, etc.
So, sure, if you've got a stand-alone B2C service that's making money today, enjoy your FreeBSD, SUSE, whatever. But if your clients include big banks, chunks of governments, etc., think really hard about going off the reservation.
> I just couldn't do my work in FreeBSD
Jails and bhyve have been fantastic for my pipeline. Sure, you don't get a fancy GUI like VMware provides, but the hypervisor is solid.
There's an industrial computer chip still using ARM9 (aka ARMv5!!), let alone ARMv7: the latest Atmel SAM microprocessor, for example, was released in 2020. While ARM9 / ARMv5 is abnormally out-of-date (lol, the Nintendo DS was ARMv6), it's still getting new chips even today.
ARMv7, consisting of Cortex-A5, A7, and similar chips, is also similarly widespread today. I don't know how much FreeBSD support there is but I can think of multiple chips that have been made in the past 5 years that are still 32-bit ARMv7.
In an embedded world that still buys 8-bit computers, 32-bit is a luxury and 64-bit is just too much.
I'm only familiar with these chips from a Linux perspective however. But I have to imagine that some FreeBSD fanboi is hard at work porting FreeBSD to them!
EDIT: Lets see.... https://www.freebsd.org/platforms/arm/
Oh snap, Xilinx Zynq7 family. Yeah, that will do it. That's an extremely common chip (FPGA + ARMv7 / Cortex-A9).
> ... some FreeBSD fanboi is hard at work porting ...
> Also surprised they are cutting Power. That is one of the 4 platforms RHEL supports.
They're cutting 32-bit PowerPC. It looks like powerpc64le support remains in FreeBSD 14.
Aside from 32-bit ARM being used in more small embedded systems, I think it has a 64-bit time_t. One of the reasons for killing off i386 is the Y2038 issue.
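For anyone following along, the Y2038 arithmetic: a signed 32-bit time_t tops out at 2^31 - 1 seconds past the epoch (03:14:07 UTC on 19 January 2038) and then wraps negative:

```shell
# Demonstrate the signed 32-bit wraparound with plain shell arithmetic
max=2147483647                      # 2^31 - 1, the last second a 32-bit time_t can hold
wrap=$(( (max + 1) - 4294967296 ))  # one second later, reinterpreted as two's complement
echo "$max $wrap"
```

That wrapped value lands back in December 1901, which is why 64-bit time_t matters for anything expected to still be running in 2038.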
Ah, makes sense. That is a real issue for Linux, and for how distros will handle it, besides cutting 32-bit userland support.
I have 2 ARMv7 boards sitting on my desk. They're still extremely common in industry.
Which ones? ... If you don't mind me asking? At least the microprocessor if you can't tell me the board :-)
I probably need to dig up that info, because I keep remembering they were on dual-socket 64-core Zen 3 with PCIe 4 and DDR4. The 128-core Zen 4c with more memory channels and DDR5 should be able to push further.
Intel® Xeon® Platinum 8490H Processor
Total Cores 60
Total Threads 120
Max Turbo Frequency 3.50 GHz
Processor Base Frequency 1.90 GHz
AMD EPYC™ 9754
# of CPU Cores 128
# of Threads 256
Max. Boost Clock Up to 3.1GHz
All Core Boost Speed 3.1GHz
Base Clock 2.25GHz
1P / 2P
It's a lot easier to build such a machine if you have something to run on it, I suppose.
> Curious where this (rather large, yet still seemingly arbitrary) limit comes from.
It is Good Enough for now, while keeping various pre-allocated, statically created structures within reasonable size limits:
> Global and allocated arrays sized by MAXCPU result in excessive bloat on systems with lower core counts. In addition, some code used u_char (8 bits) to hold a CPU index, which is not valid if MAXCPU is greater than 256.
> A number of recent commits addressed these sorts of issues, including at least: […]
> The SMP system now supports up to 1024 cores on amd64 and arm64. Many kernel CPU sets are now dynamically allocated to avoid consuming excessive memory. The kernel cpuset ABI has been updated to support the higher limit. 76887e84be97 d1639e43c589 9051987e40c5 e0c6e8910898 (Sponsored by The FreeBSD Foundation)
Gotta have some limit, 4x the current limit of 256 seems reasonableish. Dual socket Epyc 9654 is 96 cores * 2 threads / core * 2 sockets = 384 threads. Intel says their Xeon Platinum 8490H can live on an 8 socket board, if you can find one (SuperMicro has one, no price listed ; not sure if this is really an 8 socket system, or if it's 4x dual-socket nodes in one chassis?); 60 * 2 * 8 = 960, so that's within the limit, and 8 socket boards are pretty difficult to find.
9754 is 128/256 now, so 256/512 for that.
That supermicro system is 8-way; it's 4 dual-socket motherboards but they're one system, hooked together by backplane boards. You can price supermicro's complete-system-only stuff (all of it now, alas) out on thinkmate or similar sites, but a minimal config (and you'd never buy that for a minimal config) hits around $60k.
Bitmaps for logical CPU cores and certain lock-free algorithms don't scale well to arbitrary high CPU counts e.g. reclaiming resources once in a while is O(n^2) or worse or the size of the lock structure is linear to the maximum number of cores etc.
The relevant parts of the ABI have been future proofed to allow raising the kernel CPU core count limit without breaking the syscall interface for systems with less cores than the existing limit.
I do this, and the only thing that sucks is that network speed is limited. Between host and guest I only get about 1.5 Gbps on a Threadripper 1950X.
NIC pass-through should work though, I already got NVMe pass-through working, so if I had a spare PCIe slot I'd do that with a 10G adapter.
Having to use a VM for Docker is no different from what macOS users have to do, and at that point why not just use a Mac?
Why not use a Mac for my ZFS-based NAS? Is that your question?
Do you even need a VM? FreeBSD has linux binary compatibility.
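Enabling the linuxulator is only a couple of commands — a sketch (the CentOS userland package name is from memory):

```sh
sysrc linux_enable="YES"
service linux start            # loads the modules, mounts linprocfs/linsysfs
pkg install linux_base-c7      # a minimal CentOS 7 userland under /compat/linux
/compat/linux/bin/uname -a     # Linux binaries now run natively, no VM
```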
Docker is container-based tech; the binary compatibility won't help.
I have a similar requirement and will be doing exactly this.
You could try SmartOS for native ZFS support + virtualization built in (including linux containers) + pain free upgrades. Not a BSD but shares a decent amount of values (and code). It's a candidate for my home server and so far I love it.
FreeBSD was another candidate, but just skimming through the docs, what's easy on other systems looks painful there. Want to start a VM? Here's a set of commands, different for every guest OS, with a bunch of unexplained options...
> If you're already tied to docker that's a great reason to stay with Linux.
I'd say the opposite. A great reason to run from Linux.
I'm not much of a docker fan but there are benefits to the docker ecosystem which you would lose completely by switching.
I'd caution you against pulling examples from other operating systems. I started dicking around with writing an interactive pf shell earlier this year (which somehow got me to where I am today, writing an XPath parser in Rust) and quickly learned that a) the documentation is often pretty sparse, especially for the API, and b) pf is all over the place (Solaris, macOS, FreeBSD, OpenBSD, DragonFly, pfSense, etc.), and each version has some pretty significant differences.
Every single one (including pfSense) has their own variant. From what I can tell FreeBSD's taken bigger steps to sync up with OpenBSD than the rest, certainly bigger than pfSense.
Thanks, I used that Digital Ocean tutorial, but it doesn't get into the ICMP filtering enough, which you need for IPv6. And the pseudo code you shared is nice, but again, IPv6 will not work with that. ufw comes with a base ruleset with like 100ish lines of complex ICMP filtering. It's difficult/impossible to expect everyone to be able to write something like that from scratch in a syntax like pf. I just wish there was a complete, good template out there, but I haven't found anything.
ICMPv6 was designed to be a bit more robust than the v4 variant and I believe it is considered OK to allow all.
I have six WANs at work - four FTTC 80/20Mbs-1, a 1Gbs-1 leased line and a 1000/300Mbs-1 FTTP job. All have IPv6 apart from the FTTP. I only push through the leased line /48 to inside but I do experiments with the IPv6 on the others. I have a /56 IPv6 at home too, for at least 10 years.
I have allowed all ICMPv6 on two out of the four FTTC lines and not noticed much difference. The other two only allow "useful" ICMP, where useful is similar to this: http://firehol.org/guides/icmpv6-recommendations/
Let your external router do the hard work. Your web server on the inside should allow all ICMPv6 in both directions. Just because it has a globally routeable IPv6 address(es) it is not on the outside. Your webserver's firewall is a host one, not an external router one. There is a big difference. Your webserver might be configured to think about itself only and your router's firewalling might consider everything from a high level.
I don't think you need a complicated policy on your webserver but one that stops you accidentally exposing, say, a MariaDB/MySQL to the outside world because you bind it to all interfaces instead of just ::1.
Have you considered putting your conf on GitHub?
One of my most popular repos (which isn't saying a lot) is a single config file.
Let me think about it... it's really massive and has comments like "*uck that if scrub on" or "set aggressive -> emailservers try and try again", and those are the most understandable comments, believe me.
All is mixed from highly reliable and fast connections to dial-up "industry" stuff.
However that would be a good motivation to clean that monster up...hmmm
Those are great but I don't see anything for a web server. Would just love a webserver that works with IPv6 and handles all the ICMP filtering like ufw does out of the box.
This is interesting. I was just playing around with a VM that had mirrored ZFS root and I wanted to test if I could boot from either mirror--it requires a bit of hand-holding since /etc/fstab seems to hard-code the gpt label of the boot device making it difficult to boot from the mirror without first changing that reference.
I didn't realize this and went down a rabbit hole of trying to recreate boot partitions which I found interesting.
With that said, I'm curious whether the upgrade procedure updates the bootcode on both drives in the mirror.
Slight tangent. The reason for playing around in the VM was that when I installed FreeBSD, I selected the ZFS-root auto-fs feature. It created a ZFS partition that spanned the entire disk, which I later decided I didn't want. My goal was to shrink the zroot, and I was able to accomplish that using zfs send/receive and re-creating the partition with the desired size. Fun exercise.
I need to revisit the installer to see if I'm able to provide any parameters to tune the size of the zroot. I really didn't want to create the filesystems manually from the installer shell.
>While iwlwifi supports all 802.11 a/b/g/n/ac/ax the compatibility code currently only supports 802.11 a/b/g modes. Support for 802.11 n/ac is to come. 802.11ax and 6Ghz support are planned.
I am reading it correctly that FreeBSD doesn't have 802.11n wifi support?
What exactly is the compatibility code?
I know. I was asking if it had been brought into the premade, mainline state.
How about zfs native encryption?
> There is a new zfskeys rc(8) service script, which allows for automatic decryption of ZFS datasets encrypted with ZFS native encryption during boot
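In practice that looks something like this (key path and dataset names are examples; zfskeys loads file-based keys, so `keylocation=prompt` datasets would still need manual unlocking):

```sh
# Create a raw 32-byte key and an encrypted dataset that reads it from a file
dd if=/dev/random of=/etc/zfs/keys/data.key bs=32 count=1
zfs create -o encryption=aes-256-gcm \
    -o keyformat=raw \
    -o keylocation=file:///etc/zfs/keys/data.key \
    zroot/data

# Let the rc script load keys and mount the datasets at boot
sysrc zfskeys_enable="YES"
```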
To be clear, me taking over the release engineering team a few days before the release announcement was entirely coincidental timing.
Regarding the first bullet, thanks. I just updated my post for more clarity.
I meant internet facing services (e.g. not referring to cron, etc).
Well still, what's running out of the box other than a mail daemon (which I agree with you about), and maybe sshd? (I think it asks you during setup for that one, but I'm not sure anymore)
No one said anything about entrenched systems that already use Linux or other systems. FreeBSD is an excellent OS to use for those same jobs that a lot of people and companies are missing out on by not giving it a look. Netflix and Whatsapp can't be too wrong in their choice.
> Netflix and Whatsapp can't be too wrong in their choice.
> No one said anything about entrenched systems that already use Linux or other systems.
Evaluating something that's not-Linux is neat and cool, I get it. But the reality is switching will incur a heavy up front (and potentially long-term depending on the candidate pool) cost and lots of long-term paper cuts.
I was just thinking about this again and here's an example (well, two) of why I would not recommend FreeBSD for a prod environment: third party software.
The ports tree has been an archaic mess for as long as I've been dicking around with FreeBSD (~2.2.2). Processors and disks have gotten fast enough that relying on make(1) isn't quite as painful as it was, but ok. To ease the pain, FreeBSD started offering binary packages a while ago via pkg(8).
Earlier this year I was evaluating upgrading 12 -> 13, so I fired up VirtualBox and installed 13.whatever. Out of the box pkg did not work. It started its bootstrap song and dance and fell flat on its face. Digging around on the forums came up with a work around to get everything bootstrapped and a working version of pkg installed, but still this was a known problem that made it out the door.
Meanwhile even with a working version of pkg, the FreeBSD mirrors are glacially slow. I've consistently gotten about 2Mbps max from them. Typically I'll get a few hundred kilobytes per second even on an un-throttled 10 Gbps connection with an Intel 825xx NIC.
So on the 12.x machine I set up varnish and pointed the jails at that. And all was well with the world. Until a few weeks ago when pkg again fell flat on its face. Turns out that somewhere along the way pkg went from "use SRV(!) records but fall back on CNAME/A" to "fail if there are no SRV records". Shame on me for not configuring pkg to ignore SRV records in the first place, but what the hell kind of breaking change is that in the middle of a product's lifecycle? That's 100% the kind of thing that should be synced up with the release of new major version of FreeBSD (e.g. 14), but doesn't because pkg straddles and blurs the boundaries between base system and 3rd party tools.
So. Yeah. FreeBSD's great for academic purposes. It's great if you want to push the limits of what's capable and you have staff who are competent kernel hackers. It's great if you can't use GPL licensed products. But for most things? It's a distraction.
Hasn't 32bit linux used 64 bit time for a while now? Or at least had it available.
Kinda, apps need to be ported to it, or your C library needs to be sneaky about it.
If anyone has a newer article, feel free to share.
Yeah, we considered following Linux's lead by creating a new "i386 with 64-bit time_t" ABI but frankly nobody was interested in doing a lot of work for probably very little benefit.
I'm cheating a bit because it's 2 of the same board, they're Zynq 7010s
I wouldn't be surprised if the Zynq is the most common ARMv7 in practice...
Binary compatibility is the key part. You can ignore a lot of what cgroups and namespaces give you and containers will still "work" (but without the insecure sandbox). Without the binary, you're out of luck.
> Just because it has a globally routeable IPv6 address(es) it is not on the outside. Your webserver's firewall is a host one, not an external router one.
Interesting, so with my VPS on Vultr, there'd be an external firewall that takes care of the messier filtering?
Do it! :)
Thanks! But does that mean ZFS native encryption is now supported in FreeBSD? Last I checked (a good while back) I got the impression geli(?) was strongly preferred and ZFS native encryption was generally discouraged?
Adding another anecdotal "works for me" data point.
I'm running it on 2 machines, encrypting certain datasets (no boot volumes).
Works for me :) I have per-dataset encryption and seems to work just fine.