
The Ars Technica System Guide, Winter 2019: The one about the servers


In the last Ars System Guide roughly one year ago, we took a slight detour from our long-running series. Rather than recommending the latest components for a particular niche like gaming or home entertainment PCs, we broadened our scope, focusing on ideology rather than instruction, and outlined what to look for when building a great desktop PC.

This time around, we’re playing the hits again. The Winter 2019 Ars System Guide returns to its roots: showing readers three real-world system builds we like at this precise moment in time. But instead of general-performance desktops, we’re going to focus specifically on building some servers.

Naturally, this raises a particular question: “What’s a server for, then?” Let’s get a bit of theory out of the way before diving into the actual builds.

Note: Ars Technica may earn compensation for sales from links on this post through affiliate programs.

The difference between desktops and servers

A desktop PC’s goal is to keep a human who’s sitting in front of it and pounding away on the keyboard and mouse happy. This forces the desktop PC to be a generalist—it’s got to be pretty good at everything—and, at the same time, necessarily shifts its focus away from reliability and maintainability. (We do not expect end users to service redundant disk arrays—or much of anything in terms of skilled maintenance, for that matter.)

A server, on the other hand, tends to have a more tightly focused job. The most common servers are storage servers: they keep collections of simple, “flat” files available for lots of people and their desktops to access. (This line gets blurry once the cloud comes into play; most Web-enabled services are a tightly integrated blend of storage, database, and application services.)

Although there are servers that don’t focus much on their own storage—such as dedicated application servers and hypervisors with filesystems served over iSCSI or NFS from other, equally dedicated storage-only servers—that’s not what we’re going to build. We want more general-purpose servers that can stand on their own and do a good job with most server-type workloads. They’ll need really good storage hardware and filesystems to reliably and rapidly store and retrieve data; decent CPUs to avoid bogging down on the Web or database applications they might need to run; and plenty of RAM to cache the filesystems and avoid loading up the actual disks any more than necessary.

If you’ve got an old but reasonably powerful desktop machine, you shouldn’t let its lack of ECC RAM keep you from recycling it as a small server. But we’re building a new server, so we’re going to draw a line in the sand and say that it has to use ECC. ECC memory helps prevent data from being corrupted and programs from crashing; it’s a little harder to find and a little more expensive than desktop memory, but not by a whole lot. In my opinion, it’s kinda criminal that every modern PC isn’t designed to use ECC RAM. Unfortunately, if designing systems without ECC is a crime, our entire consumer computing industry is a big pack of criminals.
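If you're not sure whether a Linux box you already own is actually using ECC, the kernel's EDAC subsystem is one place to look. This is a minimal sketch, not a definitive check: the sysfs paths below are the standard EDAC locations, but whether a memory controller shows up there depends on your CPU, board, and BIOS settings, and many consumer boards never expose one even when the hardware could support it.

```shell
# Check whether the Linux EDAC subsystem reports an active ECC memory
# controller. The sysfs path below is the standard EDAC location; a
# missing directory means ECC is absent, disabled, or simply unreported.
if [ -d /sys/devices/system/edac/mc/mc0 ]; then
    result="ECC reporting active: $(cat /sys/devices/system/edac/mc/mc0/mc_name)"
else
    result="no EDAC memory controller found (ECC absent, disabled, or unreported)"
fi
echo "$result"
```

Tools like `dmidecode -t memory` (run as root) can give a second opinion by reporting what the DIMMs themselves claim to support.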

The difference between a server and a NAS

A Network Attached Storage appliance—or NAS—looks a lot like a server at first blush. It’s a very specialized device designed to allow end users to stuff it full of a bunch of physical disks and have a specialized onboard operating system automatically find them, configure them, and dump them into a (hopefully) redundant array with little to no sysadmin oversight required. A typical NAS doesn’t and can’t serve user applications or databases; it’s only intended to store simple, flat files with as little muss and fuss as possible.

NAS devices are also typically underwhelming in performance. They’re built to a very narrow specification that favors anemic CPUs and as little RAM as possible, which means it’s a cinch to make them fall flat when presented with challenging workloads that a beefier, more general-purpose server might handle with ease. Their tight focus on ease of use and hands-off maintenance is also a double-edged sword that can be intensely frustrating to more technical folks, since these devices are typically sharply limited in configurability.

What our server builds are meant for

All three of the builds we’re going to show you are general-purpose x86-64 builds. You won’t need specialized operating systems to run them, and you won’t be limited in what you can or can’t do with them. If you’re mostly focused on storing the family’s files or backups, you might choose a storage-oriented distribution like FreeNAS or NAS4Free, both of which offer robust, uber-reliable ZFS filesystems with capable, built-in Web administration interfaces. If you want real flexibility, you could instead focus on virtualization—either using a specialized distro like Proxmox, or starting from the ground up with a general-purpose Linux distro like Ubuntu.
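If you go the general-purpose route, getting a redundant ZFS pool running on Ubuntu takes only a few commands. This is a sketch under assumptions: the pool name `tank`, the dataset name, and the `/dev/disk/by-id/...` paths are placeholders for your own disks, and the commands assume two spare drives you're willing to wipe.

```shell
# Sketch of a basic mirrored ZFS pool on Ubuntu. Pool name "tank" and the
# by-id device paths are placeholders -- substitute your own disks.
# A two-disk mirror survives the loss of either disk.
sudo apt install zfsutils-linux

# Create the mirror from whole disks; /dev/disk/by-id paths are stable
# across reboots, unlike /dev/sdX names.
sudo zpool create tank mirror \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Carve out a dataset for shared files, with cheap inline compression.
sudo zfs create -o compression=lz4 tank/shares

# Verify pool health.
zpool status tank
```

FreeNAS and Proxmox wrap this same kind of setup in a Web UI, so which road you take is mostly a question of how much you want to drive from the command line.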

(Virtualization traditionalists might begin with ESXi, XenServer, or even Windows 10 with Hyper-V, but I don’t personally recommend it—starting out that way means giving up on ZFS storage.)

You could also go really, really old-school and just install the operating system of your choice directly on the bare metal and compute like it’s 1999. But if you avoid advanced storage and modern virtualization both, you’re wasting the potential of what your server can actually do… and making a lot more work (and a lot less maintainability) for yourself in the long run.
