
Category: General

The General category contains helpful reviews, tips and tricks, results of experiments, and other tidbits of information. Posts that aren’t specifically how-to guides or news articles will end up here along with informational posts that aren’t primarily opinion-based.

Posted by Arnon Erba in General.

Life as a sysadmin is constantly entertaining. Some days, even when you think you’ve accounted for every possible contingency, something happens that still manages to take you by surprise. Wednesday was one of those days.

I manage a few production Red Hat Enterprise Linux servers that, until Wednesday of this week, were all running RHEL 7. RHEL 7 is still well within its support window, but ever since RHEL 8 came out in May of last year I’ve been preparing to proactively upgrade my systems. By chance, I finished my preparations this week, so I scheduled an in-place rebuild of one of my less critical servers for Wednesday the 29th.

The Upgrade Begins

Because true hardware-based RAID controllers are blisteringly expensive, I like to run simple software RAID arrays with mdadm where possible. The RHEL installer makes it easy to place all your partitions — even the EFI system partition — on an mdadm array during the installation process, so I started my server rebuild by creating a few RAID 1 arrays on the server’s dual HDDs. Later, with the installation complete, I rebooted the server and was greeted by a fresh RHEL 8 login prompt at the console.
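For anyone doing this outside the installer, setting up a RAID 1 array by hand with mdadm looks roughly like this. The device and array names below are examples, not the partitions from my actual server:

    # Pair two partitions into a RAID 1 array (example device names).
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

    # If the array will hold the EFI system partition, metadata version 1.0
    # keeps the mdadm superblock at the end of the device so the firmware can
    # still read the FAT filesystem directly.
    mdadm --create /dev/md1 --level=1 --metadata=1.0 --raid-devices=2 /dev/sda1 /dev/sdb1

    # Record the arrays so they are assembled automatically at boot.
    mdadm --detail --scan >> /etc/mdadm.conf

A newly created array starts an initial sync in the background right away, which matters for what happened next.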

Shortly after that, things went sideways. I ran yum update to pull down security patches and discovered a kernel/firmware update and a GRUB2 update. During the update process, I noticed that the server had slowed to a crawl, so I checked /proc/mdstat and realized that mdadm was still building the RAID 1 arrays and was eating up all the bandwidth my HDDs could muster while doing so. Impatient, and eager to get out of the loud server room and back to a desk, I decided to reboot the server to apply the kernel update so I could finish setting things up over SSH.
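For reference, it’s easy to check how far along a resync is before pulling the trigger on a reboot. The array name here is just an example:

    # Show resync progress and the estimated time remaining for all arrays.
    cat /proc/mdstat

    # Or ask about a specific array.
    mdadm --detail /dev/md0

    # The kernel's rebuild speed limits can be inspected (and raised or
    # lowered) if the resync is starving everything else of disk bandwidth.
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

In practice an interrupted resync is usually harmless, since mdadm picks it back up where it left off, but checking first is cheaper than guessing later.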

No Boot?

Two minutes later, I was staring at a frozen BIOS splash screen. As I’d just installed a new network card, I immediately suspected hardware problems, so I powered the server down and checked things over. Nothing helped: The hardware seemed fine, but it still wouldn’t boot.

Mdadm is pretty resilient, but since I’d shut the server down mid-sync I hastily assumed I’d somehow broken my RAID setup. Because I hadn’t gotten very far post-installation, I decided to wipe the server and reinstall RHEL 8 to rule out any issues. This time, I let mdadm sit for an hour or so before I touched anything, and then patched and rebooted the server again. Cue the frozen BIOS splash screen.

In hindsight, the common factor was clearly the updates, but as I’d just updated my RHEL 8 development server the day before with no ill effects I didn’t immediately consider a bad update as a possibility. Instead, I reset the BIOS to factory defaults and reviewed all my settings. When that didn’t help, I rummaged through my drawer of spare parts and grabbed an unused NVMe SSD to replace the server’s frustratingly slow HDDs in case the drives or the RAID configuration were the source of the problem. After installing RHEL 8 on the new drive, I rebooted the server several times to verify everything worked before applying updates. Once again, everything was fine until I applied the GRUB2 updates.

Verifying the Problem

Faced with what now seemed to be a bootloader issue, I went back to my RHEL 8 development server and updated it again. Sure enough, a new GRUB2 update popped up, and when I rebooted after applying it I got stuck at a black screen. Confident that I’d narrowed the issue down to a bugged update, I reinstalled RHEL 8 one last time on my production server — this time, skipping the update step — and set about reinstalling software on it.

When I finished later that night, I got the Red Hat daily digest email summarizing the latest RHEL updates. As it turned out, Red Hat had released patches for the BootHole vulnerability just a few hours before I arrived on-site in the afternoon ready to rebuild my server. (For reference, the RHEL 8 patch is RHSA-2020:3216 and the RHEL 7 patch is RHSA-2020:3217.) I quickly disabled automatic updates on the rest of my servers and wrote up a hasty bug report at 10:15pm.
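How you turn off automatic updates depends on how they were enabled in the first place; with the stock tooling it typically comes down to a single command. The sketch below assumes dnf-automatic on RHEL 8 and yum-cron on RHEL 7:

    # RHEL 8: stop dnf-automatic from running on its systemd timer.
    systemctl disable --now dnf-automatic.timer

    # RHEL 7: stop the yum-cron service that applies nightly updates.
    systemctl disable --now yum-cron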

The Results

I woke up on Thursday morning to 50+ email notifications from Bugzilla and a tweet from @nixcraft linking to the bug report. As the day went on, it became apparent that RHEL 7 was also affected and certain Ubuntu systems were suffering from the fallout of a similar patch.

As of the writing of this post, it seems like the specific issue lies with shim rather than GRUB2 itself. Right now, Red Hat is advising that people avoid the broken updates, and they’ve published various workarounds that may come in handy if you’ve already applied them. For the moment, I still have automatic updates disabled, and I’m hoping that Red Hat will publish fixed versions of GRUB2 and shim soon.
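Until then, one way to keep patching everything else while skipping the affected packages is to exclude them at the package-manager level. The package globs below are illustrative; check Red Hat’s advisories for the exact list on your release:

    # Skip the bootloader-related packages for a single update run.
    dnf update --exclude=grub2\* --exclude=shim\* --exclude=fwupd\*

    # Or exclude them persistently until fixed builds are available
    # (remember to remove the line afterwards).
    echo "exclude=grub2* shim* fwupd*" >> /etc/dnf/dnf.conf

On RHEL 7 the same --exclude flags work with yum, and the persistent exclude line goes in /etc/yum.conf instead.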

In the end, I spent four hours reinstalling RHEL 8, submitted a hastily written bug report, and became the anonymous “user” mentioned in the first paragraph of an Ars Technica article:

Early this morning, an urgent bug showed up at Red Hat’s bugzilla bug tracker—a user discovered that the RHSA_2020:3216 grub2 security update and RHSA-2020:3218 kernel security update rendered an RHEL 8.2 system unbootable.

Posted by Arnon Erba in General.

You’ve probably heard of RFC 2324, the iconic 1998 April Fools’ joke that gave the world the Hyper Text Coffee Pot Control Protocol (HTCPCP/1.0):

Any attempt to brew coffee with a teapot should result in the error code “418 I’m a teapot”. The resulting entity body MAY be short and stout.

Some of the nerdier among us may even remember the IPv10 RFC draft, an elaborate piece of delusion or trolling still going strong after almost two years. Of course, we all know nothing helps reduce the number of competing standards like adding more competing standards [obligatory XKCD].

However, to locate true genius, we must peruse the list of April Fools’ Day RFCs and select one from April 1st, 1990. Yes, it’s none other than the one and only RFC 1149, aka IP over Avian Carriers (IPoAC). In perhaps the best form of proof that IP can be adapted to run over almost any physical link imaginable, RFC 1149 lays out the basics for a working IP-based network using carrier pigeons.

Really, no one can describe IPoAC better than its creator, David Waitzman:

The IP datagram is printed, on a small scroll of paper, in hexadecimal, with each octet separated by whitestuff and blackstuff. The scroll of paper is wrapped around one leg of the avian carrier. A band of duct tape is used to secure the datagram’s edges. The bandwidth is limited to the leg length.

If you haven’t read all of RFC 1149, it’s only two pages and is certainly worth the read. When you’re finished, you can read RFC 2549, David’s quality of service-enabled extension to the original IPoAC spec. I’ll leave you with this absolute gem from that follow-up RFC:

The ITU has offered . . . formal alignment with its corresponding technology, Penguins, but that won’t fly.

All jokes aside, this is a good reminder that anyone can submit their own RFC, and that you probably shouldn’t believe everything you read on the Internet.

Posted by Arnon Erba in General.

One of the hardest and most contentious steps of building a computer is applying thermal paste. While almost everything else is as simple as snapping connectors together, putting in screws, and hoping that your graphics card isn’t too big for your case, it’s hard to be sure that the cooling system of your PC is operating at maximum efficiency. Installation notwithstanding, I’ve always wondered two things about thermal paste:

  1. Whether reapplying old thermal paste after a few years is a good idea
  2. Whether third-party thermal compounds are better than the ones that come with pre-built PCs

With an old desktop sitting around my house and a few spare hours, I decided to answer both questions by setting up some tests and replacing the stock thermal paste.

Setting Up the Test

I tried to make the test as scientific as possible with the hope of getting a clear answer. The PC in question was purchased in 2012, so it’s a good example of an older machine that’s seen regular home use over the course of its life. Before starting the test, I had to clean out a vast amount of dust that had accumulated inside it over the years.

Computer: Lenovo H520s small form factor desktop
CPU: Intel Core i5-2320 at 3.00 GHz, turbo boost up to 3.30 GHz
Software: Speccy for temperature data and IntelBurnTest for load testing
Thermal Paste: Arctic Silver Céramique 2
Ambient Temperature: 19° C (66° F)

Speccy is certainly not the only piece of software that can be used to record CPU temperatures, but I’ve found it to be reasonably accurate in the past. Besides, the temperatures by themselves are not that important — I’m mainly interested in the difference in temperatures before and after replacing the thermal paste.

Baseline Tests

To establish a baseline, I tested the PC at a warm idle and then under load. To make sure the computer was sufficiently warm, I ran a few passes with Memtest86+. Memtest86+ doesn’t put much load on the CPU, but I just wanted to get the computer doing something so it didn’t have the unfair advantage of a cold start. After letting it run for a few hours, I rebooted directly into Windows and waited until the CPU was almost completely idle before recording the temperatures.

When comparing idle versus load temperatures, it’s important to keep in mind that the i5-2320 downclocks and undervolts itself when idle to save power. At idle, downclocked to 1.6 GHz on all four cores, I measured the average CPU temp as well as the general spread of temperatures per core.

Average CPU temperature at idle: 30° C
Temperature spread at idle: 28° C to 32° C

To establish a baseline at load, I ran IntelBurnTest on “high” (2048 MB of RAM) for 5 passes and recorded the maximum temperature from the final pass.

Average CPU temperature at load: 70° C
Temperature spread at load: 68° C to 71° C

It’s worth noting that this particular motherboard seems to only increase the CPU fan speed once the CPU reaches 70° C. I was surprised at how warm the chip had to get before the fan speed increased, but once it reached 70° C the temperatures seemed to stabilize.

Replacing the Thermal Paste

There are a number of different techniques for applying thermal paste. Some people suggest drawing a line across the CPU, while others suggest spreading the thermal paste out before installing the heatsink. I’m not a fan of either of those approaches, since the line can easily be squeezed over the edge of the CPU and air bubbles can be introduced into the paste by spreading it. Instead, I opted to use the tried-and-true “single dot in the center of the CPU” method.

First, however, I wanted to see what the original thermal paste looked like. Here’s what I saw when I detached the stock heatsink:

It’s not bad. The paste is spread out evenly and isn’t too thick. It also wasn’t dry in the slightest, even though some Internet forums claim that old paste dries up over time.

The paste was also spread evenly on the stock aluminum heatsink, as I expected:

With my curiosity satisfied, I cleaned the CPU and heatsink with isopropyl alcohol (92% concentration, the highest I can usually find in stores). I put on slightly more thermal paste than I generally do, but Arctic Silver Céramique 2 is advertised as non-conductive, so there’s little risk on the off chance that it does get somewhere it shouldn’t.

With the new thermal paste on, I re-installed the stock heatsink and pieced the computer back together. Careful inspection down the side of the heatsink revealed that the replacement paste had just barely reached the edge of the CPU, indicating that it had fully covered the CPU lid as intended.

Results

Arctic Silver’s site claims that Céramique 2 is capable of dropping CPU temperatures by “2 to 10 degrees centigrade”. They also claim that due to the nature of the paste it takes “a minimum of 25 hours and several thermal cycles” for it to reach maximum cooling efficiency. Unfortunately, I didn’t have 25 hours to wait for the paste to fully cure, so I ran the CPU through a couple thermal cycles and called it good. Feel free to discount my results because of this, but keep in mind I have tested the break-in period before and I don’t think it makes much of a difference.

I performed the same tests as with the stock paste; see the “Baseline Tests” section above for the methods I used. Without further ado, here are the results I measured at idle after replacing the stock thermal paste:

Average CPU temperature at idle: 25° C
Temperature spread at idle: 24° C to 27° C

And the results at load:

Average CPU temperature at load: 70° C
Temperature spread at load: 68° C to 72° C

To my surprise, the idle temperature dropped by a full 5° C, at least according to my measurements. However, the load temperature didn’t change at all.

Five degrees at idle could be chalked up to measurement error, but I’d like to think that replacing the paste had some effect. On the other hand, I have a plausible but disappointing explanation for why the load temperatures didn’t change: the fan speed on this particular desktop is dynamic and seems to only increase when the CPU hits 70° C. Without a way to constrain the fan to a certain speed, it did its job and kept the CPU from going too far over 70° C. For what it’s worth, the points at which the fan speed increased seemed identical before and after replacing the thermal paste, indicating that the CPU wasn’t heating up much faster either way.

In short: was it worth it? Not to me, as it took several hours and a lot of effort and resources for minimal gain. It seems like, at least on this particular computer, there’s little improvement to be gained by replacing the stock thermal paste.

Posted by Arnon Erba in General.

Pop quiz: do you use the cloud? Even if you don’t realize it, the answer is almost certainly “yes”. Cloud computing has become a ubiquitous part of modern-day computer usage. However, many people don’t know much about it.

Google defines “cloud computing” as the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer. That definition is still fairly technical, so let’s break it down.

When you edit a file locally, the file is stored and processed on your computer. This works fairly well, assuming you only have one computer and don’t need to access your file from anywhere else or share it with collaborators. However, if your computer is turned off, you can’t use the file without making a copy of it and placing it on another computer. In today’s world of smartphones and mobile devices, it’s crucial to have access to the same data from multiple locations without having to create redundant copies of files and deal with the hassle of moving them back and forth. The solution is to store the files in a separate, universal location and access those files across the Internet. This separate location takes the form of large, powerful computers run by companies such as Google, Microsoft, Apple, and Amazon, and is commonly referred to as the cloud.

A good example of the cloud in everyday life is modern email. If you use email on both your phone and your computer, and your inbox contains the same emails no matter what device you’re on, you’re most likely using the cloud. The standard configuration for Gmail, Yahoo! Mail, or other email accounts is to store all your emails on your email provider’s servers and to have your devices download temporary copies of them to view. In this example, all your email is stored in the cloud.

Another commonly used cloud service is Google Drive. Google Drive is a service that allows users to upload, edit, and share documents, pictures, and videos. When you use Google Drive, all your files are stored on Google’s cloud servers and are accessible when you sign in to Google Drive with your password.

iCloud on your iPhone or iPad is also a cloud service. iCloud allows you to store photos, backups, and other settings in the cloud so that they are accessible on all your Apple devices. If you use iCloud, you’re using Apple’s cloud servers to store your data.

Other examples include Pandora, Google Play Music, Dropbox, Microsoft Office 365, YouTube, and almost any other service that involves streaming, downloading, or storing content on the Internet.

The name “cloud computing” has nothing to do with the weather, as the term stems from the abstract depiction of remote servers or the Internet in general as a large, ambiguous cloud. However, that doesn’t mean that weather has no effect on the cloud. Since the cloud relies on massive physical computers to store data, a large storm or natural disaster could physically affect these servers. In 2012, Hurricane Sandy partially flooded the server farm of a company called Datagram, Inc. Datagram’s servers ran a number of popular websites, such as Lifehacker, Gizmodo, and Huffington Post, and these websites temporarily went offline as a result of the storm.

Posted by Arnon Erba in General.

If you’re ready to move up from Notepad for editing code, give Brackets a try. It’s completely free and is developed by Adobe. It provides a minimal but useful environment and a beautiful interface, and is designed to integrate with Adobe Extract. If you’d prefer the version without Extract integration, you can grab that from the Brackets site as well.
