
Posts Tagged #Linux

Posted on by Arnon Erba in News

On Monday, CentOS 7.6 (1810) became generally available for download. CentOS 7.6 follows the October release of Red Hat Enterprise Linux 7.6, as CentOS is the open source community-supported rebuild of Red Hat Enterprise Linux (RHEL).

A list of changes, deprecated features, and known issues can be found in the release notes for 7.6. Notably, the golang package is no longer included in the default CentOS repositories, and instead must be installed from the EPEL testing repository as discussed in the release notes.

You can trigger an upgrade to CentOS 7.6 in one step by running:

yum clean all && yum update

The upgrade requires a reboot to load the new kernel version. After upgrading, you can check your new distribution and kernel versions by running cat /etc/system-release and uname -r, respectively.

Posted on by Arnon Erba in News

A little over a month after the release of Red Hat Enterprise Linux 7.5, CentOS Linux 7.5 (1804) is now generally available. Releases of CentOS, the free Red Hat Enterprise Linux (RHEL) clone, usually lag behind those of its enterprise counterpart, but are identical as far as package selection and day-to-day administration are concerned. CentOS, like RHEL, is highly regarded for its stability and enterprise-readiness, and it fills the gap between the stable but license-restricted releases of RHEL and the fast-paced releases of Fedora.

CentOS 7.5 is available as an easy in-place upgrade for existing systems and brings an updated kernel and dozens of updated packages. Since 7.5 is a minor release, upgrading an existing system is as easy as:

yum clean all && yum update

After updating, you’ll want to reboot to take advantage of the new kernel and to restart any services that have been modified. If you’re provisioning a new system, it’s a great time to go grab some updated installation media. If you’re upgrading an existing system, you can always check your CentOS or RHEL release version with cat /etc/system-release and your kernel version with uname -r.

Posted on by Arnon Erba in News

Ubuntu 18.04 LTS (Bionic Beaver) is out today, after a short delay caused by a last-minute bug. The 18.04 release has been highly anticipated, since it is the first long-term-support (LTS) release since the switch back to GNOME as the default desktop environment for Ubuntu and the abandonment of Unity. Along with the glamour of a new desktop environment, 18.04 brings a new kernel, updated software packages, five years of support, and a multitude of other improvements. The new release also brings some controversy in the form of a new network manager. You can read the full release notes here, and you can grab Xubuntu 18.04, Kubuntu 18.04, Lubuntu 18.04, or any of the multitude of similarly updated Ubuntu variants.

The author has not tested 18.04 yet, but intends to do so as soon as he is finished messing about with CentOS and can be bothered to spin up a virtual machine.

Posted on by Arnon Erba in How-To Guides

For lack of a better introduction, this post is about diagnosing and fixing the “Value Too Large for Defined Data Type” error in Postfix on Linux. A few weeks ago, shortly after deploying a new email server, I got a bounce notification with the following error:

Diagnostic-Code: X-Postfix; cannot update mailbox /var/spool/mail/username for user username. cannot open file: Value too large for defined data type

Checking the mail log revealed the following, more informative lines:

postfix/local[8354]: warning: this program was built for 32-bit file handles, but some number does not fit in 32 bits

postfix/local[8354]: warning: possible solution: recompile in 64-bit mode, or recompile in 32-bit mode with 'large file' support

postfix/local[8354]: E8FAA86251F: to=username, relay=local, delay=0.01, delays=0/0/0/0.01, dsn=5.2.0, status=bounced (cannot update mailbox /var/spool/mail/username for user username. cannot open file: Value too large for defined data type)

Diagnosing the Problem

This is not a pretty error, and it stems from the fact that 32-bit binary numbers have a fixed maximum value that they can hold. If you overflow this maximum value, bad things will happen (such as mail bouncing from your email server). In modern 64-bit programs, we don’t run into these constraints as much, because 64-bit binary numbers can hold much larger values than 32-bit ones. However, it turned out that the affected email server was running a 32-bit version of Postfix, which meant it was stuck with the limitations of 32-bit binary numbers.

When a program (e.g. Postfix) opens a file (e.g. a user’s inbox) in Linux, some information about the file is passed to the program, which must store that information in some sort of data structure. In this case, one of the pieces of information is the size of the file, which the 32-bit version of Postfix stores in a 32-bit signed integer. The maximum value that can be stored in a 32-bit signed integer happens to be 2,147,483,647, which is one byte shy of 2 GB. (Check out the last section of this post for how that number is derived.) This means that the largest file size that can be “understood” by the 32-bit version of Postfix is 2 GB.

This works fine until Postfix encounters a file larger than 2 GB. For example, a 3 GB file has a size of 3,221,225,472 bytes. It is not physically possible to express that value in the 32-bit signed integer that Postfix uses, so the file cannot be opened because its size cannot be represented. This is the root cause of the “this program was built for 32-bit file handles” warning in the mail log.
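To see the arithmetic behind the failure, here is a small illustrative sketch in Python (Postfix itself is written in C; this just mirrors the math, using the hypothetical 3 GB mailbox from the example above):

```python
INT32_MAX = 2**31 - 1       # 2,147,483,647: the largest value a signed 32-bit int can hold
file_size = 3 * 1024**3     # a 3 GB mailbox: 3,221,225,472 bytes

# The size simply does not fit in a signed 32-bit integer:
print(file_size > INT32_MAX)    # True

# Reinterpreting the low 32 bits as a signed value shows the wraparound
# a 32-bit program would see:
wrapped = file_size & 0xFFFFFFFF
if wrapped > INT32_MAX:
    wrapped -= 2**32
print(wrapped)                  # -1073741824
```

A size that comes out negative (or truncated) is meaningless to the program, which is why the kernel refuses the operation with EOVERFLOW, the errno behind “Value too large for defined data type”.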

The GNU Core Utilities FAQ has a paragraph that confirms what “Value too large for defined data type” means:

The message “Value too large for defined data type” is a system error message reported when an operation on a large file is attempted using a non-large file data type. Large files are defined as anything larger than a signed 32-bit integer, or stated differently, larger than 2GB.

Confirming the Issue

I verified that the issue was caused by a 32-bit version of Postfix with the following two steps:

  1. By running the file command on the main Postfix binary, which yielded the following output: ELF 32-bit LSB executable, Intel 80386.
  2. By verifying that the size of the mail spool file that caused the error was larger than 2 GB.

Additionally, the problem affected a few other users, all of whom had mail spool files larger than 2 GB.

Fixing Postfix

Unfortunately, there’s no configuration parameter that you can change or set that will allow a 32-bit installation of Postfix to overcome the innate 32-bit size limit for files. There is an option — mailbox_size_limit — that controls the maximum mailbox size that Postfix will allow, but that is unrelated to the file size limitation imposed by running a 32-bit application.

Given that the issue is caused by running a 32-bit version of Postfix, fixing the problem means recompiling a 64-bit version of Postfix or installing a 64-bit version directly. In my case, the 32-bit version had been installed by a piece of proprietary software, and I had to switch to a different installation of Postfix entirely.

Running file on the fresh installation of Postfix returned ELF 64-bit LSB shared object, x86-64.

Bonus Section: What Determines the Max Size of a 32-bit Signed Integer?

Traditional 32-bit programs are unable to open files larger than 2 GB due to a limitation imposed by the nature of 32-bit signed integers. For reference, a signed integer is a binary number that is stored using a method that indicates whether the number is positive or negative. The opposite of a signed integer is an unsigned integer. Unsigned integers contain no information about the sign of the number and therefore can only be positive.

There are a few different methods that can be used to store signed integers, but the most common method is two’s complement because of the various benefits it provides. Using two’s complement limits the maximum possible value of an integer, as we can see in the following example:

A 32-bit integer gives us 32 binary bits to use for storing data. If you store the number “zero” in a 32-bit unsigned integer, you would get something like this:

00000000 00000000 00000000 00000000

If we set all the binary digits to 1, we get the following number:

11111111 11111111 11111111 11111111

Converted to decimal, this number is 2^32 - 1, or 4,294,967,295. The value is 2^32 - 1 instead of 2^32 because counting starts at zero: 32 bits can represent 2^32 distinct values, but since zero is one of them, the maximum value is 2^32 - 1. This happens to be the maximum size of an unsigned 32-bit integer.

However, we are concerned with the maximum value of a signed integer. If we’re using two’s complement to store the integer, the first bit is reserved to indicate the sign:

10000000 00000000 00000000 00000000

Because the leftmost bit is reserved for the sign, the maximum value is reduced to 2^31 - 1, or 2,147,483,647.
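The same arithmetic can be checked in a couple of lines of Python, used here purely as a calculator:

```python
# All 32 bits set to 1: the maximum unsigned 32-bit value.
unsigned_max = int("1" * 32, 2)
print(unsigned_max)            # 4294967295, i.e. 2**32 - 1

# Reserve the leftmost bit for the sign, leaving 31 value bits,
# and the maximum drops to 2**31 - 1.
signed_max = int("1" * 31, 2)
print(signed_max)              # 2147483647, i.e. 2**31 - 1
```

That second number is exactly the 2 GB ceiling that tripped up the 32-bit Postfix installation above.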

Posted on by Arnon Erba in Server Logs Explained

(Editor’s note: This post has been updated since publication.)

A couple weeks ago, I covered what a WordPress brute-force attack looks like. However, you may have realized that trying an unlimited number of passwords is futile if you don’t know any valid usernames to guess passwords for. Fortunately for crackers, there’s a simple way to abuse the WordPress “pretty permalinks” feature to obtain valid usernames for a WordPress installation. Fortunately for us, there’s a simple way to block this with Nginx.

The Logs

Like a brute-force attack, a user enumeration attempt is usually pretty easy to spot. The logs usually start out like this:

203.0.113.42 - - [23/Jun/2016:17:04:11 -0700] "GET /?author=1 HTTP/1.1" 302 154 "-" "-"

And then continue like this…

203.0.113.42 - - [23/Jun/2016:17:04:12 -0700] "GET /?author=2 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:13 -0700] "GET /?author=3 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:15 -0700] "GET /?author=4 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:16 -0700] "GET /?author=5 HTTP/1.1" 302 154 "-" "-"

…until the cracker gives up.

203.0.113.42 - - [23/Jun/2016:17:04:18 -0700] "GET /?author=6 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:19 -0700] "GET /?author=7 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:20 -0700] "GET /?author=8 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:22 -0700] "GET /?author=9 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:23 -0700] "GET /?author=10 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:25 -0700] "GET /?author=11 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:26 -0700] "GET /?author=12 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:28 -0700] "GET /?author=13 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:29 -0700] "GET /?author=14 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:30 -0700] "GET /?author=15 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:32 -0700] "GET /?author=16 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:33 -0700] "GET /?author=17 HTTP/1.1" 302 154 "-" "-"

Why This Works

By default, WordPress uses query strings as permalinks, such as:

http://example.com/?p=123

This example permalink would display a post with the ID number “123”. The post ID is generated when a new post is created. Query strings make for “ugly” permalinks, however, so WordPress allows you to enable “pretty permalinks” using Apache mod_rewrite or a custom try_files directive in Nginx. With “pretty permalinks” enabled, WordPress performs an HTTP 301 redirect from the “ugly” permalink to the “pretty permalink” configured on the Settings>Permalinks screen.

WordPress doesn’t just have IDs for posts, though. Every WordPress user, or author, has a unique ID that maps to their archive page, which is a list of all the posts that they have created. “Ugly” author permalinks look like:

http://example.com/?author=1

When pretty permalinks are enabled, the author archive page looks like:

http://example.com/author/username

This reveals the author’s WordPress username. A simple script can easily enumerate all the usernames on a WordPress site by trying ?author= with sequential numbers, as we saw in the log excerpts above.
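An enumeration loop is only a few lines of code. The sketch below is purely illustrative: it builds and prints the probe URLs rather than sending them, and example.com and the ID range are placeholders:

```python
from urllib.parse import urlencode

def author_probe_url(base_url: str, author_id: int) -> str:
    """Build the "ugly" author permalink an enumeration script would request."""
    return f"{base_url}/?{urlencode({'author': author_id})}"

# Probing sequential author IDs, exactly as in the log excerpts above:
for i in range(1, 4):
    print(author_probe_url("http://example.com", i))
# http://example.com/?author=1
# http://example.com/?author=2
# http://example.com/?author=3
```

Each request that hits an existing author ID gets redirected to the “pretty” archive URL, handing the username to whoever follows the redirect.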

Mitigating WordPress User Enumeration Attempts

We can block user enumeration on two levels: by redirecting the “ugly” permalinks, or by redirecting the /author/ pages entirely. Keep in mind that even if you disable the /author/ pages, your username can be discovered through other methods, and you should assume it is publicly available knowledge. However, we can make it difficult for the public to obtain that knowledge.

Disable query-string based user enumeration

A simple if statement works to disable user enumeration using query strings (or “ugly” permalinks). This is a “safe” if statement in Nginx (see the infamous If Is Evil page) since we are using it with a return statement.

if ($args ~ "^/?author=([0-9]*)") {
        return 302 $scheme://$server_name;
}

This code uses a simple regex, or regular expression, to match any request whose query string is author= followed by a number. Here’s how it works:

$args is an Nginx variable holding the query string (everything after the ?)
~ indicates that we want Nginx to perform a case-sensitive regex match using the regular expression inside the double quotation marks
^ (the caret) anchors the match at the beginning of the query string
/? matches an optional literal slash, so the pattern effectively begins matching at author=
author= is the fixed part of the pattern
([0-9]*) is a capturing group that matches any run of the digits 0 through 9

The return statement then redirects any URIs that fit the pattern.
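Nginx evaluates the pattern with its own PCRE engine, but you can sanity-check the same regex against sample query strings with Python’s re module (remember that $args holds the query string without the leading ?):

```python
import re

# The same pattern used in the Nginx `if` block above.
pattern = re.compile(r"^/?author=([0-9]*)")

print(bool(pattern.match("author=1")))    # True: enumeration probe, redirected
print(bool(pattern.match("author=42")))   # True: redirected
print(bool(pattern.match("p=123")))       # False: normal permalinks pass through
```

Note that the /? is doing no real work here, since a query string never begins with a slash; the pattern effectively matches anything starting with author=.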

Disable WordPress author pages entirely

We can add a simple location block to disable the author pages entirely. This solution is a bit redundant, because you would have to already know the author’s username to access their /author/ archive page, but this is useful if you don’t want author archive pages on your blog for some reason.

Note: this solution, by itself, does not prevent user enumeration, because the intermediary step between the query string and the author archive page pretty permalink will not be hidden. In other words, the query string will redirect to the archive page, revealing the username, and then will redirect based on the code below.

location ~ ^/author/(.*)$ {
        return 302 $scheme://$server_name;
}

~ starts a case-sensitive regex match, like above
^ anchors the match at the beginning of the path
/author/ indicates we want paths beginning with /author/ to be matched
(.*) is a capturing group that matches any sequence of characters (except newlines)
$ marks the end of the path

If a URI is matched, it is redirected to the root server name using return, like above.

Posted on by Arnon Erba in Op-Ed

A couple of days ago, an article titled “Here’s another really great reason to never touch Linux” rolled through my Apple News feed. I clicked on it, expecting to see a breakdown of some massive vulnerability or maybe just a good rant about someone not being able to find drivers for their AMD graphics card. However, as I started reading, I found myself in the middle of one of the most dangerously misleading clickbait articles I’ve read this entire year.

If you haven’t seen the article I would encourage you to go read it, if only for context, and then come back here to see what makes it so dangerously inaccurate.

The author starts off with a cheap shot at Linus Torvalds, dismisses the massive Linux community as “geeks”, and then reveals that the article is actually about the recent hack of Canonical’s highly popular Ubuntu Forums, which really has nothing to do with Linux at all. The Ubuntu Forums are a community-driven space to discuss Ubuntu, a popular distribution of Linux. I’m not sure whether or not the author understands the difference between Ubuntu, a distribution, and Linux, the open-source operating system started by Linus Torvalds in 1991, but the distinction is important. Linux is different from macOS or Windows in that anyone can use the Linux kernel and build their own distribution. A distribution is just a collection of different bits – software utilities, graphical user interfaces, and more – that add user functionality. Ubuntu is just one popular, open-source distribution.

Regardless, the forums hack has nothing to do with Ubuntu, much less Linux as a whole, aside from the fact that the forums are for discussing Ubuntu. In fact, the security flaw that allowed a hacker to access the Ubuntu Forums database exists in an old version of a piece of software called Forumrunner, an add-on that powers parts of the forums. Just to be clear: absolutely nothing related to the Linux kernel or any part of the Linux OS was breached or exploited in any way. Canonical explains this in a blog post. The hack of the Ubuntu Forums is solely a commentary on Canonical’s security practices, and Canonical is just a company and is in no way in charge of the Linux project.

Despite these facts, the author of the above-mentioned article seems to consider the hack of an isolated Linux forum a valid reason not to use Linux. That’s a bit like saying the LinkedIn password leak is “another really great reason not to have a job”.

There are other inaccuracies and irrelevant comments in the article as well, but mainly it’s just downright misleading. As a system administrator and long-time Linux user, I can easily spot the article as clickbait, but someone unfamiliar with Linux could easily get the wrong impression. If you’re going to scare people away from Linux, at least write about graphics drivers.