
Posts Tagged #Linux

Posted on by Arnon Erba in How-To Guides

Ubuntu has been using update-motd as a MOTD (Message of the Day) generator for several years. Some of the default messages — such as the number of available security patches — can be helpful, but not everyone likes being greeted by a barrage of text every time they log in to their server. In this article, we’ll explore how to adjust, disable, or replace the dynamic MOTD in Ubuntu.

Before You Begin

If you’d rather work with update-motd than turn it off, detailed documentation for changing its output is available in the man page for update-motd. Essentially, the dynamic MOTD is generated by a collection of executable scripts found in the /etc/update-motd.d/ directory. These scripts can be updated, removed, or reordered, and new scripts can be added.
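
For example, assuming a stock Ubuntu install, you can list the scripts and disable an individual one by removing its execute bit. The script name below (10-help-text) is a typical default and may differ on your system:

# See which scripts currently build the dynamic MOTD
ls /etc/update-motd.d/

# Stop a single script from running by removing its execute bit
sudo chmod -x /etc/update-motd.d/10-help-text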

Disabling the Dynamic MOTD

While Ubuntu does not provide a straightforward way to remove update-motd, it’s possible to disable it by adjusting a few PAM options. Two lines, found in both /etc/pam.d/login and /etc/pam.d/sshd, cause update-motd to run on login:

session optional pam_motd.so motd=/run/motd.dynamic
session optional pam_motd.so noupdate

Commenting out these lines in both files will prevent the pam_motd.so module from being loaded and will disable the dynamic MOTD.
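
After editing, the relevant lines in /etc/pam.d/login and /etc/pam.d/sshd should look like this:

# session optional pam_motd.so motd=/run/motd.dynamic
# session optional pam_motd.so noupdate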

Bonus Section: Enabling a Static MOTD

If you still want a message printed to the console on login, you can fall back to a static MOTD. Per the man page for sshd_config, OpenSSH can easily be configured to display a static MOTD:

PrintMotd
Specifies whether sshd should print /etc/motd when a user logs in interactively. (On some systems it is also printed by the shell, /etc/profile, or equivalent.) The default is “yes”.

Ubuntu disables this option by default and incorporates /etc/motd into its dynamic generator, but we can re-enable the option to make /etc/motd work again. Add or uncomment the following line in /etc/ssh/sshd_config and restart the OpenSSH daemon to have OpenSSH print /etc/motd on login:

PrintMotd yes
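
If you haven't customized /etc/motd before, a quick way to test the change is to drop a placeholder message into the file and restart the daemon. The message below is just an example, and the service is named ssh on Ubuntu (it may be sshd on other distributions):

# Write a placeholder static MOTD
echo "Welcome to this server" | sudo tee /etc/motd

# Restart OpenSSH so the new PrintMotd setting takes effect
sudo systemctl restart ssh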


Posted on by Arnon Erba in News

On Monday, CentOS 7.6 (1810) became generally available for download. CentOS 7.6 follows the October release of Red Hat Enterprise Linux 7.6, as CentOS is the open source community-supported rebuild of Red Hat Enterprise Linux (RHEL).

A list of changes, deprecated features, and known issues can be found in the release notes for 7.6. Notably, the golang package is no longer included in the default CentOS repositories, and instead must be installed from the EPEL testing repository as discussed in the release notes.

You can trigger an upgrade to CentOS 7.6 in one step by running:

yum clean all && yum update

The upgrade requires a reboot to load the new kernel version. After upgrading, you can check your new distribution and kernel versions by running cat /etc/system-release and uname -r, respectively.
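
For reference, the output on an upgraded system should look something like the following, though the exact version strings will vary with future point releases and kernel updates:

cat /etc/system-release
# CentOS Linux release 7.6.1810 (Core)

uname -r
# 3.10.0-957.el7.x86_64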

Posted on by Arnon Erba in News

A little over a month after the release of Red Hat Enterprise Linux 7.5, CentOS Linux 7.5 (1804) is now generally available. Releases of CentOS, the free Red Hat Enterprise Linux (RHEL) clone, usually lag behind the releases of its enterprise counterpart, but are identical as far as package selection and day-to-day use and administration are concerned. CentOS, like RHEL, is highly regarded for its stability and enterprise-readiness, and it fills the gap between the stable but license-restricted releases of RHEL and the fast-paced releases of Fedora.

CentOS 7.5 is available as an easy in-place upgrade for existing systems and brings an updated kernel and dozens of updated packages. Since 7.5 is a minor release, upgrading an existing system is as easy as:

yum clean all && yum update

After updating, you’ll want to reboot to take advantage of the new kernel and to restart any services that have been modified. If you’re provisioning a new system, it’s a great time to go grab some updated installation media. If you’re upgrading an existing system, you can always check your CentOS or RHEL release version with cat /etc/system-release and your kernel version with uname -r.

Posted on by Arnon Erba in News

Ubuntu 18.04 LTS (Bionic Beaver) is out today, after a short delay caused by a last-minute bug. The 18.04 release has been highly anticipated, since it is the first long-term-support (LTS) release since the switch back to GNOME as the default desktop environment for Ubuntu and the abandonment of Unity. Along with the glamour of a new desktop environment, 18.04 brings a new kernel, updated software packages, five years of support, and a multitude of other improvements. The new release also brings some controversy in the form of a new network manager. You can read the full release notes here, and you can grab Xubuntu 18.04, Kubuntu 18.04, Lubuntu 18.04, or any of the other similarly updated Ubuntu variants.

The author has not tested 18.04 yet, but intends to do so as soon as he is finished messing about with CentOS and can be bothered to spin up a virtual machine.

Posted on by Arnon Erba in How-To Guides

For lack of a better introduction, this post is about diagnosing and fixing the “Value Too Large for Defined Data Type” error in Postfix on Linux. A few weeks ago, shortly after deploying a new email server, I got a bounce notification with the following error:

Diagnostic-Code: X-Postfix; cannot update mailbox /var/spool/mail/username for user username. cannot open file: Value too large for defined data type

Checking the mail log revealed the following, more informative lines:

postfix/local[8354]: warning: this program was built for 32-bit file handles, but some number does not fit in 32 bits

postfix/local[8354]: warning: possible solution: recompile in 64-bit mode, or recompile in 32-bit mode with 'large file' support

postfix/local[8354]: E8FAA86251F: to=username, relay=local, delay=0.01, delays=0/0/0/0.01, dsn=5.2.0, status=bounced (cannot update mailbox /var/spool/mail/username for user username. cannot open file: Value too large for defined data type)

Diagnosing the Problem

This is not a pretty error, and it stems from the fact that 32-bit binary numbers have a fixed maximum value that they can hold. If you overflow this maximum value, bad things will happen (such as mail bouncing from your email server). In modern 64-bit programs, we don’t run into these constraints as much, because 64-bit binary numbers can hold much larger values than 32-bit ones. However, it turned out that the affected email server was running a 32-bit version of Postfix, which meant it was stuck with the limitations of 32-bit binary numbers.

When a program (e.g. Postfix) opens a file (e.g. a user’s inbox) in Linux, some information about the file is passed to the program. The program must store that information in some sort of data structure. In this case, one of the pieces of information is the size of the file, which the 32-bit version of Postfix stores in a 32-bit signed integer. The maximum value that can be stored in a 32-bit signed integer happens to be 2,147,483,647, which is one byte shy of 2 GB expressed in bytes. (Check out the last section of this post for how that number is derived.) This means that the largest file size that can be “understood” by the 32-bit version of Postfix is roughly 2 GB.

This works fine until Postfix encounters a file larger than 2 GB. For example, if you have a 3 GB file, the file size expressed in bytes is 3,221,225,472. That value simply cannot be represented in the 32-bit signed integer that Postfix uses, so the file cannot be opened because its size cannot be understood. This is the root cause of the “this program was built for 32-bit file handles” warning in the mail log.

The GNU Core Utilities FAQ has a paragraph that confirms what the “Value too large for defined data type” error message means:

The message “Value too large for defined data type” is a system error message reported when an operation on a large file is attempted using a non-large file data type. Large files are defined as anything larger than a signed 32-bit integer, or stated differently, larger than 2GB.

Confirming the Issue

I verified that the issue was caused by a 32-bit version of Postfix with the following two steps:

  1. By running the file command on the main Postfix binary, which yielded the following output: ELF 32-bit LSB executable, Intel 80386.
  2. By verifying that the size of the mail spool file that caused the error was larger than 2 GB.

Additionally, the problem affected a few other users, all of whom had mail spool files larger than 2 GB.
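
If you need to run the same checks, a rough sketch looks like this. The binary path below is where the local delivery agent typically lives on Red Hat-style systems; it may differ depending on your distribution or on how Postfix was packaged:

# Check whether the local delivery agent is a 32-bit or 64-bit binary
file /usr/libexec/postfix/local

# Print the size of the affected mail spool file in bytes
stat -c %s /var/spool/mail/username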

Fixing Postfix

Unfortunately, there’s no configuration parameter that you can change or set that will allow a 32-bit installation of Postfix to overcome the innate 32-bit size limit for files. There is an option — mailbox_size_limit — that controls the maximum mailbox size that Postfix will allow, but that is unrelated to the file size limitation imposed by running a 32-bit application.

Given that the issue is caused by running a 32-bit version of Postfix, fixing the problem means recompiling a 64-bit version of Postfix or installing a 64-bit version directly. In my case, the 32-bit version had been installed by a piece of proprietary software, and I had to switch to a different installation of Postfix entirely.

Running file on the fresh installation of Postfix returned ELF 64-bit LSB shared object, x86-64.

Bonus Section: What Determines the Max Size of a 32-bit Signed Integer?

Traditional 32-bit programs are unable to open files larger than 2 GB due to a limitation imposed by the nature of 32-bit signed integers. For reference, a signed integer is a binary number that is stored using a method that indicates whether the number is positive or negative. The opposite of a signed integer is an unsigned integer. Unsigned integers contain no information about the sign of the number and therefore can only be positive.

There are a few different methods that can be used to store signed integers, but the most common method is two’s complement because of the various benefits it provides. Using two’s complement limits the maximum possible value of an integer, as we can see in the following example:

A 32-bit integer gives us 32 binary bits to use for storing data. If you store the number “zero” in a 32-bit unsigned integer, you would get something like this:

00000000 00000000 00000000 00000000

If we set all the binary digits to 1, we get the following number:

11111111 11111111 11111111 11111111

Converted to decimal, this number is 2^32 - 1, or 4,294,967,295. The value is 2^32 - 1 instead of 2^32 because counting starts at zero: there are 2^32 possible values, but the largest of them is 2^32 - 1. This happens to be the maximum value of an unsigned 32-bit integer.

However, we are concerned with the maximum value of a signed integer. If we’re using two’s complement to store the integer, the first bit is reserved to indicate the sign:

10000000 00000000 00000000 00000000

Because we lose the leftmost bit to the sign, the maximum value is reduced to 2^31 - 1, or 2,147,483,647.
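
You can sanity-check these numbers with Bash arithmetic, which uses 64-bit integers on modern systems and therefore has no trouble representing them:

echo $(( (1 << 32) - 1 ))             # 4294967295, the unsigned 32-bit maximum
echo $(( (1 << 31) - 1 ))             # 2147483647, the signed 32-bit maximum
echo $(( 3 * 1024 * 1024 * 1024 ))    # 3221225472, the size of a 3 GB file in bytes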

Posted on by Arnon Erba in Server Logs Explained

(Editor’s note: This post has been updated since publication.)

A couple weeks ago, I covered what a WordPress brute-force attack looks like. However, you may have realized that trying an unlimited number of passwords is futile if you don’t know any valid usernames to guess passwords for. Fortunately for crackers, there’s a simple way to abuse the WordPress “pretty permalinks” feature to obtain valid usernames for a WordPress installation. Fortunately for us, there’s a simple way to block this with Nginx.

The Logs

Like a brute-force attack, a user enumeration attempt is usually pretty easy to spot. The logs usually start out like this:

203.0.113.42 - - [23/Jun/2016:17:04:11 -0700] "GET /?author=1 HTTP/1.1" 302 154 "-" "-"

And then continue like this…

203.0.113.42 - - [23/Jun/2016:17:04:12 -0700] "GET /?author=2 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:13 -0700] "GET /?author=3 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:15 -0700] "GET /?author=4 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:16 -0700] "GET /?author=5 HTTP/1.1" 302 154 "-" "-"

…until the cracker gives up.

203.0.113.42 - - [23/Jun/2016:17:04:18 -0700] "GET /?author=6 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:19 -0700] "GET /?author=7 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:20 -0700] "GET /?author=8 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:22 -0700] "GET /?author=9 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:23 -0700] "GET /?author=10 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:25 -0700] "GET /?author=11 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:26 -0700] "GET /?author=12 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:28 -0700] "GET /?author=13 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:29 -0700] "GET /?author=14 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:30 -0700] "GET /?author=15 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:32 -0700] "GET /?author=16 HTTP/1.1" 302 154 "-" "-"
203.0.113.42 - - [23/Jun/2016:17:04:33 -0700] "GET /?author=17 HTTP/1.1" 302 154 "-" "-"

Why This Works

By default, WordPress uses query strings as permalinks, such as:

http://example.com/?p=123

This example permalink would display a post with the ID number “123”. The post ID is generated when a new post is created. Query strings make for “ugly” permalinks, however, so WordPress allows you to enable “pretty permalinks” using Apache mod_rewrite or a custom try_files directive in Nginx. With “pretty permalinks” enabled, WordPress performs an HTTP 301 redirect from the “ugly” permalink to the “pretty permalink” configured on the Settings > Permalinks screen.

WordPress doesn’t just have IDs for posts, though. Every WordPress user, or author, has a unique ID that maps to their archive page, which is a list of all the posts that they have created. “Ugly” author permalinks look like:

http://example.com/?author=1

When pretty permalinks are enabled, the author archive page looks like:

http://example.com/author/username

This reveals the author’s WordPress username. A simple script can easily enumerate all the usernames on a WordPress site by trying ?author= with sequential numbers, as we saw in the log excerpts above.
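
You can see the leak for yourself with a quick request against the “ugly” permalink. The domain here is a placeholder; on a real site with pretty permalinks enabled, the Location header reveals the username of author ID 1:

curl -sI "http://example.com/?author=1" | grep -i "^location"
# Location: http://example.com/author/username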

Mitigating WordPress User Enumeration Attempts

We can block user enumeration on two levels: by redirecting the “ugly” permalinks, or by redirecting the /author/ pages entirely. Keep in mind that even if you disable the /author/ pages, your username can be discovered through other methods, and you should assume it is publicly available knowledge. However, we can make it difficult for the public to obtain that knowledge.

Disable query-string based user enumeration

A simple if statement works to disable user enumeration using query strings (or “ugly” permalinks). This is a “safe” if statement in Nginx (see the infamous If Is Evil page) since we are using it with a return statement.

if ($args ~ "^/?author=([0-9]*)") {
        return 302 $scheme://$server_name;
}

This code uses a simple regex, or regular expression, to match any request whose query string begins with author= followed by a number. Here’s how it works:

$args is an Nginx variable that holds the query string (everything after the ? in the URI)
~ indicates that we want Nginx to perform a case-sensitive regex match using the regular expression inside the double quotation marks
^ (the caret) anchors the match to the beginning of the query string
/? matches an optional leading slash
author= is the fixed part of the pattern
([0-9]*) is a capturing group that matches any sequence of digits

The return statement then redirects any URIs that fit the pattern.
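
After reloading Nginx, you can verify the rule with another test request. Again, the domain is a placeholder for your own site:

sudo nginx -t && sudo systemctl reload nginx
curl -sI "http://example.com/?author=1" | head -n 1
# HTTP/1.1 302 Moved Temporarily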

Disable WordPress author pages entirely

We can add a simple location block to disable the author pages entirely. This solution is a bit redundant, because you would have to already know the author’s username to access their /author/ archive page, but this is useful if you don’t want author archive pages on your blog for some reason.

Note: this solution, by itself, does not prevent user enumeration, because the intermediate redirect from the query string to the author archive’s pretty permalink is not hidden. In other words, a query-string request will still redirect to the archive URL, revealing the username, before being redirected again by the code below.

location ~ ^/author/(.*)$ {
        return 302 $scheme://$server_name;
}

~ starts a case-sensitive regex match, like above
^ anchors the match to the beginning of the path
/author/ indicates we want paths beginning with /author/ to be matched
(.*) is a capturing group that matches any sequence of characters (excluding newlines)
$ marks the end of the path

If a URI is matched, it is redirected to the root server name using return, like above.