
Category: How-To Guides

The How-To Guides category is for posts that explain how to fix an issue, configure a device or a piece of software, or make something work.

Posted by Arnon Erba in How-To Guides.

SELinux has a well-earned reputation for being hard to use. It’s infamous for causing strange, illogical faults that can’t be fixed via normal troubleshooting routines, and, as a consequence, many guides and blog posts recommend disabling it outright. However, SELinux is a great way to secure and harden Linux systems, and with a few simple steps it’s possible to fix most common problems you might encounter while using it.

Examples of Common Issues

Let’s start by looking at a few issues I’ve had in the past that turned out to be caused by SELinux:

  1. A user could no longer log in with an SSH key after their home directory was restored from a backup. Their authorized_keys file was configured correctly but was being ignored by SSH.
  2. A service wouldn’t start after replacing its config file with a modified version that had been uploaded via SFTP. The service complained about the config file being inaccessible even though its permissions were set correctly.
  3. Postfix couldn’t communicate with OpenDKIM when the latter was set to use a UNIX socket instead of a TCP/IP socket. The Postfix user was in the correct security group and the socket was configured correctly.

Without a general understanding of how SELinux works, you might guess that the issues above were caused by bad file permissions. That’s why it’s important to understand SELinux and to identify it as a possible culprit as early as possible in the troubleshooting process.

What is SELinux, Exactly?

At its core, SELinux is a set of rules that tell applications what they can and can’t do. SELinux is separate from the regular Linux file permissions model and is therefore able to protect against issues like misconfigured permissions or privilege escalation exploits. In order for an operation to succeed on an SELinux-enabled system, it must be permitted by file permissions as well as by the active SELinux policy.

Regular file permissions are a form of discretionary access control, or DAC. On the other hand, SELinux is a form of mandatory access control, or MAC. With DAC, a user or service can do anything they have permission to do, even if it’s something undesirable or dangerous. With MAC, malicious or dangerous actions can be stopped, even if a DAC policy would otherwise permit them to happen.

Here’s an example of why you’d want to keep SELinux enabled. Normally, Apache shouldn’t be able to read /etc/shadow, and the default file permissions prevent that from happening. However, if those permissions were misconfigured and Apache was configured to serve files from /etc, it would be possible for anyone with a web browser to download /etc/shadow. A properly configured SELinux policy would override both misconfigurations and prevent Apache from serving sensitive system files from /etc.
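
You can see this labeling for yourself with ls -Z. On a Red Hat-style system, /etc/shadow carries the shadow_t type, which the policy does not allow the Apache domain (httpd_t) to read, no matter what the file permissions say. The exact output varies a little between distributions, but it looks roughly like this:

ls -Z /etc/shadow
system_u:object_r:shadow_t:s0 /etc/shadow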

Putting Things in Context

Extra protection is great, but what happens when SELinux interferes when it shouldn’t? If SELinux is interfering with something “normal” that should otherwise work, chances are you have one simple problem: incorrect file security contexts. Security contexts are how SELinux categorizes files and decides which applications can access them. By default, security contexts are applied to files based on their location. For example, files in home directories get different security contexts from files in /etc or /tmp.

You can inspect a file’s security context with ls -Z, but you’re probably better off using restorecon to reset contexts to their default values if you suspect a problem. To save time, you can run restorecon -rv /path/to/directory to recursively reset the security contexts for an entire directory. If things are bad enough, you can relabel your entire filesystem by running touch /.autorelabel and then rebooting.

The restorecon command was the solution to problems #1 and #2 from the list at the beginning of this post. Files can end up with incorrect security contexts when they are restored from a backup or moved into place from a nonstandard location, since a moved file keeps its original context instead of picking up the default context for its new location.
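
For example, the SSH key issue in problem #1 can usually be fixed by resetting the contexts under the affected user's home directory (the path here is only an example):

restorecon -rv /home/username/.ssh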

Adjusting the Policy

In most mainstream Linux distributions, the default SELinux policy is carefully crafted by a group of upstream maintainers. Creating a perfect one-size-fits-all policy is impossible, so the maintainers provide built-in policy exceptions in the form of SELinux booleans. SELinux booleans can be easily enabled or disabled to cover common use cases where the default SELinux policy falls short. If you have an SELinux problem that can’t be fixed by restoring default file security contexts, you should check to see if an available SELinux boolean covers your use case.

You can use getsebool -a to retrieve a list of available booleans on your system and then use setsebool to enable or disable them. Alternatively, you can use the semanage tool to see more detailed information about available booleans. Examples of SELinux booleans include:

  • use_nfs_home_dirs: Support NFS home directories.
  • httpd_can_network_connect: Allow HTTPD scripts and modules to connect to the network.
  • ftpd_full_access: Allow full filesystem access over FTP.
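
For example, to check and then persistently enable the NFS home directory boolean from the list above, you could run something like the following (the -P flag makes the change survive a reboot):

getsebool use_nfs_home_dirs
setsebool -P use_nfs_home_dirs on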

Rewriting the Policy

If fixing security contexts and enabling booleans hasn’t worked, ask yourself if you’re doing something abnormal. “Abnormal” in this context might include running a service on a nonstandard port, serving web files from an unconventional location, or moving config files out of their default directory. If you are, there’s a good chance your system’s default SELinux policy won’t cover your use case.

Before you proceed, you should think hard about what benefit you’re getting from running a nonstandard configuration. Standards exist for good reasons: troubleshooting is easier, malicious activity is simpler to detect, and applications can be configured to behave more predictably. With that said, there’s plenty of vendor software out there that relies on an “abnormal” configuration to work properly.

If you’ve evaluated your configuration and decided to proceed, you have two options. First, you may have discovered a bug in your platform’s SELinux policy, which means you should submit a bug report so that the policy can be fixed upstream. This is the course I ended up pursuing for the OpenDKIM issue mentioned above, and Red Hat updated the upstream policy after a few months.

Alternatively, you can write and compile a custom SELinux policy module. This is not as difficult as it sounds, as audit2allow can generate SELinux modules directly from audit log entries. A brief description of how to make use of the audit log is below, but a full explanation is beyond the scope of this post.
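
As a rough sketch of that workflow, the commands below pull recent denials out of the audit log, build a policy module from them, and load it; the module name mypolicy is arbitrary:

ausearch -m AVC -ts recent | audit2allow -M mypolicy
semodule -i mypolicy.pp

Be sure to review the generated mypolicy.te file before loading the module, since audit2allow will happily allow anything it finds in the log.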

The Audit Log

By default, SELinux violations are logged to the audit log at /var/log/audit/audit.log. The best way to troubleshoot potential SELinux issues is to consult the audit log, but the default log format is not particularly user-friendly and raw entries are not always easy to understand. Instead of reading the audit log file directly, you can search the log with the ausearch tool or generate comprehensive, human-readable reports from it with the sealert tool. A full description of how to use those programs is provided by the documents in the “Read More” section at the bottom of this post.
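
As a starting point, something like the commands below will show recent denials and a human-readable analysis of them; sealert is typically provided by the setroubleshoot-server package on Red Hat-based systems:

ausearch -m AVC -ts today
sealert -a /var/log/audit/audit.log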

Wrapping Up

SELinux has been around for a long time, and many mainstream Linux distributions now ship with robust SELinux policies that cover a range of use cases. Additionally, configuration management tools like Puppet can automatically set SELinux contexts for you and help you avoid inadvertently mislabeling files.

That said, the default SELinux policy can't cover every possible use case, so you may still need to enable SELinux booleans or compile custom policy modules to make SELinux work for you. In any case, you should avoid disabling it outright, especially if you're running a derivative of Fedora such as RHEL or CentOS where SELinux is intended to be the primary form of mandatory access control.

Read More

The banner image for this post was created by The Worlds Beyond.

Posted by Arnon Erba in How-To Guides.

On Windows and macOS, Stata can be configured to check for updates automatically with the set update_query command. However, there are a few drawbacks to this approach.

First, this feature isn't present in the Linux version of Stata. Second, the command doesn't actually update Stata; it only enables update notifications. Stata still needs to be updated manually by someone with permission to do so.

If you’re running Stata on a standalone Linux server or an HPC cluster, you may be interested in having Stata update itself without any user interaction. This is especially useful if Stata users do not have permission to update the software themselves, as is often the case on shared Linux systems.

We can enable true automatic updates with a cron job and a Stata batch mode hack:

0 0 * * 0 echo 'update all' | /usr/local/stata16/stata > /dev/null

Adding this line to root's crontab will cause the update all command to run every Sunday at 12:00 a.m. Standard output is redirected to /dev/null to prevent cron from sending unnecessary emails.

As always, think carefully before enabling automatic updates for mission-critical pieces of software. However, this approach can save time over updating Stata manually.

Posted by Arnon Erba in How-To Guides.

Ubuntu has been using update-motd as a MOTD (Message of the Day) generator for several years. Some of the default messages — such as the number of available security patches — can be helpful, but not everyone likes being greeted by a barrage of text every time they log in to their server. In this article, we’ll explore how to adjust, disable, or replace the dynamic MOTD in Ubuntu.

Before You Begin

If you’d rather work with update-motd than turn it off, detailed documentation for changing its output is available in the man page for update-motd. Essentially, the dynamic MOTD is generated by a collection of executable scripts found in the /etc/update-motd.d/ directory. These scripts can be updated, removed, or reordered, and new scripts can be added.
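
For example, to see which scripts are present and silence a single one without deleting it, you can remove that script's execute bit; the script name below is just one that commonly appears on Ubuntu systems:

ls /etc/update-motd.d/
chmod -x /etc/update-motd.d/10-help-text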

Disabling the Dynamic MOTD

While Ubuntu does not provide a way to directly uninstall update-motd, it is possible to disable it by adjusting a few PAM options. Two lines, found in both /etc/pam.d/login and /etc/pam.d/sshd, control how update-motd runs on login:

session optional pam_motd.so motd=/run/motd.dynamic
session optional pam_motd.so noupdate

Commenting out those lines in both files will prevent the pam_motd.so module from being loaded and will completely disable the dynamic MOTD.
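
After the change, the relevant lines in each file should look something like this:

#session optional pam_motd.so motd=/run/motd.dynamic
#session optional pam_motd.so noupdate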

Bonus Section: Enabling a Static MOTD

If you still want a message printed when you log in via SSH, you can configure OpenSSH to display a traditional static MOTD. From the man page for sshd_config:

PrintMotd
Specifies whether sshd should print /etc/motd when a user logs in interactively. (On some systems it is also printed by the shell, /etc/profile, or equivalent.) The default is “yes”.

Ubuntu disables this option by default and incorporates /etc/motd into its dynamic generator, but we can re-enable the option to make /etc/motd work again. Add or uncomment the following line in /etc/ssh/sshd_config and restart the OpenSSH daemon to have OpenSSH print /etc/motd on login:

PrintMotd yes
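
If /etc/motd doesn't already exist, you can create it with whatever message you like and then restart the daemon; on Ubuntu the OpenSSH service unit is named ssh:

echo "Welcome to this server" > /etc/motd
systemctl restart ssh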

Posted by Arnon Erba in How-To Guides.

If you have a recent business-class Dell PC with TPM version 1.2, you may be able to upgrade it to TPM version 2.0. Several Dell models are capable of switching between TPM version 1.2 and 2.0 provided a few conditions are met.

Prerequisites

First, your PC must support switching to TPM 2.0. Most supported models are listed in the “Compatible Systems” section of the instructions for the Dell TPM 2.0 Firmware Update Utility itself. If you can’t find your system in that list, there’s a good chance it isn’t supported by this process.

Second, your PC should be configured in UEFI Boot Mode instead of Legacy Boot Mode. Switching boot modes generally requires a reinstallation of Windows, so it’s best to choose UEFI from the start.

Finally, while optional, it’s recommended that you update your BIOS to the latest version. You can get your serial number by running wmic bios get serialnumber from within PowerShell or Command Prompt. Then, you can provide this serial number to the Dell support website to find the latest drivers and downloads for your PC.

Once you’re ready, you can clear the TPM and run the firmware update utility. However, since Windows will automatically take ownership of a fresh TPM after a reboot by default, we have to take some additional steps to make sure the TPM stays deprovisioned throughout the upgrade process.

Step-By-Step Instructions

  1. First, launch a PowerShell window with administrative privileges. Then, run the following command to disable TPM auto-provisioning (we’ll turn it back on later):
    PS C:\> Disable-TpmAutoProvisioning 
  2. Next, reboot, and enter the BIOS settings. Navigate to “Security > TPM 1.2/2.0 Security”. If the TPM is turned off or disabled, enable it. Otherwise, click the “Clear” checkbox and select “Yes” to clear the TPM settings.
  3. Then, boot back to Windows, and download the TPM 2.0 Firmware Update Utility. Run the package, which will trigger a reboot similar to a BIOS update.
  4. When your PC boots back up, run the following command in another elevated PowerShell window:
    PS C:\> Enable-TpmAutoProvisioning 
  5. Reboot your PC again so that Windows can automatically provision the TPM. While you’re rebooting, you can take this opportunity to enter the BIOS and ensure that Secure Boot is enabled (Legacy Option ROMs under “General > Advanced Boot Options” must be disabled first).
  6. Finally, check tpm.msc or the Windows Security app to ensure that your TPM is active and provisioned.

Posted by Arnon Erba in How-To Guides.

Let’s Encrypt has steadily improved since its public debut in late 2015. Certbot, the most popular Let’s Encrypt client, is available for a wide variety of Linux distributions, making it easy to integrate Let’s Encrypt with many common web server configurations. However, because of this broad support, and because Certbot offers many internal options, there are several different ways to integrate Certbot with Nginx.

If you run Certbot with the --nginx flag, it will automatically make whatever changes are necessary to your Nginx configuration to enable SSL/TLS for your website. On the other hand, if you’d prefer to handle the Nginx configuration separately, you can run Certbot with the --webroot flag. In this mode, Certbot will still fetch a certificate, but it’s up to you to integrate it with Nginx.
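
For reference, the two approaches look roughly like this; the domain and webroot path are placeholders:

certbot --nginx -d example.com
certbot certonly --webroot -w /var/www/example.com -d example.com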

Once you’ve obtained certificates from Let’s Encrypt, you’ll need to set up a method to automatically renew them, since they expire after just 90 days. On Ubuntu 18.04, the “certbot” package from the Ubuntu repositories includes an automatic renewal framework right out of the box. However, you’ll also need to reload your web server so it can actually serve the renewed certificates. The packaged renewal scripts on Ubuntu won’t restart Nginx unless you used the --nginx flag to request certificates in the first place. If you’re using --webroot or some other method, there’s an additional important step to take.

Automatically Restarting Nginx

On Ubuntu 18.04, Certbot comes with two automated methods for renewing certificates: a cron job, located at /etc/cron.d/certbot, and a systemd timer. The cron job is scheduled to run every 12 hours but only takes effect if systemd is not active. When systemd is running, the systemd timer (visible in the output of systemctl list-timers) works in tandem with the certbot systemd service to handle certificate renewals.
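
If you want to confirm what's scheduled on your system, the following commands show the timer's next run and the service it triggers (the unit names here come from the Ubuntu certbot package):

systemctl list-timers certbot.timer
systemctl cat certbot.service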

Instead of modifying the cron job or the systemd service, we can change Certbot’s renewal behavior by editing a config file. Add the following line to /etc/letsencrypt/cli.ini:

deploy-hook = systemctl reload nginx

This will cause Certbot to reload Nginx after it renews a certificate. With the deploy-hook option, Certbot will only reload Nginx when a certificate is actually renewed, not every time the Certbot renewal check runs. Ed: A previous version of this post recommended using renew-hook instead. This option has been superseded by deploy-hook.

You can verify that your changes are working by running certbot renew --dry-run. This will not renew any certificates but will tell you if your deploy-hook command is being picked up by Certbot.

A Little Background Information

If you’re new to Let’s Encrypt, and you’re wondering why you need to automatically renew your certificates and restart your web server when you get new ones, it’s a good thing you’re here. While “traditional” SSL/TLS certificates are manually requested and can be valid for up to two years, certificates from Let’s Encrypt are only valid for 90 days. In their blog post, the Let’s Encrypt team explains their reasoning behind such short certificate lifetimes: they limit the time period for damage to be caused by stolen keys or mis-issued certificates, and they heavily encourage automation, which is key to the success of the Let’s Encrypt model.

This means that you’re going to need to automatically renew your certificates in order to take full advantage of Let’s Encrypt. Fortunately, since this is how Let’s Encrypt is designed to work, auto-renewal functionality is built directly into Certbot, the recommended ACME client for Let’s Encrypt.

A slightly less obvious question is why you'd want to automatically restart your web server as well. The answer is simple: web servers such as Apache and Nginx don't read your SSL/TLS certificates directly from disk every time they need them. Instead, they load them into memory along with the rest of the web server configuration. This is great, and perfectly normal, since reading the certificates from disk on every request would be horribly inefficient. However, it means that renewing a certificate with Let's Encrypt won't change the certificate that Apache or Nginx serves when a page is requested. Instead, the web server must be reloaded or restarted so that it loads the new certificate into memory.

Posted by Arnon Erba in How-To Guides.

Julia, the fast-moving and popular open source programming language for scientific computing, allows the use of multiple BLAS implementations. Pre-built Julia binaries ship with OpenBLAS because of licensing restrictions surrounding the Intel Math Kernel Library (MKL), but by building Julia from source you can replace OpenBLAS with a free copy of MKL obtained from Intel's Yum or Apt repositories. At the time of writing, instructions for this process are available in the Julia GitHub repository.

Determining the BLAS Vendor

Regardless of which BLAS implementation you choose, it is nice to check that Julia is actually using the one you want, especially if you are building Julia from source. In recent versions of Julia, you can run the following two commands in the Julia REPL to find your BLAS vendor:

julia> using LinearAlgebra
julia> LinearAlgebra.BLAS.vendor()

The second command should return a symbol, such as :openblas64 or :mkl, indicating which BLAS implementation your Julia installation is currently built against.