
Posts Tagged #Security

Posted by Arnon Erba in How-To Guides.

SELinux has a well-earned reputation for being hard to use. It’s infamous for causing strange, illogical faults that can’t be fixed via normal troubleshooting routines, and, as a consequence, many guides and blog posts recommend disabling it outright. However, SELinux is a great way to secure and harden Linux systems, and with a few simple steps it’s possible to fix most common problems you might encounter while using it.

Examples of Common Issues

Let’s start by looking at a few issues I’ve had in the past that turned out to be caused by SELinux:

  1. A user could no longer log in with an SSH key after their home directory was restored from a backup. Their authorized_keys file was configured correctly but was being ignored by SSH.
  2. A service wouldn’t start after replacing its config file with a modified version that had been uploaded via SFTP. The service complained about the config file being inaccessible even though its permissions were set correctly.
  3. Postfix couldn’t communicate with OpenDKIM when the latter was set to use a UNIX socket instead of a TCP/IP socket. The Postfix user was in the correct security group and the socket was configured correctly.

Without a general understanding of how SELinux works, you might guess that the issues above were caused by bad file permissions, but in each case the permissions had already been checked and were correct. That’s why it’s important to understand SELinux and to identify it as a possible culprit as early as possible in the troubleshooting process.
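One quick way to rule SELinux in or out early is to check whether it is enabled and enforcing:

# Print the current SELinux mode: Enforcing, Permissive, or Disabled
getenforce

# Print more detail, including the loaded policy type
sestatus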

What is SELinux, Exactly?

At its core, SELinux is a set of rules that tell applications what they can and can’t do. SELinux is separate from the regular Linux file permissions model and is therefore able to protect against issues like misconfigured permissions or privilege escalation exploits. In order for an operation to succeed on an SELinux-enabled system, it must be permitted by file permissions as well as by the active SELinux policy.

Regular file permissions are a form of discretionary access control, or DAC. On the other hand, SELinux is a form of mandatory access control, or MAC. With DAC, a user or service can do anything they have permission to do, even if it’s something undesirable or dangerous. With MAC, malicious or dangerous actions can be stopped, even if a DAC policy would otherwise permit them to happen.

Here’s an example of why you’d want to keep SELinux enabled. Normally, Apache shouldn’t be able to read /etc/shadow, and the default file permissions prevent that from happening. However, if those permissions were misconfigured and Apache was configured to serve files from /etc, it would be possible for anyone with a web browser to download /etc/shadow. A properly configured SELinux policy would deny the request despite both misconfigurations and prevent Apache from serving sensitive system files from /etc.
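To make this concrete: on a stock RHEL-style system, /etc/shadow carries the dedicated shadow_t SELinux type, and the policy contains no rule allowing Apache’s confined httpd_t domain to read files of that type. The label below is a typical default and may vary by distribution:

# Inspect the SELinux label on /etc/shadow
ls -Z /etc/shadow
# system_u:object_r:shadow_t:s0 /etc/shadow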

Putting Things in Context

Extra protection is great, but what happens when SELinux interferes when it shouldn’t? If SELinux is interfering with something “normal” that should otherwise work, chances are you have one simple problem: incorrect file security contexts. Security contexts are how SELinux categorizes files and decides which applications can access them. By default, security contexts are applied to files based on their location. For example, files in home directories get different security contexts from files in /etc or /tmp.

You can inspect a file’s security context with ls -Z, but you’re probably better off using restorecon to reset contexts to their default values if you suspect a problem. To save time, you can run restorecon -rv /path/to/directory to recursively reset the security contexts for an entire directory. If things are bad enough, you can relabel your entire filesystem by running touch /.autorelabel and then rebooting.
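For example, to inspect and repair labels under a user’s home directory (the path here is just an illustration):

# Show the current security context for a file
ls -Z /home/alice/.ssh/authorized_keys

# Recursively restore default contexts for the whole home directory
restorecon -rv /home/alice

# Last resort: relabel the entire filesystem on the next boot
touch /.autorelabel && reboot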

The restorecon command was the solution to problems #1 and #2 from the list at the beginning of this post. Incorrect security contexts can be applied when files are restored from a backup or copied from a nonstandard location.

Adjusting the Policy

In most mainstream Linux distributions, the default SELinux policy is carefully crafted by a group of upstream maintainers. Creating a perfect one-size-fits-all policy is impossible, so the maintainers provide built-in policy exceptions in the form of SELinux booleans. SELinux booleans can be easily enabled or disabled to cover common use cases where the default SELinux policy falls short. If you have an SELinux problem that can’t be fixed by restoring default file security contexts, you should check to see if an available SELinux boolean covers your use case.

You can use getsebool -a to retrieve a list of available booleans on your system and then use setsebool to enable or disable them. Alternatively, you can use the semanage tool to see more detailed information about available booleans; a short usage sketch follows the list below. Examples of SELinux booleans include:

  • use_nfs_home_dirs: Support NFS home directories.
  • httpd_can_network_connect: Allow HTTPD scripts and modules to connect to the network.
  • ftpd_full_access: Allow full filesystem access over FTP.
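For example, you might check and then persistently enable one of these booleans like so (the -P flag writes the change to the policy so it survives reboots):

# List every boolean and its current value
getsebool -a

# Check one specific boolean
getsebool httpd_can_network_connect

# Enable it persistently
setsebool -P httpd_can_network_connect on

# List booleans along with short descriptions
semanage boolean -l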

Rewriting the Policy

If fixing security contexts and enabling booleans hasn’t worked, ask yourself if you’re doing something abnormal. “Abnormal” in this context might include running a service on a nonstandard port, serving web files from an unconventional location, or moving config files out of their default directory. If you are, there’s a good chance your system’s default SELinux policy won’t cover your use case.
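Before reaching for a custom policy module, note that semanage can often teach the stock policy about simple deviations like these. The port and path below are hypothetical examples:

# Allow sshd to listen on a nonstandard port
semanage port -a -t ssh_port_t -p tcp 2222

# Label an unconventional web root so Apache can serve files from it
semanage fcontext -a -t httpd_sys_content_t "/srv/website(/.*)?"
restorecon -rv /srv/website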

Before you proceed, you should think hard about what benefit you’re getting from running a nonstandard configuration. Standards exist for good reasons: troubleshooting is easier, malicious activity is simpler to detect, and applications can be configured to behave more predictably. With that said, there’s plenty of vendor software out there that relies on an “abnormal” configuration to work properly.

If you’ve evaluated your configuration and decided to proceed, you have two options. First, you may have discovered a bug in your platform’s SELinux policy, which means you should submit a bug report so that the policy can be fixed upstream. This is the course I ended up pursuing for the OpenDKIM issue mentioned above, and Red Hat updated the upstream policy after a few months.

Alternatively, you can write and compile a custom SELinux policy module. This is not as difficult as it sounds, as audit2allow can generate SELinux modules directly from audit log entries. A brief description of how to make use of the audit log is below, but a full explanation is beyond the scope of this post.
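As a rough sketch, generating and installing a module from recent denials looks like this (the module name is arbitrary):

# Build a policy module from recent AVC denials in the audit log
ausearch -m avc -ts recent | audit2allow -M mymodule

# Review the generated mymodule.te file, then install the compiled module
semodule -i mymodule.pp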

The Audit Log

By default, SELinux violations are logged to the audit log at /var/log/audit/audit.log. The best way to troubleshoot potential SELinux issues is to consult the audit log, but the default log format is not particularly user-friendly and raw entries are not always easy to understand. Instead of reading the audit log file directly, you can search the log with the ausearch tool or generate comprehensive, human-readable reports from it with the sealert tool. A full description of how to use those programs is provided by the documents in the “Read More” section at the bottom of this post.
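For instance, you can pull recent denials out of the log or generate a full report like this:

# Show AVC denial records from the last ten minutes
ausearch -m avc -ts recent

# Produce a human-readable analysis of the entire audit log
sealert -a /var/log/audit/audit.log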

Wrapping Up

SELinux has been around for a long time, and many mainstream Linux distributions now ship with robust SELinux policies that cover a range of use cases. Additionally, configuration management tools like Puppet can automatically set SELinux contexts for you and help you avoid inadvertently mislabeling files.

That said, the default SELinux policy can’t possibly cover every use case, so you may still need to enable SELinux booleans or compile custom policy modules to make SELinux work for you. In any case, you should avoid disabling it outright, especially if you’re running a derivative of Fedora such as RHEL or CentOS where SELinux is intended to be the primary form of mandatory access control.

Read More

The banner image for this post was created by The Worlds Beyond.

Posted by Arnon Erba in News.

Update 1/16/20: According to Namecheap, the issues with DNSSEC have been resolved as of 2:00 AM EST (11:00 PM PST).

Have a domain registered at Namecheap with DNSSEC turned on? Now might be a good time to check if it still resolves.

Since at least 11:21pm Eastern Standard Time (8:21pm Pacific Standard Time) today, DNSSEC for domain names on Basic/PremiumDNS has been broken. So far, the issue appears to be caused by an expired signing key, but according to the latest status update “there is no current timeline for resolution of this issue”. This is a fairly serious problem: DNSSEC validation for affected domain names will fail, causing websites and services to become inaccessible to some users.
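If you want to test a domain yourself, a validating resolver such as Google’s 8.8.8.8 will return SERVFAIL for a zone with broken DNSSEC (example.com below is a placeholder):

# A SERVFAIL response from a validating resolver suggests broken DNSSEC
dig example.com @8.8.8.8

# Request DNSSEC records and look for the “ad” (authenticated data) flag
dig +dnssec example.com @8.8.8.8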

The full text of the status update is copied below. This post will be updated if the status of the incident changes.

We are currently experiencing temporary technical issues with DNSSEC for domain names on Basic/PremiumDNS. If your domain name has DNSSEC option enabled, it may cause DNS performance issues. Unfortunately, there is no current timeline for resolution of this issue. We will keep you updated on the progress. Meanwhile, please contact our Support Team for assistance and more details. Please accept our sincere apologies for the inconvenience. Thank you for your continued support and patience.

Oh well, maybe no one is using DNSSEC anyway.

Posted by Arnon Erba in News.

If you saw a headline earlier this week about a critical security flaw in VLC media player, you may not have gotten the whole story. In fact, the issue is not nearly as serious as it originally seemed.

About a month ago, a user opened a bug report for a crash in VLC caused by a specially crafted MP4 file. With the cause of the crash still undetermined, MITRE assigned the bug a CVE identifier and gave it a “critical” score of 9.8.

With the bug’s true cause and impact still undetermined, Germany’s CERT-Bund issued an alert of their own warning of a critical flaw in VLC. Worse, because the now several-week-old VLC bug report did not list any significant progress by the VideoLAN team, CERT-Bund announced that no patch was available. The alert kicked off a flurry of other news articles that culminated in a misguided warning from Gizmodo to completely uninstall VLC.

Not a VLC Bug

The only problem was that there was never anything wrong with VLC in the first place. The crash described in the bug report was the result of a vulnerability in libEBML, a third-party library that VLC depends on. However, according to a thread on Twitter from the VideoLAN team, a patched version of libEBML has been shipped with VLC for over a year. It appears the bug report was generated from a Linux system with an older, vulnerable version of libEBML installed.

With that in mind, the CVE score was lowered to “medium” and the report in the VLC bug tracker was closed. Ubuntu released an update for libEBML, and Gizmodo withdrew their doomsday-level announcement. In the end, no patch for VLC is currently required, though some Linux distributions may need to make an updated version of libEBML available.


Posted by Arnon Erba in News.

This morning, Apple released iOS 12.1.4, an incremental update that fixes several security issues including the Group FaceTime eavesdropping bug from last month. The Group FaceTime service has also been re-enabled for devices running iOS 12.1.4 or higher.

The eavesdropping bug, discovered accidentally in January by a 14-year-old from Arizona, caused certain Group FaceTime calls to automatically connect even if the recipient did not answer the call. This flaw allowed macOS or iOS users to be eavesdropped on by any malicious FaceTime user. The bug was disclosed privately to Apple by the teen and his mother at least a week before it went public, but it appears that Apple did not clearly or immediately respond to the bug reports they filed.

Shortly after the bug went viral on January 28th, Apple took the Group FaceTime service offline as a temporary fix before a patch could be released. On February 1st, with Group FaceTime still offline, Apple announced that the bug had been fixed server-side and that a client-side software update to fully resolve the issue would be available the week of February 4th.


Posted by Arnon Erba in Server Logs Explained.

In today’s world, the exhaustion of IPv4 addresses and the slow adoption of IPv6 means that publicly routable IPv4 addresses are in high demand. It also means that when you spin up a cloud-based virtual private server using a service like Digital Ocean, Linode, or Amazon Web Services (AWS), you’ll almost certainly get an IPv4 address that was previously in use by someone else. In the worst case, your new IP address might be on some blacklists, but the most likely situation is that you’ll get some extra “background noise” in your server logs.

The Logs

167.114.0.63 - - [06/May/2018:03:10:08 +0000] "GET /0f0qa0a/captive_portal.html HTTP/1.1" 404 152 "-" "Go-http-client/1.1"
37.187.139.66 - - [06/May/2018:03:10:41 +0000] "GET /0f0qa0a/captive_portal.html HTTP/1.1" 404 152 "-" "Go-http-client/1.1"

These Nginx logs were pulled from a fresh virtual private server that I created with a new-to-me IPv4 address. If you’re curious, my original Server Logs Explained post contains a breakdown of the log format I’m using, but I’ll cover what these log excerpts mean in this post as well.

Essentially, two completely different IP addresses performed an HTTP GET request for the same resource, /0f0qa0a/captive_portal.html. Unable to provide this mysterious file, my server responded with 404 Not Found. This pair of log entries became much more interesting when I noticed that my server kept getting the same two requests from the same two remote IP addresses every few seconds.

Some Detective Work

First of all, these log entries are completely harmless. Anyone can request any random page from a web server, and the server should return a 404 response if the page does not exist. At this point, curiosity is the only reason to continue exploring the source of the two requests.

A cursory WHOIS lookup on both IPs reveals that they are owned by OVH Hosting, a French company that provides cloud-based hosting services. It’s similar to the other cloud hosting companies I mentioned at the beginning of this post. It isn’t a big leap to assume that both IPs belong to virtual servers hosted by OVH, so with that assumption in mind, let’s move forward.
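A lookup like that one is easy to reproduce (output trimmed and paraphrased):

# Query the regional registry for the owner of the address
whois 167.114.0.63 | grep -i orgname
# OrgName: OVH Hosting, Inc.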

Next, a reverse DNS (rDNS) lookup on each IP address yields the following interesting results:

167.114.0.63 maps to prometheus-nodes-ca-ovh.afdevops.com
37.187.139.66 maps to prometheus-nodes-eu-ovh.afdevops.com
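For reference, these were ordinary PTR lookups, reproducible with dig:

# Reverse (PTR) lookup for each address
dig -x 167.114.0.63 +short
# prometheus-nodes-ca-ovh.afdevops.com.

dig -x 37.187.139.66 +short
# prometheus-nodes-eu-ovh.afdevops.com.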

Right away, there’s “ovh” in the hostnames for the two servers, which seems to confirm that OVH Hosting was a good guess. There’s also something else interesting about these results: 167.114.0.63 is a Canadian IP address, and it has “ca” in its hostname. On top of that, 37.187.139.66 is a French IP address, and it has “eu” in its hostname. It’s beginning to look like these two servers are part of some company’s public-facing server infrastructure.

Let’s look at the far left section of the hostnames, prometheus-nodes. There’s no law that dictates how you should name your servers, but it is pretty common to give them logical names that correspond to the software that runs on them. With that in mind, what is “prometheus”?

Prometheus is a real piece of software. Its GitHub page describes it as “a systems and service monitoring system”, which would explain the persistent requests in my server logs. Monitoring solutions work by constantly checking a service, evaluating the responses they receive, and notifying administrators if something looks wrong. It seems reasonable to conclude that some important service was being hosted by the previous owner of my IP address, and someone forgot to reconfigure their monitoring solution after decommissioning the server.

There’s still more we can learn from the reverse hostnames. A simple Google search for one of the hostnames turns up this GitHub issue from February, where a bunch of information is listed that confirms that the server runs Prometheus. On top of that, the user who opened the issue claims to work for a company called “AnchorFree”.

AnchorFree — could that have anything to do with the afdevops.com portion of the hostnames? Even though there’s no public-facing website associated with that domain, Googling for “afdevops.com” turns up this user profile on Docker Hub. Guess what’s listed in the bio for that user? Anchorfree, Inc.

There’s something else, too. A WHOIS lookup on “afdevops.com” reveals that it is registered under the real name of one of the co-founders of AnchorFree. That was easy. Too bad GDPR is killing WHOIS at the end of this month. (Ed: This was accurate when this post was drafted in 2018.)

Now that we know that both servers are owned by AnchorFree, let’s figure out what AnchorFree actually is. Wikipedia and AnchorFree’s actual website both confirm something interesting — AnchorFree is the parent company behind Hotspot Shield, a fairly well-known “freemium” VPN app.

The Conclusion

Nothing in this post is particularly revealing, but it’s interesting to consider what happens when IP addresses owned by cloud service providers get reused. If you incorporate cloud service provider IP addresses into your server infrastructure, it’s important to remember that those addresses may be recycled for other customers in the future. If you set up important services or access control lists based on IP addresses you don’t own, it’s possible to introduce problems that won’t become apparent until later. In cases like this, it might be as simple as having your monitoring solution check the wrong IP address for a few days.

On the other hand, consider what might happen if you decommissioned a server but forgot to remove its forward DNS entry. If a malicious actor gained control of its old IP address, they could set up a phishing website that would appear to be hosted on your domain. Change control and detailed documentation are important when it comes to using public cloud services.

Update (1/9/20)

A reader contacted me in October 2019 to let me know he had been hit by similar requests from two additional IP addresses:

167.114.65.142, or prometheus-longterm-ca-ovh.afdevops.com
167.114.119.75, or prometheus-dev-ca-ovh.afdevops.com

I also happened across a completely unrelated blog post from 2018 that documents the same strange requests for /0f0qa0a/captive_portal.html that were the basis for this post. In that case, the author chose to block the repetitive requests with his firewall.

Finally, on a whim, I tried accessing one of the IP addresses in a web browser and was taken to a login page asking for AnchorFree SSO credentials. By itself, that isn’t particularly surprising, since we already know that AnchorFree runs those servers. In any case, it would be great if AnchorFree did a little auditing of their public cloud infrastructure.

Posted by Arnon Erba in News.

As announced in February of this year, Google is evolving Chrome’s design to more clearly indicate to users that websites using plain HTTP are not loaded securely and that HTTPS connections should be expected instead. Today, Chrome is pushing a change that affects all HTTP sites worldwide: starting in version 68, Chrome will display a “Not Secure” warning in the address bar for all sites loaded over HTTP.

This isn’t the first change Chrome has made to clearly indicate that HTTP is not secure. Chrome has been marking HTTP traffic as “Not Secure” in Incognito mode as far back as version 62. The “Not Secure” warning has also been appearing for HTTP sites in Chrome’s normal mode when a page contains a password field or when the user interacts with any input field.

Although Chrome has taken the lead, Mozilla Firefox is also on board with the effort to visually flag HTTP pages as insecure. Firefox currently displays an address bar warning for HTTP sites that contain login forms and displays a visible warning message next to login forms that are served insecurely.

What’s Next

The future will bring more changes to the way Chrome visually handles HTTP and HTTPS connections. As I covered back in May, Chrome is scheduled to remove the “Secure” text from HTTPS connections in September with the release of Chrome 69. One month later, in October 2018, Chrome 70 will color the HTTP “Not Secure” warning red when users enter data into insecure sites. Ed: A previous version of this post inaccurately reflected the circumstances in which the “Not Secure” warning will be colored red in Chrome 70. The color will only change when users enter data on HTTP pages.