
Posted on by Arnon Erba in How-To Guides

If you have a recent business-class Dell PC with TPM version 1.2, you may be able to upgrade it to TPM version 2.0. Several Dell models are capable of switching between TPM version 1.2 and 2.0 provided a few conditions are met.

Prerequisites

First, your PC must support switching to TPM 2.0. Dell provides a partial list of TPM 2.0-capable models, while more models are listed in the “Compatible Systems” section of the instructions for the Dell TPM 2.0 Firmware Update Utility itself. If you can’t find your system in either of those lists, there’s a good chance it isn’t supported by this process.

Second, your PC should be configured in UEFI Boot Mode instead of Legacy Boot Mode. Switching boot modes generally requires a reinstallation of Windows, so it’s best to choose UEFI from the start.

Finally, while optional, it’s recommended that you update your BIOS to the latest version. You can get your serial number by running wmic bios get serialnumber from within PowerShell or Command Prompt. Then, you can provide this serial number to the Dell support website to find the latest drivers and downloads for your PC.

Once you’re ready, you can clear the TPM and run the firmware update utility. However, since Windows will automatically take ownership of a fresh TPM after a reboot by default, we have to take some additional steps to make sure the TPM stays deprovisioned throughout the upgrade process.

Step-By-Step Instructions

  1. First, launch a PowerShell window with administrative privileges. Then, run the following command to disable TPM auto-provisioning (we’ll turn it back on later):
    PS C:\> Disable-TpmAutoProvisioning 
  2. Next, reboot, and enter the BIOS settings. Navigate to “Security > TPM 1.2/2.0 Security”. If the TPM is turned off or disabled, enable it. Otherwise, click the “Clear” checkbox and select “Yes” to clear the TPM settings.
  3. Then, boot back to Windows, and download the TPM 2.0 Firmware Update Utility. Run the package, which will trigger a reboot similar to a BIOS update.
  4. When your PC boots back up, run the following command in another elevated PowerShell window:
    PS C:\> Enable-TpmAutoProvisioning 
  5. Reboot your PC again so that Windows can automatically provision the TPM. While you’re rebooting, you can take this opportunity to enter the BIOS and ensure that Secure Boot is enabled (Legacy Option ROMs under “General > Advanced Boot Options” must be disabled first).
  6. Finally, check tpm.msc or the Windows Security app to ensure that your TPM is active and provisioned.


Posted on by Arnon Erba in News

Update 4/29/19: The bug affecting printing in Google Calendar appears to be fixed.

Trying to print your Google Calendar but keep getting a broken print preview window? Try enabling the “Show weekends” option under the Day/Week/Month/Year dropdown menu. If you don’t, you may be unable to print your calendar from any view.

It looks like this is a server-side issue, since a 500 error is logged to the browser console when the print preview window fails to load. Hopefully, Google will release a fix for Calendar in the near future, as the issue has already been reported on the Calendar forums.

Posted on by Arnon Erba in News

This morning, Apple released iOS 12.1.4, an incremental update that fixes several security issues including the Group FaceTime eavesdropping bug from last month. The Group FaceTime service has also been re-enabled for devices running iOS 12.1.4 or higher.

The eavesdropping bug, discovered accidentally in January by a 14-year-old from Arizona, caused certain Group FaceTime calls to automatically connect even if the recipient did not answer the call. This flaw allowed macOS or iOS users to be eavesdropped on by any malicious FaceTime user. The bug was disclosed privately to Apple by the teen and his mother at least a week before it went public, but it appears that Apple did not clearly or immediately respond to the bug reports they filed.

Shortly after the bug went viral on January 28th, Apple took the Group FaceTime service offline as a temporary fix before a patch could be released. On February 1st, with Group FaceTime still offline, Apple announced that the bug had been fixed server-side and that a client-side software update to fully resolve the issue would be available the week of February 4th.


Posted on by Arnon Erba in Server Logs Explained

In today’s world, the exhaustion of IPv4 addresses and the slow adoption of IPv6 means that publicly routable IPv4 addresses are in high demand. It also means that when you spin up a cloud-based virtual private server using a service like Digital Ocean, Linode, or Amazon Web Services (AWS), you’ll almost certainly get an IPv4 address that was previously in use by someone else. In the worst case, your new IP address might be on some blacklists, but the most likely situation is that you’ll get some extra “background noise” in your server logs.

The Logs

167.114.0.63 - - [06/May/2018:03:10:08 +0000] "GET /0f0qa0a/captive_portal.html HTTP/1.1" 404 152 "-" "Go-http-client/1.1"
37.187.139.66 - - [06/May/2018:03:10:41 +0000] "GET /0f0qa0a/captive_portal.html HTTP/1.1" 404 152 "-" "Go-http-client/1.1"

These Nginx logs were pulled from a fresh virtual private server that I created with a new-to-me IPv4 address. If you’re curious, my original Server Logs Explained post contains a breakdown of the log format I’m using, but I’ll cover what these log excerpts mean in this post as well.

Essentially, two completely different IP addresses performed an HTTP GET request for the same resource, /0f0qa0a/captive_portal.html. Unable to provide this mysterious file, my server responded with 404 Not Found. This pair of log entries became much more interesting when I noticed that my server kept getting the same two requests from the same two remote IP addresses every few seconds.
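To make the log format concrete, here's a short standard-library Python sketch that parses one of the entries above. The regular expression is my own assumption based on the default Nginx "combined" log format, so adjust it if your log_format directive differs:

```python
import re

# Default Nginx "combined" log format (an assumption; your log_format may differ):
# $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent
#   "$http_referer" "$http_user_agent"
LOG_PATTERN = re.compile(
    r'(?P<addr>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+) "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('167.114.0.63 - - [06/May/2018:03:10:08 +0000] '
        '"GET /0f0qa0a/captive_portal.html HTTP/1.1" 404 152 "-" "Go-http-client/1.1"')

match = LOG_PATTERN.match(line)
fields = match.groupdict()
print(fields["addr"])    # 167.114.0.63
print(fields["path"])    # /0f0qa0a/captive_portal.html
print(fields["status"])  # 404
print(fields["agent"])   # Go-http-client/1.1
```

The named groups make it easy to aggregate requests by client address or user agent, which is exactly how the repeating pattern in these logs stands out.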

Some Detective Work

First of all, these log entries are completely harmless. Anyone can request any random page from a web server, and the server should return a 404 response if the page does not exist. At this point, curiosity is the only reason to continue exploring the source of the two requests.

A cursory WHOIS lookup on both IPs reveals that they are owned by OVH Hosting, a French company that provides cloud-based hosting services. It’s similar to the other cloud hosting companies I mentioned at the beginning of this post. It isn’t a big leap to assume that both IPs belong to virtual servers hosted by OVH, so with that assumption in mind, let’s move forward.

Next, a reverse DNS (rDNS) lookup on each IP address yields the following interesting results:

167.114.0.63 maps to prometheus-nodes-ca-ovh.afdevops.com
37.187.139.66 maps to prometheus-nodes-eu-ovh.afdevops.com

Right away, there’s “ovh” in the hostnames for the two servers, which seems to confirm that OVH Hosting was a good guess. There’s also something else interesting about these results: 167.114.0.63 is a Canadian IP address, and it has “ca” in its hostname. On top of that, 37.187.139.66 is a French IP address, and it has “eu” in its hostname. It’s beginning to look like these two servers are part of some company’s public-facing server infrastructure.
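As an aside, reverse DNS works by querying a PTR record whose name is built from the IP address with its octets reversed. Python's standard library can show how that query name is constructed (the actual lookup, e.g. via socket.gethostbyaddr, requires network access, so it's only sketched in a comment here):

```python
import ipaddress

# A PTR query reverses the octets of an IPv4 address and appends
# the special "in-addr.arpa" zone.
addr = ipaddress.ip_address("167.114.0.63")
print(addr.reverse_pointer)  # 63.0.114.167.in-addr.arpa

# The actual lookup (requires network access and a published PTR record):
# import socket
# hostname, _, _ = socket.gethostbyaddr("167.114.0.63")
```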

Let’s look at the far left section of the hostnames, prometheus-nodes. There’s no law that dictates how you should name your servers, but it is pretty common to give them logical names that correspond to the software that runs on them. With that in mind, what is “prometheus”?

Prometheus is a real piece of software. Its GitHub page describes it as “a systems and service monitoring system”, which would explain the persistent requests in my server logs. Monitoring solutions work by constantly checking a service, evaluating the responses they receive, and notifying administrators if something looks wrong. It seems reasonable to conclude that some important service was being hosted by the previous owner of my IP address, and someone forgot to reconfigure their monitoring solution after decommissioning the server.
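For context, this is roughly what a forgotten scrape target might look like in a Prometheus configuration file. Everything here is hypothetical — the job name, interval, and target IP are invented for illustration, with the target standing in for the recycled address my server inherited:

```yaml
scrape_configs:
  - job_name: "captive-portal"    # hypothetical job name
    scrape_interval: 15s          # Prometheus polls every target on a schedule
    metrics_path: /0f0qa0a/captive_portal.html
    static_configs:
      - targets: ["203.0.113.10:80"]  # the recycled IP -- now pointing at my server
```

Until someone removes or updates the stale entry, Prometheus will dutifully keep requesting that path on every scrape interval, which matches the steady drumbeat of 404s in the logs above.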

There’s still more we can learn from the reverse hostnames. A simple Google search for one of the hostnames turns up a GitHub issue from February that contains enough detail to confirm that the server runs Prometheus. On top of that, the user who opened the issue claims to work for a company called “AnchorFree”.

AnchorFree — could that have anything to do with the afdevops.com portion of the hostnames? Even though there’s no public-facing website associated with that domain, Googling for “afdevops.com” turns up a user profile on Docker Hub. Guess what’s listed in the bio for that user? Anchorfree, Inc.

There’s something else, too. A WHOIS lookup on “afdevops.com” reveals that it is registered under the real name of one of the co-founders of AnchorFree. That was easy. Too bad GDPR is killing WHOIS at the end of this month. (Ed: This was accurate when this post was drafted in 2018.)

Now that we know that both servers are owned by AnchorFree, let’s figure out what AnchorFree actually is. Wikipedia and AnchorFree’s actual website both confirm something interesting — AnchorFree is the parent company behind Hotspot Shield, a fairly well-known “freemium” VPN app.

The Conclusion

Nothing in this post is particularly revealing, but it’s interesting to consider what happens when IP addresses owned by cloud service providers get reused. If you incorporate cloud service provider IP addresses into your server infrastructure, it’s important to remember that those addresses may be recycled for other customers in the future. If you set up important services or access control lists based on IP addresses you don’t own, it’s possible to introduce problems that won’t become apparent until later. In cases like this, it might be as simple as having your monitoring solution check the wrong IP address for a few days.

On the other hand, consider what might happen if you decommissioned a server but forgot to remove its forward DNS entry. If a malicious actor gained control of its old IP address, they could set up a phishing website that would appear to be hosted on your domain. Change control and detailed documentation are important when it comes to using public cloud services.

Posted on by Arnon Erba in How-To Guides

Let’s Encrypt has steadily improved since its public debut in late 2015. Certbot, the most popular Let’s Encrypt client, is available for a wide variety of Linux distributions, making Let’s Encrypt an easy integration for most common web server configurations. However, because of this broad support, and because Certbot offers many internal options, there are several different ways to integrate Certbot with Nginx.

If you run Certbot with the --nginx flag, it will automatically make whatever changes are necessary to your Nginx configuration to enable SSL/TLS for your website. On the other hand, if you’d prefer to handle the Nginx configuration separately, you can run Certbot with the --webroot flag. In this mode, Certbot will still fetch a certificate, but it’s up to you to integrate it with Nginx.

Once you’ve obtained certificates from Let’s Encrypt, you’ll need to set up a method to automatically renew them, since they expire after just 90 days. On Ubuntu 18.04, the “certbot” package from the Ubuntu repositories includes an automatic renewal framework right out of the box. However, you’ll also need to force your web server to reload its configuration so it can actually serve the renewed certificates (more on this below). The packaged renewal scripts on Ubuntu won’t restart Nginx unless you used the --nginx flag to request certificates in the first place. If you’re using --webroot or some other method, there’s an additional important step to take.

Automatically Restarting Nginx

On Ubuntu 18.04, Certbot comes with two automated methods for renewing certificates: a cron job, located at /etc/cron.d/certbot, and a systemd timer. The cron job is scheduled to run every 12 hours but exits immediately if systemd is active. On systemd-based systems, the systemd timer (visible in the output of systemctl list-timers) works in tandem with the certbot systemd service to handle certificate renewals instead.

Instead of modifying the cron job or the systemd service, we can change Certbot’s renewal behavior by editing a config file. Add the following line to /etc/letsencrypt/cli.ini:

renew-hook = systemctl reload nginx

This will cause Certbot to reload Nginx after it renews a certificate. With the renew-hook option, Certbot will only reload Nginx when a certificate is actually renewed, not every time the Certbot renewal check runs.

A Little Background Information

If you’re new to Let’s Encrypt, and you’re wondering why you need to automatically renew your certificates and restart your web server when you get new ones, it’s a good thing you’re here. While “traditional” SSL/TLS certificates are manually requested and can be valid for up to two years, certificates from Let’s Encrypt are only valid for 90 days. In their blog post, the Let’s Encrypt team explains their reasoning behind such short certificate lifetimes: they limit the time period for damage to be caused by stolen keys or mis-issued certificates, and they heavily encourage automation, which is key to the success of the Let’s Encrypt model.

This means that you’re going to need to automatically renew your certificates in order to take full advantage of Let’s Encrypt. Fortunately, since this is how Let’s Encrypt is designed to work, auto-renewal functionality is built directly into Certbot, the recommended ACME client for Let’s Encrypt.

A slightly less obvious question is why you’d want to automatically restart your web server as well. The answer is simple: web servers, such as Apache or Nginx, don’t read your SSL/TLS certificates directly from disk every time they need them. Instead, they load them into memory along with the rest of the web server configuration. This is great, and perfectly normal, since reading the certificates from disk would be horribly inefficient. However, it means that updating (or renewing) a certificate with Let’s Encrypt won’t directly change the certificate that Apache/Nginx serves when a page is requested. Instead, the web server must be restarted in order to load the new certificate into memory.


Posted on by Arnon Erba in Meta

When I left Blogger for WordPress in 2016, I made a deliberate choice to leave some features behind. Part of the allure of WordPress was the opportunity to explore PHP and to take as much ownership as I could over the inner workings of my blog. Blogger, by nature, never provided the level of customizability I was searching for, but the chance to write my own custom WordPress theme was too appealing to pass up. Suddenly, I had the ability to customize, rewrite, and (more often than not) break almost everything that governed how my content was displayed on the Web.

Taking control did not come without tradeoffs. As previously mentioned, I never implemented comments, and until my massive update in May of this year I had some minor SEO problems like poorly apportioned <h1> tags. However, I had a platform that allowed me to experiment with web technologies on a much larger scale than before.

That was fine, because that was the point. I don’t blog for ad revenue, or to become famous (though I have my doubts that running a technology blog is really the quickest path to Internet fame). Regardless, in the spirit of constant improvement, I’ve been slowly adding features to my blog as I feel it needs them.

Once I finished some SEO work and some performance optimization, the next task was to fix the comments. I’ve implemented the generic WordPress comments template and enabled the discussion section for everything but my archived posts. In fact, I can tell it’s working, because I’ve already blocked dozens of Russian spam comments. Please enjoy.