How to improve PageSpeed and WPT metrics on WordPress

In a previous post, I talked about changes I had to make to the way DNS is handled on CloudFlare. The whole reason for using CloudFlare in the first place was to protect the site from DDoS attacks and to speed up its responsiveness. CloudFlare has edge servers all over the world, which allows you, the site owner, to provide your users with a faster experience on your site by reducing the latency between them (the users) and your application.

With that said, there are two popular web performance tools to keep in mind: PageSpeed and WebPage Test. Your goal is to score as high as possible on both. PageSpeed gives you a numerical score up to 100, reported separately for mobile and desktop. WebPage Test (WPT) gives a letter grade for various metrics, A being the best and F being the worst.

My testing began with CloudFlare enabled and with this metrics baseline:

WebPage Test:
  • First Byte Time: F
  • Keep-alive Enabled: A
  • Compress Transfer: A
  • Compress Images: A
  • Cache Static Content: F
  • Load Time: 3.0s
  • First Byte: 1.6s
  • Start Render: 3.0s

PageSpeed:
  • Mobile: 25
  • Desktop: 53

In my experience, in order to improve the metrics, I had to use two different solutions: local caching and JS/CSS optimization.

The WP ecosystem is full of plugins, and many of them address the issues above. The trick is to find the simplest, lightest plugin that can do the job. To address the WPT metrics I chose Simple Cache; to address the PageSpeed metrics I used Autoptimize.

Simple Cache, like the name implies, is a simple caching plugin. There is very little in the way of settings: you simply enable it and that’s it. By enabling Simple Cache I cut First Byte Time to a quarter of its original value (1.6s to 0.4s) and roughly halved Load Time and Start Render time.

My new WebPage Test metrics looked like this:

WPT Before Simple Cache:
  • First Byte Time: F
  • Keep-alive Enabled: A
  • Compress Transfer: A
  • Compress Images: A
  • Cache Static Content: F
  • Load Time: 3.0s
  • First Byte: 1.6s
  • Start Render: 3.0s

WPT After Simple Cache:
  • First Byte Time: B
  • Keep-alive Enabled: A
  • Compress Transfer: A
  • Compress Images: A
  • Cache Static Content: C
  • Load Time: 1.5s
  • First Byte: 0.4s
  • Start Render: 1.1s

My PageSpeed metrics remained fairly static, hovering around 25-29 on mobile and 55-61 on desktop. That’s where Autoptimize came in. Autoptimize has more features than Simple Cache, but fewer than most other tools that perform the same functions (which is why I like it). I spent a lot of time playing with Autoptimize, and the simplest, most effective settings were as follows:

Main (checked):
  • Optimize HTML
  • Optimize JS Code
  • Aggregate JS Files
  • Optimize CSS Code
  • Aggregate CSS Files
  • Save aggregated script…
  • Also optimize for logged in user

Extra (checked):
  • Remove Google Fonts
  • Remove query strings …

With those settings I was able to improve PageSpeed’s mobile score by about 150% (25 to 63) and the desktop score by almost 90% (53 to 99).

PageSpeed Before Autoptimize:
  • Mobile: 25
  • Desktop: 53

PageSpeed After Autoptimize:
  • Mobile: 63
  • Desktop: 99

One thing I could not solve without breaking ads was PageSpeed’s mobile recommendation regarding Preload key requests. If you use ads, the Google script you need to include on your pages cannot be modified to satisfy PageSpeed’s recommendation. I tried various options, but each one, while it resolved the suggestion in PageSpeed, resulted in broken ads on my pages. Somehow, this is not an issue on desktop.

Overall, using those two simple plugins I was able to improve the performance of the website by an average of 2x.

Let me know what you think and if it helps you make your site more responsive and better to use for your users.

Premature end of file errors when using CloudFlare and how to fix them

On a separate project, I had been trying to make use of CloudFlare for my CDN and caching needs. CloudFlare is easy enough to set up and, in general, a really good tool. After I was done with the setup, I noticed a speed improvement and all was well.

That was until I noticed my backend services were not working properly. The site is updated through a backend service which interacts with an API. The error I was getting was the following:

[Fatal Error] :1:1: Premature end of file.

Followed by:

redstone.xmlrpc.XmlRpcException: The response could not be parsed

And with this sprinkled about:

lineNumber: 1; columnNumber: 1; Premature end of file.

I knew the problem was CloudFlare-related because if I disabled the service, my backend services worked fine. One major problem with this error was that I could not access the logs on CloudFlare (those seem to be part of an upgraded tier), so I could not see what CloudFlare was receiving. I also could not get much more from my backend logs; I was using a library and that’s all it was exposing.

With that in mind, I suspected the issue was around CloudFlare’s additional hop, its caching rules, and/or its security settings. Initially I thought there was nothing I could do about the extra hop, so I dug into the latter two: I created a new page rule which excluded all of the caching and security features from the API endpoint. I excluded everything. However, the issue remained.

The next thing I did was to create a firewall rule to achieve the same, but this also didn’t work.

Finally, after much frustration, I noticed that in the CloudFlare DNS settings, some entries were gray and others orange (the CloudFlare color), and that gray meant only DNS was passing through, while orange meant both DNS and the CDN were enabled. Furthermore, if I enabled CloudFlare but set the main DNS entry to gray (DNS pass-through only), my backend service worked fine (which I think is the same as enabling/disabling at the main control panel).

That’s when I realized I could create a separate A record to use as a proxy for the API endpoint that CloudFlare’s additional hop was breaking. I re-enabled CloudFlare in the standard way (orange) on the main A record, created a new A record without the CDN (gray), and created a subdomain at my hosting service pointing to it. This way I was able to enable all of CloudFlare’s features on the main domain, while still being able to reach the API endpoint through the newly created A record.

With the above in place, the system is working as expected. Users going through the main domain get the benefits of CloudFlare, while my backend services still get undisturbed access to the APIs.

Backups are expensive, even when free. Be careful

The other day I looked at backing up our extensive local home storage. We don’t have the most, but we definitely have more than most. In aggregate we have about 8TB of local storage. For photos and important docs we use Dropbox, but that only gives me 1TB of online backup.

So when I found out about Backblaze’s unlimited backup for only $60 per year, I jumped at the opportunity to try it for free. However, there was one issue: I am on an asymmetric internet connection, and at a maximum upload speed of 6Mbps, my backup of 8TB would take a long time, a very long time.

Like all journeys, this one started with a first step: the first upload. The setup of Backblaze was easy and it all went smoothly. Within minutes my backup was humming along and I “set it and forget it”.

A few days later I checked my bandwidth consumption and noted a huge uptick. I hadn’t calculated how much it would cost me in bandwidth since at 6Mbps I thought it was too slow to have an impact. Well, I was very wrong.

It turns out that a steady 6Mbps connection can consume, on average, about 60GB (yes, bytes) of data per day. With a cap of 1TB per month, you can burn through your entire month’s allowance, just backing up at a mere 6Mbps, in about half the month.

What’s the moral of the story? Two things: your monthly cap includes uploads as well as downloads, and even a slow upload connection, when used steadily, will consume a lot more data than you might expect.

Considering my 1TB cap and my average consumption of about 500GB per month before using Backblaze, I need to throttle my Backblaze upload to about 1.5Mbps and hope not to go over my cap.

IFTTT Sample Service in Java

The other day I looked at IFTTT for tying a couple of services together. Though IFTTT had the services I needed, they lacked the specifics I needed, which is what got me looking into creating my own service. There is some really good documentation on IFTTT, but I was not able to find a Java analog.

An IFTTT service has a few requirements:

  • A trigger
  • An action
  • A status endpoint
  • A test endpoint

These APIs also require authentication via a service key which you load into your IFTTT service and include in your requests as a header. To learn more visit:

With the above in mind, I created this Jersey-powered Java analog of their Ruby sample. It is set up to be deployable on Heroku’s free dyno tier, and if you use Ngrok you can even stand up the service locally and have IFTTT communicate with it.
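As a quick smoke test of such a service, you can hit the status endpoint with the service key header that IFTTT sends. The header name and the /ifttt/v1/status path follow IFTTT’s service API convention; the localhost URL and port are assumptions for a local Ngrok-style run:

```shell
# Prints the HTTP status code returned by the IFTTT status endpoint.
# IFTTT passes your service key in the IFTTT-Service-Key request header.
check_status() {
  curl -s -o /dev/null -w '%{http_code}' \
    -H "IFTTT-Service-Key: $1" \
    "http://localhost:8080/ifttt/v1/status"
}
# check_status "$IFTTT_SERVICE_KEY"   # expect 200 with a valid key
```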

You can find the service here:

xmlrpc.php 403 Forbidden

The other day I encountered this error on one of my domains while working on a small project. The endpoint used to work (I had used it for testing as recently as a month ago), so when it stopped working I was annoyed.

There are a few reasons for this to happen and you have probably already tried them, though I will still enumerate them here:

  1. One of your plugins, in particular security plugins, is blocking access to the endpoint (you can check by looking at your .htaccess file or disabling the security plugins)
  2. The file permissions for xmlrpc.php are incorrect (they should be 644)
  3. The .htaccess file has become corrupt (you can check by renaming the current file, then going to the dashboard of the WP instance: click on Settings, then Permalinks, and click on Save Changes. This will re-create the .htaccess file. If it works, your .htaccess file was indeed corrupt; otherwise, just delete the newly created file and revert to the original)

Obviously, none of these were the issue for me. The problem turned out to be that my hosting provider was filtering on xmlrpc.php and returning a 403.

You can confirm this by doing the following:

curl -v https://your-domain/xmlrpc.php

If access to the xmlrpc.php file is being blocked by your hosting provider the response will look like this:

*   Trying 198.54...
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
*  SSL certificate verify ok.
> GET /xmlrpc.php HTTP/1.1
> Host: <your domain>
> User-Agent: curl/7.54.0
> Accept: */*
* HTTP 1.0, assume close after body
< HTTP/1.0 403 Forbidden
< Cache-Control: no-cache
< Connection: close
< Content-Type: text/html
<html><body><h1>403 Forbidden</h1>
Request forbidden by administrative rules.
* TLSv1.2 (IN), TLS alert, Client hello (1):
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, Client hello (1):

The giveaway is in the response headers.

< HTTP/1.0 403 Forbidden 
< Cache-Control: no-cache 
< Connection: close 
< Content-Type: text/html

If the issue is on your WP instance configuration, the response headers will include Apache as the server (like this):

< HTTP/2 405
< content-type: text/plain;charset=UTF-8
< date: Wed, 04 Apr 2018 07:37:10 GMT
< server: Apache
< x-powered-by: PHP/7.0.26
< allow: POST
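This check can be scripted. The sketch below uses the blocked response’s headers as canned input; in practice you would capture the real headers with the curl command shown in the comment:

```shell
# In practice, capture live headers with:
#   curl -sI https://your-domain/xmlrpc.php -o /dev/null -D headers.txt
# Here we use the hosting-provider 403 from above as sample input:
cat > headers.txt <<'EOF'
HTTP/1.0 403 Forbidden
Cache-Control: no-cache
Connection: close
Content-Type: text/html
EOF

# A "Server: Apache" header means the response came from your WP stack;
# its absence suggests the request was filtered upstream by the host.
if grep -qi '^server: apache' headers.txt; then
  echo "Blocked by your WP stack (check plugins / .htaccess)"
else
  echo "Blocked upstream by the hosting provider"
fi
```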

I contacted my hosting provider and indeed they were filtering on xmlrpc.php. I am not recommending that you use or avoid xmlrpc; I am simply demonstrating the steps to troubleshoot the error.

Using Let’s Encrypt SSL certs on your site


Since the early 2010s, there has been a strong push towards security and encryption on the internet. To encourage encryption, Google uses HTTPS as a ranking signal, so an encrypted site can be prioritized higher in search results.

In general, setting up an SSL certificate for your site is not that difficult, as long as you’re willing to let your hosting provider do that work for you and pay for their work.

For me, at 1and1, it costs around $70 per year per domain for multi-subdomain SSL encryption. They have a cheaper, single-domain certificate for $30 per year. Now you might think neither $70 nor $30 seems that high. And that’s true, if you only have 1 or 2 domains. But what if you have 5 domains and want encryption for their subdomains? Now you are looking at $350 per year just for encryption.

This is the reason I looked into Let’s Encrypt certificates. They are free (though I strongly recommend you donate to their efforts), and while not all hosting providers make it easy to use Let’s Encrypt certificates, you can use them pretty much anywhere.


Important note: these instructions are for obtaining an SSL cert on a machine other than the host it will serve. If you run your own server, whether a VPS, a cloud instance, or an actual physical host, you should follow certbot’s standard instructions instead:

However, if you are like me, on a shared hosting contract where you cannot install certbot on the host, and therefore need to obtain the certificates on a different machine, please follow these instructions:


  • Begin the process of getting the certificates by using --manual, so the certificates are not installed locally when finished (you may need to run it with sudo, as certbot creates a log under /var/log/…).
sudo certbot certonly --manual
  • Enter the appropriate email address
  • Accept the terms of service
  • Decide whether or not to share your email address
  • Enter the domains for which you want to create a certificate. You can enter as many as you want; they just need to be comma-separated.
  • Answer Yes to your IP address being logged
  • For each domain you entered in the step above, you will need to validate ownership. For this step certbot will ask you to create a file under
  • So if you entered 2 entries (domains or sub-domains) above, you will need to create 2 files under the location above. Below is an example:
Create a file containing just this data:

And make it available on your web server at this URL:
  • In the example above you would do the following:
SSH to the host of your application or site
Navigate to /home/<your-username>/www/.well-known/acme-challenge
echo "xqIp_322onZb-HoSQOV2WOBxVjVbj9LBUEaEQ.F13uE1z6yJ7yryfWPyI_Wt3DrKfeCTp8UOVIfE" > xqIp_KmB32Zb-HoSQOV2MBxVjVbj9LBUEaEQ
  • Do that for all of the entries.
  • If the process is successful you should get this:
Press Enter to Continue
Waiting for verification...
Cleaning up challenges

- Congratulations! Your certificate and chain have been saved at:
Your key file has been saved at:

Your cert will expire on 2018-05-10. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt:
Donating to EFF:
  • At this point the certificates have been created and are ready for use. You will now need to copy them over to your host. The certificate is in the fullchain.pem file and the private key in the privkey.pem file. In my case, I had to copy and paste the contents of both files into my hosting provider’s SSL manager tool.
  • To view the certificate do this (note these are only examples)
sudo cat /etc/letsencrypt/live/
  • Note there are “two” certificates in fullchain.pem; you only need to copy and paste the first one. Also, make sure to include the “Begin” and “End” certificate lines (copy lines 2 – 19)
  • The same will apply to the private key under privkey.pem.
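If you just want the certificate’s summary rather than the full PEM dump, openssl can print the subject and expiry date. The path in the example follows certbot’s default layout, with example.com standing in for your domain; the helper function is just for illustration:

```shell
# Print a certificate's subject and expiry date without the PEM body
inspect_cert() { openssl x509 -in "$1" -noout -subject -enddate; }

# Example (certbot's default live path; adjust the domain):
# sudo inspect_cert /etc/letsencrypt/live/example.com/fullchain.pem
```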

That’s it, you should now have FREE SSL encryption working on your host and you have saved enough money for a well-deserved cup of coffee.



SainSmart DDS-120 Oscilloscope on MacOS


I’m working on a project which requires me to use an oscilloscope. I had never had a need for such a device, so I was very surprised when I saw used oscilloscopes priced at around €300.

As a result, I looked at my options and was happy to find there is a plethora of devices known as USB oscilloscopes. These are hobby-level oscilloscopes, which are less expensive because they save money on the computing and display components of the device.

This oscilloscope in particular is quite capable. At about $70, my SainSmart DDS-120 included the scope, two channels, two probes, a logic analyzer and an external trigger. You can find the specs of the scope here.

Anyway, given that I only have Apple computers and the software for the scope runs on Windows, I thought I would share what I did to make it work.


At this point, it should be fair to assume you have access to the scope and you just need to get it working. There are two options for getting Windows to run on your Mac: dual-booting (reliable but least convenient) or a virtual machine (less reliable, though not unreliable, and most convenient). I chose the latter. Of the many virtual machine options, I chose VirtualBox (VBox) because it’s free and works pretty well. With that in mind, here are the steps I followed to get the scope working on MacOS.

  1. Download and install the latest version of VirtualBox. I used version 5.2.x
    1. On MacOS 10.13, I got an error message at the end of the installation saying I needed to allow the program to run in security & privacy preferences pane. After I did this, the installation said there was an error and it had not installed correctly. I simply re-installed it, and the installation completed without any issues (not even the security & privacy warning).
  2. Download and install the latest VirtualBox Oracle VM VirtualBox Extension Pack. This will be in the VirtualBox download page and cannot be installed until AFTER you have installed VirtualBox. To install just double-click on the file and it will automatically pickup VirtualBox and install itself there.
  3. Once installed, you need to create a Win7 Virtual Machine (VM). To do this:
    1. Click on “New”
    2. Click “Create”
    3. Click “Create” again and you are done
  4. At this point you have an empty VM. You will need a Win7.iso file. These are easy enough to find on the internet. If you need one, message me directly and I can point you in the right direction. Assuming you have an ISO image, select the VM and click on Start –> Normal Start. Since you need to load the OS into the VM, you will get an error message (though instead of “Windows 7 Ultimate…” yours will likely say “Empty”). Click on the folder icon next to the drop-down, select your Win7.iso file, click Start and follow the instructions.
  5. After you have finished installing Win7 you will need to setup Scope software and calibrate the probes.
    1. Enable the scope’s USB to connect through your Mac onto the VM by following these steps:
      1. Start the VM
      2. Connect the Scope to the Mac using the USB cable
        1. Your Mac should NOT pick up the scope as it’s not compatible, so don’t worry if nothing happens on your Mac (the host)
      3. Right-click on the VM and choose settings. Under settings click on “Add Filter” and select BUUDAI USBxxx. Make sure to select it and click OK.
      4. The VM will inform you that it’s installing the necessary drivers, and it should pick the scope up automatically going forward.
    2. Install the software. The software I used is here: Software_V1.5.0. The zip file includes everything I needed to get it to work. Just look for DDS120.exe inside the folder and that’s it 🙂
      1. You can check installation by clicking on “Start” on the bottom right of the software screen.
    3. Now you need to calibrate the probes.
      1. To calibrate the probes, set them to 10X, connect them to the scope, and use the signal emitter to calibrate the two channels.
      2. Inside the Scope software do the following:
        1. Select Channel 1 and make sure it’s On and set to 10X
        2. Set the time to 1ms
        3. Set Channel 1 Voltage to 50mV
        4. Optionally, you can zoom in to get a closer look at the waves (but this is not necessary)
      3. You want the signal to be as square as possible. You can adjust it by using a small screwdriver to calibrate the wave shape. The probes are pretty good, but not perfect. So don’t worry if you cannot get the shape of the wave to be perfectly square.

That’s it, you are done and ready to begin using your new USB scope.

Plex Media Server auto restart on crash (MacOS)


If you use the Plex Media Server on your home PC to serve your media content, then you know how important it is to keep that service up and running at all times. For this reason I found it frustrating when my iOS Plex Player kept crashing the server.

For some reason, the iOS Plex app would crash the server anytime I tried to play a video using automatic quality throttling. Anyway, after I figured out it was the iOS app, I began looking into ways to make sure the server would come back up in the future, even if it crashed.


Setting up a process to auto-restart is simple. You just need to create a LaunchAgent and have LaunchD (MacOS’s agent and daemon controller) take care of the rest:

  1. Uncheck Plex Media Server’s Open at Login option
    1. If your Plex Media Server is already running, go up to the menu bar and make sure to uncheck Open at Login. Otherwise, you could end up with duplicate processes.
  2. Create a file like this one (if you are using Plex Media Server, you can just use that file) and make sure it’s named com.plexapp.plexmediaserver.plist
    1. Lines 5 to 11 tell launchd to start the program when the computer starts and to restart it (KeepAlive) if it crashes.
    2. Lines 11 and 12 give a name to the LaunchAgent.
    3. Lines 16 to 18 are the program’s parameters. open is a built-in MacOS program for opening files, URLs and applications. -g tells open to start the program in the background. And /Applications/Plex Media Server.app is the path to our application 🙂
  3. Place the file under the LaunchAgents folder in the user’s Library (here: ~/Library/LaunchAgents/). You should end up with this path: /Users/<your username>/Library/LaunchAgents/com.plexapp.plexmediaserver.plist
  4. Load the LaunchAgent into LaunchD like this: launchctl load ~/Library/LaunchAgents/com.plexapp.plexmediaserver.plist
    1. That tells LaunchD to look at the configuration file in the plist file you have given it and execute it
    2. You can also unload it, which means to remove the plist file from LaunchD’s queue of things to control. launchctl unload ~/Library/LaunchAgents/com.plexapp.plexmediaserver.plist
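In case the linked file is unavailable, here is a minimal sketch of what such a plist contains (the line numbers referenced in the steps above belong to the original file and may not match this sketch exactly):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Start at login and restart on crash -->
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <!-- The LaunchAgent's name -->
    <key>Label</key>
    <string>com.plexapp.plexmediaserver</string>
    <!-- The program and its parameters -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/open</string>
        <string>-g</string>
        <string>/Applications/Plex Media Server.app</string>
    </array>
</dict>
</plist>
```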

That’s it. From now on, when you log in, the computer will start Plex Media Server, and if it crashes it will automatically restart.


If you run into issues, you can troubleshoot by looking at the system logs and loading or unloading the plist file.

The system.log records any error messages generated by LaunchAgents (not just the one you just created). You can view the system log like this:

  • To just view what is there now: cat /var/log/system.log
  • To scroll through what is there now: cat /var/log/system.log | more (and press the spacebar to progress)
  • To view the most recent changes in real-time: tail -f /var/log/system.log
    • I find this to be the most helpful

With access to the system log, you can now test your plist by:

  1. Unloading the plist
  2. Editing the plist
  3. Re-loading the plist
  4. Checking the system log


MacOS installer language setting

The other day I purchased an MBP13 from eBay. It was a great deal and it came from Italy. I should probably elaborate on this point, as it was kind of unique. I live in Spain, but I’m originally from the US; as such, I prefer a US keyboard (the Spanish, and even the UK English, keyboard layouts are different, trust me). Anyway, on eBay I found an MBP13 with a US keyboard layout that was originally purchased in Japan and was being sold by a person in Rome, Italy, to an American living in Barcelona, Spain… funny, no?

Anyway, I got the laptop and it was in very good shape. However, like all used machines, I needed to reset it. The catch: the installer’s language (not the OS) was Italian.

I looked online for changing the language of MacOS and I was always directed to the System Preferences –> Language and Region –> Set to English… change, but that is AFTER you have installed the OS.

So I gave my bad Italian a try and I ended up with a bad disk partition and a bad install.


I then figured out the setting. To change the language of your installer (or BIOS, as some people refer to it), do this:

  1. Restart the machine by holding down the power button until it shuts off (around 5-10 seconds)
  2. Press the power button again
  3. Immediately after pressing the power button, hold down Command and R (Command + R) and keep holding them down
  4. That will start the recovery cycle and it will try to connect to the internet. Wait for that to finish
  5. You’ll arrive at the Installer.
  6. Regardless of the language, just remember to click on the second menu (not counting the Apple logo); the first option will be Change Language. Click it and you are set 🙂

Do you need a US phone number even while not in the US?

If you travel a lot and/or live outside of the US and require the appearance of being in the US, then Google Voice is for you.

Google Voice is a VoIP service which works very similarly to normal VoIP services, plus you get a pretty reliable texting service too.

You can find more information on Google Voice here. It lets you appear as if you are calling from a US number even while traveling.

The Google Voice app integrates very well with Android phones; on iPhones, not so much. On iPhones you will need to use two apps (the Google Hangouts Dialer and the Google Voice app) for the individual functions, much like when you are traveling.

There are a bunch of really good articles out there on how to use Google Voice and the Google Voice dialer, so I will not plagiarize them. Instead, here are some useful links.