Categories
Web Dev

Google PageSpeed Insights: Optimization scores improve but FCP and DCL speeds do not change

Changes you make today to improve your Google PageSpeed Insights report will take 30 days to be reflected in the FCP and DCL scores. Even though Optimization scores improve immediately after you change your website, FCP and DCL are calculated from crowdsourced data collected over a rolling 30-day period by the Chrome User Experience Report (CrUX). For example, if your FCP averaged 11.1 seconds before you made site changes and averages 1.1 seconds after them, the report blends the two together until 30 days have passed. Let's say you check your website 15 days from now; the FCP score will be about 6.1 seconds. Once 30 days have passed, your FCP average will be 100% reflective of the changes you made 30 days before.
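The blending described above is simple weighted-average arithmetic; a quick sanity check (a rough model that ignores day-to-day variation):

```shell
# 15 days at the old 11.1 s average plus 15 days at the new 1.1 s average,
# blended over the 30-day window:
awk 'BEGIN { printf "%.1f\n", (15 * 11.1 + 15 * 1.1) / 30 }'
```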

How long should I wait for the FCP and DCL scores to update?

Wait 30 days to re-assess your FCP and DCL score speeds in the Google PageSpeed Insights report.

Where do I go to test my webpages with Google PageSpeed?

Link to test your website on Google PageSpeed Insights: https://developers.google.com/speed/pagespeed/insights/

How can I contribute my browser usage to the Chrome User Experience Report?

You can contribute your web usage to the Chrome User Experience Report by enabling the “Automatically send usage statistics and crash reports to Google” option in Google Chrome. To enable it, go to “Settings”, then click “Advanced”. Under the “Privacy and security” section, toggle on “Automatically send usage statistics and crash reports to Google”, then restart your browser. Note: the statistics do not include any personal information, but crash reports sent to Google at the time of a crash may contain web page URLs or personal information that was in the browser at that time.


Categories
Apache LAMP Lighttpd Linux News Programming Web Dev

Let's Encrypt on Ubuntu using Apache, Nginx, or Lighttpd Cheat Sheet

If you are using Let's Encrypt (www.letsencrypt.org) certificates on your Ubuntu servers, you may find the following information useful if you work with Apache, Nginx, or Lighttpd.

Installing Let's Encrypt on Ubuntu 14.04 (or older)

Reference: https://www.vultr.com/docs/setup-lets-encrypt-with-apache-on-ubuntu-14-04

apt-get install git
git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
/opt/letsencrypt/letsencrypt-auto

The third command sets up Let's Encrypt and installs any necessary dependencies, such as Python.

Ubuntu 16.04 install instructions

Reference: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04

apt-get install letsencrypt

Note: The remaining portion of this document uses the /opt/letsencrypt/letsencrypt-auto and /opt/letsencrypt/certbot-auto command line tools as installed on Ubuntu 14.04 or older. If you are using Ubuntu 16.04 or newer, simply run the commands letsencrypt and certbot without the full path and without the -auto suffix.

Set up your server so you can create certificates without having to stop your web server

I will not explain aliases in detail, but essentially you need to create an alias URI for /.well-known/. It can be shared among all of your virtual hosts. Let's Encrypt uses this folder to save the folders and files used in the confirmation process when creating new certificates and renewing existing ones.

Create a working folder for Lets Encrypt:

mkdir -p /var/www/letsencrypt/.well-known/

Then set up your web server to use this working folder for the .well-known URI path on your server.

Apache .well-known Example

Create a file called letsencrypt.conf with the following.

Alias "/.well-known/" "/var/www/letsencrypt/.well-known/"
<Directory "/var/www/letsencrypt/.well-known">
 AllowOverride None
 Options None
 Require all granted
</Directory>

If you place this file in the conf-enabled folder (/etc/apache2/conf-enabled/letsencrypt.conf), simply restart your Apache web server. Otherwise, you will need to make a symbolic link in your conf-enabled folder pointing to wherever you saved your letsencrypt.conf file.

Do not forget, whenever making configuration changes to Apache, to run the following before restarting your web server.

apache2ctl configtest

Nginx .well-known Example

Create a file called letsencrypt.conf with the following.

location ~ ^/\.well-known/(.*)$ {
 alias /var/www/letsencrypt/.well-known/$1;
 # No need to log these requests
 access_log off;
 add_header "X-Zone" "letsencrypt";
}

Then in your nginx.conf file, near the top of the server { } block, add the following line:

 include /path/to/your/letsencrypt.conf;

Do not forget, whenever making configuration changes to Nginx, to run the following before restarting your web server.

nginx -t

Lighttpd .well-known Example

Add the following to your lighttpd.conf file. Note the += appends to an existing set of alias URLs; if you have no alias.url values yet, simply remove the + but keep the =. See the Lighttpd mod_alias documentation to learn more about aliasing.

alias.url += ( "/.well-known/" => "/var/www/letsencrypt/.well-known/" )

Do not forget, whenever making configuration changes to Lighttpd, to run the following before restarting your web server.

lighttpd -t -f /etc/lighttpd/lighttpd.conf

Creating New Let's Encrypt SSL Certificates

You can now create Let's Encrypt certificates without having to temporarily shut down your web server.

/opt/letsencrypt/letsencrypt-auto certonly --webroot --manual-public-ip-logging-ok -d example.com --agree-tos -m you@example.com --text  -w /var/www/letsencrypt/

Replace example.com with your host name and you@example.com with your email address. If you want the certificate to also cover the www. version of your host name, include both names with separate -d options (e.g. -d example.com -d www.example.com); a certificate issued only for the bare domain does not automatically cover the www. subdomain.

Renew certs

/opt/letsencrypt/certbot-auto renew

certbot-auto uses the settings from when the certificate was created to renew it in the exact same way, so no extra parameters are necessary.

Reference: http://letsencrypt.readthedocs.io/en/latest/using.html#renewing-certificates

You can create a file in the /etc/cron.weekly/ folder to renew Let's Encrypt certificates weekly. Even though it runs weekly, Let's Encrypt is smart enough not to renew certificates until there are 30 days or less remaining. This gives you plenty of overlap in case a renewal attempt fails one week.

Example bash file /etc/cron.weekly/letsencrypt

#!/bin/bash
/opt/letsencrypt/certbot-auto renew

You may want to append > /dev/null 2>&1 to the end of the command line to suppress output from your cron tasks arriving via email.

Deleting SSL Certificates

When we no longer wish to maintain SSL for a host name, we need to delete the renewal config file:

rm /etc/letsencrypt/renewal/example.com.conf

This file includes information on where the SSL certs are located and the options used when the SSL cert was first created. This is not the same as revoking an SSL certificate; it simply stops the certificate from being renewed every 2-3 months.

SSL cert files are saved in the following path, in a folder for each host:

/etc/letsencrypt/live/

The specific SSL files are located within the host name folder:

/etc/letsencrypt/live/example.com/

Important reference to the pem files:

cert = /etc/letsencrypt/live/example.com/cert.pem
privkey = /etc/letsencrypt/live/example.com/privkey.pem
chain = /etc/letsencrypt/live/example.com/chain.pem
fullchain = /etc/letsencrypt/live/example.com/fullchain.pem

Note: “chain” is specifically for Apache's SSLCertificateChainFile setting, which is obsolete as of 2.4.8. This is a good thing, as Nginx and Apache now use the same fullchain and privkey files. Lighttpd is still not as simple; see the note below.

Though all files are saved in the PEM format, other systems and platforms use different file extensions rather than filenames to identify the different files. Here is a quick cheat sheet in case you need to map previous files to new files.

type (explanation) - letsencrypt - other examples
cert (public certificate) - cert.pem - example.com.crt, example.com.public.key
privkey (private key) - privkey.pem - example.com.key, example.com.private.key
chain (intermediate chain) - chain.pem - gd_bundle.crt, alphasslroot.crt, etc...
fullchain (concatenation of cert + chain) - fullchain.pem - fullchain.crt
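If you are ever unsure which PEM file is which, openssl can print a certificate's subject and expiry. A sketch using a throwaway self-signed certificate (in practice, point openssl at /etc/letsencrypt/live/yourhost/cert.pem):

```shell
# Generate a throwaway self-signed cert purely for demonstration:
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -nodes -subj "/CN=example.com" 2>/dev/null

# Show the subject and expiry date of a certificate PEM:
openssl x509 -in /tmp/demo.crt -noout -subject -enddate
```

The same -subject/-enddate inspection is handy for confirming a renewal actually produced a new certificate.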

Pem files and their use on Apache, Nginx, and Lighttpd

Apache 2.4.8 and newer

SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem

Note that there is no SSLCertificateChainFile option; SSLCertificateFile now takes fullchain.pem, which combines cert.pem with chain.pem.

Apache 2.4.7 and older

SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem

Note that we are not using fullchain.pem; instead we reference cert.pem and chain.pem on two separate configuration lines.

Nginx

ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

Lighttpd

Note: The cert and privkey need to be combined into a single file for Lighttpd.

cd /etc/letsencrypt/live/example.com/
cat privkey.pem cert.pem > ssl.pem

Then link to the certificates in your lighttpd config settings.

ssl.pemfile = "/etc/letsencrypt/live/example.com/ssl.pem"
ssl.ca-file = "/etc/letsencrypt/live/example.com/chain.pem"

If you are automating Lighttpd renewals, you will need to add an extra step that concatenates privkey.pem with cert.pem before restarting/reloading Lighttpd.
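One way to script that extra step (a sketch; combine_pem is a helper name I've made up, and the paths are the defaults shown earlier):

```shell
#!/bin/sh
# combine_pem DIR: rebuild Lighttpd's combined key+cert PEM from
# DIR/privkey.pem and DIR/cert.pem, writing DIR/ssl.pem.
combine_pem() {
  cat "$1/privkey.pem" "$1/cert.pem" > "$1/ssl.pem"
}

# In your weekly cron, after certbot-auto renew has run:
#   combine_pem /etc/letsencrypt/live/example.com
#   service lighttpd reload
```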

While searching the Internet for examples of setting up Lighttpd, I found some that set ssl.ca-file to fullchain.pem. Though this will also work, it is not technically correct, as ssl.pem already houses cert.pem.

Please feel free to leave a comment if you find an error and/or have additional notes which may be helpful for others.
Categories
WordPress

WordPress: Could not save password reset key to database

If you have the error “Could not save password reset key to database.”, more than likely you have one of the following issues:

  • Database server's disk is full (the values cannot be saved to the database because the file system is 100% full)
  • Database server is read-only (if you use a service such as Amazon Web Services' RDS, your website may be connecting to the read replica instead of the write server)
  • Database user cannot make changes to database tables (check that the database files can be written to by the user the database service runs as)
  • Temporary folder on your server is not writable (this is the least likely scenario)
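A few quick checks corresponding to the causes above (the mysql and ls commands are illustrative and shown commented; adjust host, credentials, and paths for your setup):

```shell
# 1. Is any filesystem on the database server 100% full?
df -h

# 2. Is the MySQL/MariaDB server you connect to read-only?
#    (1 means read-only, e.g. an RDS read replica)
#   mysql -h your-db-host -u youruser -p -e 'SELECT @@global.read_only;'

# 3. Can the database service user write to its data directory?
#   ls -ld /var/lib/mysql
```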

I found this problem is not documented well so I blogged my research. Hopefully this helps others searching for a solution.

Categories
News

True meaning of meta robots content = noodp

I see a lot of misunderstanding of the “noodp” value found in meta tags with the name “robots”.

<meta name="robots" content="noodp" />

Not all content values for a meta robots HTML tag are bad. Most robots content values do not block search engines from indexing pages, and noodp is one such example.

The equivalent to the above meta tag would be…

<meta name="robots" content="index, follow, noodp"/>

By not stating “noindex, nofollow”, it is implied that the page in question is to be indexed and followed.

What does noodp mean in a robots meta tag?

You are telling search engines to NEVER use the description for your webpage from the Open Directory Project, www.dmoz.org.

When you do not have “noodp” set, it is up to the search engine to decide whether to use your meta description, snippets from your page, or the description from the Open Directory Project.

If your webpage is not listed in the Open Directory Project, then this tag does not matter.

If your webpage is listed on the Open Directory Project, including this tag guarantees that search engines will not use the directory’s description over your meta description or content from your webpage.

More than likely the search engine will use your meta description or snippets from your page over the Open Directory Project's description, but search engines have in the past, and more than likely will in the future, arbitrarily decide which description is better and use it.

By including the “noodp” value in your meta robots tag, you guarantee the search engine will not use the directory's description; the description it does use will come from your meta description tag or from content within the page itself.

More Details on noodp and the Open Directory Project

Please continue reading if the above triggered more questions.

What is the Open Directory Project

The Open Directory Project is a website that manages a directory of websites open to the public. Anyone can submit a website to the open directory and anyone can use the open directory, including individuals, businesses and search engines. Directory volunteers maintain the directory.

Why you may not want Open Directory Project website descriptions

You may not have written the description! It is possible that an editor wrote a description for your webpage, and that description may not be correct, flattering, or carry the message you are trying to convey for your webpage.

Who uses the Open Directory Project webpage descriptions?

Search engines like Google and Microsoft's Bing can! If your page is in the Open Directory Project's database, a search engine like Google may use the description from the directory rather than yours if it thinks it is a better description for the search at hand.

Don't believe me? Take a look at Google's post Review your page titles and snippets on the subject; they clearly state they can use the description from the Open Directory Project.

Why does Google use descriptions from the Open Directory Project?

The descriptions are written by a third party to describe the page in question. This is useful for a search engine looking to provide the most relevant description to the search user.

I would personally call the Open Directory Project descriptions a good alternative to guessing a page description. Maybe the description is better than the one on your website, or at least the search algorithm thinks so. Whatever the reason, maybe it's a good thing, but for those of us who spend a lot of time writing our descriptions, in general this is not desirable.

Is this a widespread problem?

Not really; for the most part it's an exception. For a blogger, more than likely this may only be an issue once or twice for older blog posts that were submitted to the Open Directory Project over the years. Static pages and homepages, however, are more susceptible to being listed on the Open Directory Project, thus opening the possibility of those descriptions being used in search results rather than yours.

Why don't search engines always use my meta descriptions?

Good question; I do not have a good answer for that one. Moz.com has a good write-up on why Google will not use meta descriptions, and Yoast's details in My meta descriptions aren't showing up in the search result pages may also be helpful.

My theory, though, is that it comes down to what's best for the search. If the search someone made is very specific, perhaps a snippet from the meat of my page is a better description than my page's description. I will leave it up to Google to decide that.

Why I always use the noodp robots tag

Insurance! This eliminates the possibility of a description from the Open Directory Project being used as my description in search results. Referring to Google's post linked above, this means that Google will now either use my meta description or create rich snippets based on markup in the page itself.

Categories
News Podcasting

Podcast Movement 2016 – Hosting Session on Podcasting with WordPress and IAB Metrics Panel

I will be hosting a Question and Answer session at Podcast Movement 2016 on Podcasting with WordPress. If you are attending Podcast Movement this July and have questions about podcasting with WordPress, please come to the Solutions Stage room on Friday, July 10th, from 2:30-3:15pm.

I will also be part of a panel discussion on IAB podcast metrics. I am a member of the IAB subcommittee tasked with defining technical guidelines for podcast measurement, representing the Blubrry Podcast Community and parent company RawVoice. The panel discussion will take place on Friday, July 10th from 10:30-11:15am.

Categories
News

Featured Image Test

This is a featured image test.

Categories
PHP Programming Technology WordPress

GetID3 analyze() function new file size parameter

You can now read ID3 (media file header) information from mp3 and other media files using the getID3 library without having the entire media file present. The new second parameter to the analyze() member function allows you to detect play-time duration with only a small portion of the file present.

Years ago I added this code to the versions of the getID3 library we packaged with the Blubrry PowerPress podcasting plugin. I've submitted this code to the getID3 project so everyone can benefit. As of getID3 v1.9.10, you can pass a second optional parameter specifying the total file size. This parameter sets the file size value in the getID3 object, skipping the need for the library to call the filesize() function.

This is the secret sauce that allows PowerPress to detect the file size and duration information from a media URL of any size in only a few seconds.

Requirements

The new parameter only works if the following are true:

  • Have enough of the beginning of the media file that includes all of the ID3 header information. For a typical mp3 the first 1MB should suffice, though if there is a large image within your ID3 tags then you may need more than 1MB.
  • Have the total file size in bytes.
  • The mp3 file uses a constant bit rate. This must be true for podcasting, and is highly recommended if the media is to be played within web browsers. Please read this page for details regarding VBR and podcasting.

Example Usage

// First 1MB of episode-1.mp3 that is 32,540,576 bytes
// (approximately 32MB)
$media_first1mb = '/tmp/episode-1-partial.mp3';
$media_file_size = 32540576;
$getID3 = new getID3;
$FileInfo = $getID3->analyze( $media_first1mb, $media_file_size );

You can use an HTTP/1.1 byte-range request to download the first 1MB of a media file, as well as an HTTP HEAD request to get the complete file length (file byte size).

Byte-range requests and HEAD requests are safe to use for podcasting. If a service does not allow HEAD requests or does not accept byte-range requests, it will have bigger issues to deal with, as these features are required by iTunes.
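The two requests can be sketched with curl. The URL is a placeholder, so the network commands are shown commented for you to adapt; only the byte arithmetic runs as-is:

```shell
#!/bin/sh
# Placeholder URL -- substitute your episode's actual media URL.
URL="https://example.com/podcasts/episode-1.mp3"

# The first 1 MB is bytes 0 through 1048575 (1 MB = 1048576 bytes).
LAST_BYTE=$((1024 * 1024 - 1))
echo "last byte of first 1 MB: $LAST_BYTE"

# Total file size in bytes via an HTTP HEAD request (Content-Length header):
#   curl -sI "$URL" | tr -d '\r' | awk 'tolower($1) == "content-length:" { print $2 }'

# First 1 MB via an HTTP/1.1 byte-range request:
#   curl -s -r "0-$LAST_BYTE" -o /tmp/episode-1-partial.mp3 "$URL"
```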

Blubrry PowerPress podcasting plugin has been using this logic to detect mp3 (audio/mpeg), m4a (audio/x-m4a), mp4 (video/mp4), m4v (video/x-m4v), oga (audio/ogg) media since 2008.

Not all media formats support this option, so you should test any format not mentioned above. For example, Ogg Vorbis audio works; Ogg Speex audio does not.

Categories
Family General

Rest in peace TY!

Ty passed away today. His kidneys failed, it all happened rather quickly.

[Photos of Ty: Wiring Expert Ty, Oatmeal Cookies, Ty wearing PodCamp Ohio Bling]

We miss you buddy!

Categories
News

Where to find your user contributed images you’ve submitted to Amazon.com over the years

If you're not aware, last summer (August 15th, I believe) Amazon.com removed the “User Contributed Images” feature. If you're like me and uploaded additional product images and want to find them for reference, you're going to be searching for a very long time.

To find your uploaded images, go to amazon.com, sign in, then navigate to your profile (or try this link: https://www.amazon.com/gp/pdp/profile/). Once you are viewing your profile, click the “Images” tab just below the “Contributions” heading. When viewing with Google Chrome, the larger-size images do not load; Firefox seems not to have this issue.

What is available:

  • Tiny thumbnail image
  • Title / Caption
  • In-image notes

I think this is a real bummer as many of my product reviews reference the images. I’m now in the process of writing blog posts of these product reviews with the images.

Categories
News Transportation

Project Trans Am – Month 35, Interior Coming Together

I’ve moved the monthly updates on Project Trans Am to my Mods and Rods.tv blog and podcast.

My latest post, covering everything I've done last month with photos, is available here: http://www.modsandrods.tv/2013/04/04/project-trans-am-for-march-2013-insulation-completed-focusing-on-the-interior-and-wiring/

Outline of Accomplishments LAST MONTH

  • Insulation Completed
  • Carpet Installed
  • Kick Panels, 1/4 Panels and Sill Plates installed
  • Oil Pressure and Water Temperature Lines Installed
  • Added a 4 Blade Fuse Block in Glove Box
  • T-top Headliner Cut and Glued


The car is finally coming together! I should have the interior back together this April. If I can stay on schedule, the Trans Am will hopefully be back on the road by Father's Day weekend!