Intuit Payments API and QuickBooks Merchant Services API 2018 update will most likely cost Intuit and its customers millions

Intuit Merchant Services has made changes to its API that will most likely impact thousands, if not tens of thousands, of customers starting February 1st. Where most companies, such as PayPal, allow deprecated APIs to continue working so that transactions are not lost, Intuit no longer takes a backward-compatible approach to its API, which could cost Intuit customers that rely on the stability of that API millions of dollars.

Starting February 1st, the API requires two additional true/false fields indicating whether the application making the charge is doing so for "eCommerce" and whether it is doing so from a "Mobile Device". Searching online turned up no evidence that such fields are required anywhere else, and no competitor requires them. Braintree's API (PayPal's latest), for example, has no requirement to specify whether a transaction is eCommerce or comes from a mobile device. We can only assume the decision to make this information mandatory was an internal one, perhaps so someone mid-level at Intuit can have their questions answered. Apparently the information is important enough to be worth the transactions lost by customers who do not implement the new fields.

Applications that do not specify a true/false value for the eCommerce or isMobile tags in their API calls will not be able to charge or refund customers. Intuit's API is NOT backwards compatible, which means you must update the API calls in your application, or after February 1st you will no longer be able to process credit cards.

Failure to effectively notify users of the severity of the problem

Intuit has not communicated the changes in a meaningful way. All notifications up to this point have been poorly titled or buried inside other messages, and none of them adequately conveyed the importance of the changes.

Intuit sent the first 3 email notifications of the API change via their "Intuit Developer News" newsletter. The first was sent on September 28, 2017. Buried at the bottom of the newsletter, in a section titled "timeline reminders", developers could find the last item in the list: "February 1, 2018: Updates to the Payments API for QuickBooks Online are live.", which links to an announcement made August 1, 2017. Additional notices were sent in the same way as the first, in the Intuit Developer News newsletters of October 27th and November 30th.

On January 10th, a new mass email was sent titled "Updates to the Payments APIs for QuickBooks Online may affect your application". While it was the first email to truly emphasize the problem to Intuit's clients, the title is still easily ignored, as many customers will assume it does not apply to them. The email links to a page that tries to explain the severity of the changes. That page contains three links: the first to the announcement posted on August 1, 2017, the second to the Payments API Reference, and the third to the QBMS documentation. The Payments API Reference page, and the pages immediately linked from it, sadly show no explanation of the new eCommerce or Mobile fields that are now required. The QBMS documentation shows an example XML with the 2 new tags, but nowhere on the page does it explain the 2 new fields or that the order in which they are placed in the XML is important.

On January 12, 2018, they posted a news announcement on the subject on their blog. Those Intuit customers who just happen to visit the QuickBooks API news blog between now and February 1st will see the announcement "Updates to the Payments APIs for QuickBooks Online may affect your application".

Poorly written documentation

There are two places where the changes were made: the Payments API and the QBMS (QuickBooks Merchant Services) API. Both sets of documentation are hard to navigate and offer no way to look up information about the fields in the various requests. I could not find any details about "ecommerce" and "mobile" under the Payments API, and I could not find anything for IsMobile and IsECommerce within the QBMS API.

Backwards compatible claim

If you are not aware, XML allows for backward compatibility. A tag in your XML can declare the version number of the API you are following, so the service you are communicating with knows how to treat your parameters. If you build for version 1.0 and your API calls always reference version 1.0, then the server will treat them as 1.0.

Intuit recognized this years ago and documented it as such, found on this page:

QuickBooks XML API version 4.1 was set many years ago. It would appear that these 2 newly required fields should trigger a new version number of 4.2, but upon scanning the API documentation I see no mention of the version number changing. This change goes against their documentation, which states that changes would come with a new version. It also means the claim in their documentation that the API is backwards compatible is not true.

This latest API change suggests the QuickBooks XML API is not backwards compatible, and that the version number in the XML most likely means nothing on the Intuit server side of the request.
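For reference, this is the versioning mechanism in question: QBMS requests declare the API version in a processing instruction at the top of the XML. A minimal sketch (the envelope below is abbreviated, not a complete request):

```xml
<?xml version="1.0"?>
<?qbmsxml version="4.1"?>
<QBMSXML>
  <!-- a backwards compatible server would treat everything in this
       request according to the 4.1 rules, even after 4.2 is released -->
</QBMSXML>
```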

Documentation is poor

It is not clear in the documentation which requests now need these 2 new fields, or that the location of the fields in the QBMS XML is critical. In our initial testing, we simply added the new XML at the end of every API call. Doing so quickly made some calls fail. After experimenting, we discovered the fields are only needed for API calls that also include an <Amount> tag. Just as importantly, if not more so, they need to appear before the <Amount> tag to work. Nowhere is this information found in the documentation. In one announcement blog post, the capitalization (camel case, for those developers who know what that is) was wrong in the example (source), which suggests they more than likely did not test the changes themselves.

The documentation is sloppy: it does not explain the required fields or what they mean, let alone note which fields are optional and which are required.
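Based on our testing, here is roughly where the two tags had to go for a charge to succeed. The request below is a sketch, not a complete call; the surrounding fields of CustomerCreditCardChargeRq are abbreviated, and you should verify the placement against your own API calls:

```xml
<CustomerCreditCardChargeRq>
  <TransRequestID>12345</TransRequestID>
  <CreditCardNumber>4111111111111111</CreditCardNumber>
  <ExpirationMonth>12</ExpirationMonth>
  <ExpirationYear>2020</ExpirationYear>
  <IsECommerce>true</IsECommerce> <!-- new required field -->
  <IsMobile>false</IsMobile>      <!-- new required field -->
  <Amount>10.00</Amount>          <!-- the two new tags must come BEFORE Amount -->
</CustomerCreditCardChargeRq>
```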

They should know which customers are affected and contact them ASAP

The beautiful thing about client-server communication is that it can be monitored. It would be easy, if it is not being done already, to log every request and see which ones lack the fields in question. It would then be easy for Intuit to send each customer a report saying "X percent of your applications are not including the IsMobile tag, Y percent of your applications are not including the IsECommerce tag", followed by an explanation of why the fields are now required and the date by which the changes must be in place so that you do not lose sales. Intuit has this data, and they could take things a step further and have staff call the customers who will be affected to let them know about the API change. They should have been doing this 6 months ahead of the change, giving development teams enough time to make and test the changes without disrupting other development work already underway. That is not the case. Instead, they announced the changes poorly, and only on January 10th, 2018 did they send an email specific to the issue to their customers; even then, it is apparent Intuit is not looking at the API calls to see how many of their customers will be affected by the change.
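Detecting the affected calls would be trivial once request bodies are logged. A minimal sketch, assuming requests are archived one per file (the directory and file layout here are made up purely for illustration):

```shell
# create a couple of fake logged request bodies for illustration
mkdir -p /tmp/qbms-logs
printf '<IsECommerce>true</IsECommerce><Amount>10.00</Amount>' > /tmp/qbms-logs/ok.xml
printf '<Amount>10.00</Amount>' > /tmp/qbms-logs/missing.xml

# list every logged request that lacks the new tag (-L = files WITHOUT a match)
grep -rL 'IsECommerce' /tmp/qbms-logs
```

From that list it is one step further to group by customer and generate the "X percent of your calls are missing the tag" report described above.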


Intuit is making a grave mistake by disregarding their own promise to provide a backwards compatible API. This will cost them in transactions and, more importantly, cost their customers sales as they scramble to deal with the change. I guarantee that within a few days of February 1st, Intuit will feel the impact of a lower-than-usual number of transactions. The lower transaction numbers will have to be weighed against the required changes, and within a few weeks we may find that the changes did not justify the lost transactions and sales.

I personally will not recommend using Intuit for credit card processing in the future, simple as that. This is incompetence.

Lets Encrypt on Ubuntu using Apache, Nginx, or Lighttpd Cheat Sheet

If you are using Lets Encrypt certificates on your Ubuntu servers, you may find the following information useful when working with Apache, Nginx, or Lighttpd.

Installing Lets Encrypt on Ubuntu 14.04 (or older)


apt-get install git
git clone /opt/letsencrypt

Cloning the repository sets up Lets Encrypt; the first run of its scripts installs any necessary dependencies such as Python.

Ubuntu 16.04 install instructions


apt-get install letsencrypt

Note: The remaining portion of this document uses the /opt/letsencrypt/letsencrypt-auto and /opt/letsencrypt/certbot-auto command line tools as found when installing on Ubuntu 14.04 or older. If you are using Ubuntu 16.04 or newer, simply run the commands letsencrypt and certbot without the full path and without the -auto suffix.

Setup your server so you can create certificates without having to stop your web server

I will not explain aliases in detail, but essentially you need to create an alias for the /.well-known/ URI. It can be shared among all of your virtual hosts. Lets Encrypt uses this folder to save the folders and files used in the confirmation process when creating new certificates and renewing existing ones.

Create a working folder for Lets Encrypt:

mkdir -p /var/www/letsencrypt/.well-known/

Then setup your web server to use this working folder for the .well-known URI path on your server.

Apache .well-known Example

Create a file called letsencrypt.conf with the following.

Alias "/.well-known/" "/var/www/letsencrypt/.well-known/"
<Directory "/var/www/letsencrypt/.well-known">
 AllowOverride None
 Options None
 Require all granted
</Directory>

If you place this file in the conf-enabled folder (/etc/apache2/conf-enabled/letsencrypt.conf), then simply restart your Apache web server. Otherwise, you will need to make a symbolic link in your conf-enabled folder to wherever you saved your letsencrypt.conf file.

Do not forget, whenever making configuration changes to Apache, to run the following before restarting your web server.

apache2ctl configtest

Nginx .well-known Example

Create a file called letsencrypt.conf with the following.

location ~ ^/\.well-known/(.*)$ {
 alias /var/www/letsencrypt/.well-known/$1;
 # No need to log these requests
 access_log off;
 add_header "X-Zone" "letsencrypt";
}

Then in your nginx.conf file, near the top of the server { } block, add the following line:

 include /path/to/your/letsencrypt.conf;

Do not forget, whenever making configuration changes to Nginx, to run the following before restarting your web server.

nginx -t

Lighttpd .well-known Example

Add the following to your lighttpd.conf file. Note that += adds to an existing set of alias URLs; if you have no alias.url values, simply remove the + but leave the =. Learn more about Lighttpd aliasing here.

alias.url += ( "/.well-known/" => "/var/www/letsencrypt/.well-known/" )

Do not forget, whenever making configuration changes to Lighttpd, to run the following before restarting your web server.

lighttpd -t -f /etc/lighttpd/lighttpd.conf

Creating New Lets Encrypt SSL Certificates

You can now create Lets Encrypt certificates without having to shut down your web server temporarily.

/opt/letsencrypt/letsencrypt-auto certonly --webroot --manual-public-ip-logging-ok -d example.com --agree-tos -m you@example.com --text -w /var/www/letsencrypt/

Replace example.com and you@example.com with your host name and your email address. Remember, if your host name starts with www., leave off the www., as it is not necessary; a certificate without the www. also works with the www.

Renew certs

/opt/letsencrypt/certbot-auto renew

certbot-auto uses the settings from when the certificate was created to renew it in exactly the same way, so no extra parameters are necessary.


You can create a file in the /etc/cron.weekly/ folder to renew Lets Encrypt certificates weekly. Even though it runs weekly, Lets Encrypt is smart enough not to renew certificates until there are 30 days or less remaining. This gives you plenty of overlap in case a renewal fails one week.

Example bash file /etc/cron.weekly/letsencrypt

/opt/letsencrypt/certbot-auto renew
You may want to use the > /dev/null 2>&1 trick at the end of the command line to suppress errors from coming out of your cron tasks via email.
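Putting that together, a minimal /etc/cron.weekly/letsencrypt could look like the sketch below. The graceful Apache reload is an assumption; substitute your own web server's reload command, or drop that line if you reload certificates another way.

```shell
#!/bin/sh
# renew anything within the 30-day window; suppress output so cron does not email it
/opt/letsencrypt/certbot-auto renew > /dev/null 2>&1
# reload the web server so it picks up renewed certificates (assumption: Apache)
apache2ctl graceful > /dev/null 2>&1
```

Remember to make the file executable (chmod +x /etc/cron.weekly/letsencrypt) or cron will skip it.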

Deleting SSL Certificates

When we no longer wish to maintain SSL for a host name, we need to delete its renewal config file:

rm /etc/letsencrypt/renewal/example.com.conf

This file includes information about where the SSL certs are located and the options used when the SSL cert was first created.

This is not the same as revoking an SSL certificate; it simply stops renewing the certificate every 2-3 months.

SSL cert files are saved under /etc/letsencrypt/live/ in a folder for each host name, and the specific SSL files are located within that host name's folder. Important reference to the pem files:

cert = /etc/letsencrypt/live/example.com/cert.pem
privkey = /etc/letsencrypt/live/example.com/privkey.pem
chain = /etc/letsencrypt/live/example.com/chain.pem
fullchain = /etc/letsencrypt/live/example.com/fullchain.pem

Note: "chain" is specifically for Apache and the SSLCertificateChainFile setting, which is obsolete as of 2.4.8. This is a good thing, as Nginx and Apache now use the same fullchain and privkey files. Lighttpd is still not as simple; see the note below.

Though all files are saved in the pem format, other systems and platforms use different file extensions and filenames to identify the different files. Here is a quick cheat sheet in case you need to map previous files to new files.

type (explanation) - letsencrypt - other examples
cert (public key) - cert.pem
privkey (private key) - privkey.pem
chain (chain key) - chain.pem - gd_bundle.crt, alphasslroot.crt, etc...
fullchain (concatenation of cert + chain) - fullchain.pem - fullchain.crt

Pem files and their use on Apache, Nginx, and Lighttpd

Apache 2.4.8 and newer

SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem

Note that there is no SSLCertificateChainFile option; Apache now uses fullchain.pem, which combines cert.pem with chain.pem.

Apache 2.4.7 and older

SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem

Note that we are not using fullchain.pem; instead we link to cert.pem and chain.pem on 2 separate configuration lines.


Nginx

ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;


Lighttpd Note: The cert and privkey need to be combined for Lighttpd

cd /etc/letsencrypt/live/example.com/
cat privkey.pem cert.pem > ssl.pem

Then link to the certificates in your lighttpd config settings.

ssl.pemfile = "/etc/letsencrypt/live/example.com/ssl.pem"
ssl.ca-file = "/etc/letsencrypt/live/example.com/chain.pem"

If you are automating Lighttpd renewals, you will need an extra step that concatenates the privkey.pem with the cert.pem before restarting/reloading Lighttpd.
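A sketch of that extra step follows. The demo below uses throwaway files under /tmp so it can run anywhere; in practice the paths would be under /etc/letsencrypt/live/ as shown above.

```shell
# stand-in files for privkey.pem and cert.pem (real ones live under /etc/letsencrypt/live/)
live=/tmp/demo-live
mkdir -p "$live"
printf 'KEY\n'  > "$live/privkey.pem"
printf 'CERT\n' > "$live/cert.pem"

# Lighttpd wants the private key and certificate in a single pem, key first
cat "$live/privkey.pem" "$live/cert.pem" > "$live/ssl.pem"
cat "$live/ssl.pem"
```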

While searching the Internet for examples of setting up Lighttpd, I found some examples using fullchain.pem. Though this will also work, it is not technically correct, as the ssl.pem already houses the cert.pem.

Please feel free to leave a comment if you find an error and/or have additional notes which may be helpful for others.

WordPress: Could not save password reset key to database

If you get the error "Could not save password reset key to database.", more than likely you have one of the following issues:

  • Database server is full (the values cannot be saved to the database due to the file system being 100% full)
  • Database server is read-only (If you use a service such as Amazon Web Service’s RDS your website may be connecting to the read replica instead of the write server)
  • Database cannot make changes to database tables (check the database files can be written to by the user the database service is running as)
  • Temporary folder on your server is not writable (this is the least likely scenario)
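The first two causes can be triaged from a shell on the database server. A rough sketch (the mysql client invocation is an assumption; adjust the paths and credentials for your setup):

```shell
# 1) is the file system full? (check the partition holding your database files)
df -h /

# 2) is the server read-only? (uncomment and supply credentials for your setup)
# mysql -e "SELECT @@global.read_only;"
```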

I found this problem is not documented well so I blogged my research. Hopefully this helps others searching for a solution.

True meaning of meta robots content="noodp"

I see a lot of misunderstandings of the “noodp” found in meta tags with name “robots”.

<meta name="robots" content="noodp" />

Not all content values for a meta robots HTML tag are bad. Most robots content values do not block search engines from indexing pages, and noodp is one such example.

The equivalent to the above meta tag would be…

<meta name="robots" content="index, follow, noodp"/>

It is implied that, by not stating "noindex, nofollow", the page in question is to be indexed and followed.
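For contrast, a robots meta tag that actually does block search engines looks like this:

```html
<meta name="robots" content="noindex, nofollow" />
```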

What does noodp mean in a robots meta tag?

You are telling search engines to NEVER use the description for your webpage from the Open Directory Project.

When you do not have "noodp" set, it is up to the search engine to decide whether to use your meta description, snippets from your page, or the description from the Open Directory Project.

If your webpage is not listed in the Open Directory Project, then this tag does not matter.

If your webpage is listed on the Open Directory Project, including this tag guarantees that search engines will not use the directory’s description over your meta description or content from your webpage.

More than likely the search engine will use your meta description or snippets from your page over the Open Directory Project's description, but search engines have in the past, and more than likely will in the future, arbitrarily decide which description is better and use it.

By including the "noodp" value in your meta robots tag, you guarantee the search engine will not use the directory's description; the description it does use will more than likely come from your meta description tag or from content within the page itself.

More Details on noodp and the Open Directory Project

Please continue reading if the above triggered more questions.

What is the Open Directory Project?

The Open Directory Project is a website that manages a directory of websites open to the public. Anyone can submit a website to the open directory and anyone can use the open directory, including individuals, businesses and search engines. Directory volunteers maintain the directory.

Why you may not want Open Directory Project website descriptions

You may not have written the description! It is possible that an editor wrote a description for your webpage, and that description may not be correct, flattering, or carry the message you intend for your webpage.

Who uses the Open Directory Project webpage descriptions?

Search engines like Google and Microsoft's Bing can! If your page is in the Open Directory Project's database, a search engine like Google may use the description from the directory rather than yours if it thinks it is a better fit for the search at hand.

Don't believe me? Take a look at Google's post Review your page titles and snippets on the subject; they clearly state they can use the description from the Open Directory Project.

Why does Google use descriptions from the Open Directory Project?

The descriptions are written by a 3rd party to describe the page in question. This is useful for a search engine if it is looking to provide a description to the search user that is the most relevant.

I would personally call the Open Directory Project descriptions a good alternative to guessing a page description. Maybe the description is better than the one on your website, or at least the search algorithm thinks so. Whatever the reason, maybe it's a good thing, but for those of us who spend a lot of time writing our descriptions, it is generally not desirable.

Is this a widespread problem?

Not really; for the most part it's an exception. For a blogger, this may only be an issue for one or two older blog posts that were submitted to the Open Directory Project over the years. Static pages and homepages, however, are more susceptible to being listed on the Open Directory Project, opening the possibility of those descriptions being used in search results rather than yours.

Why don't search engines always use my meta descriptions?

Good question; I do not have a good answer for that one. There is a good write-up on why Google will not use meta descriptions, as well as Yoast's details in "My meta descriptions aren't showing up in the search result pages", which may be helpful.

My theory, though, is that it comes down to what's best for the search. If the search someone made is very specific, perhaps a snippet from the meat of my page is a better description than my page's own description. I will leave it up to Google to decide that.

Why I always use the noodp robots tag

Insurance! This eliminates the possibility of a description from the Open Directory Project being used as my description in search results. Referring to Google's post linked above, this means Google will now either use my meta description or create rich snippets based on markup in the page itself.

Podcast Movement 2016 – Hosting Session on Podcasting with WordPress and IAB Metrics Panel

I will be hosting a Question and Answer session at Podcast Movement 2016 on Podcasting with WordPress. If you are attending Podcast Movement this July and have questions about podcasting with WordPress, please come to the Solutions Stage room on Friday, July 10th, from 2:30-3:15pm.

I will also be part of a panel discussion on IAB podcast metrics. I am a member of the IAB subcommittee tasked with defining technical guidelines for podcast measurement, representing the Blubrry Podcast Community and parent company RawVoice. The panel discussion will take place on Friday, July 10th, from 10:30-11:15am.

WordPress Translate System Feedback

I tried to reply with my latest feedback on the handling of translations, but sadly my comment was not approved. I've tried to have conversations with developers on WordPress's Slack, but instead of considering the problems I raised, I quickly got responsive answers rather than considerate ones.

Here's my reply; hopefully someone who develops the translation system will read it and consider some of my ideas for fixing the complicated mess that exists today with plugin (and theme) translations.

A reply to the thread "Hello Polyglots, First I would like to say…"

First, I want to make this clear: though I am very critical (I am by nature; I am a developer), I support the translation efforts. I know that when someone criticizes the way things are done it is perceived as anti-WordPress; that is not the case with me. I will call out BS, but again, I'm a developer, it is in my DNA. I can tell you as a project supervisor that the way this is set up currently will never scale to the 40,000+ plugins, let alone the themes.

I am glad to see, for once, a thread that points out some of the problems with the way things work now not get deleted and actually receive replies. Is there now an official way to post these issues, or is it hit or miss whether they get deleted or replied to? I've tried to discuss them here (they get deleted), and on Slack I was given answers that technically did not address my concerns, rather than having someone listen and let me know my concerns would be considered for discussion (responsive vs. considerate support).

These are my thoughts as a plugin developer who is not on the core team. Please take my thoughts and ideas into consideration. I think ideas and critical thought from outside WordPress core are necessary from time to time, as with any project or application.

* Plugin developers should be trusted with their translations. This would solve a lot of problems. Yes there will be plugin developers who just use Babelfish or Google Translate, but I have an answer for that at the bottom of this reply.

Note: It is common for a company to pay someone to do translation work. In this example, even though I do not speak French, I may have French translations my company paid for and want to put them into my plugin. The current procedure prevents me from doing this: when I ask for such access, the polyglots moderator says I do not speak French. (Who owns the plugin and its translations?)
* Let plugin developers determine whether a translation can go live, regardless of the percentage completed. I know that if the developer doesn't speak the locale's language the translations may not be "correct", but that is how it works today for plugins translated outside the system. After all, we want to provide some translation rather than none; none is far worse than 50% translated.
* Provide a simple way for plugin developers to add and remove users as translation editors. It should be in the same place where we add additional committers to our projects. That would end the current method of posting a comment in a particularly specific way, which is very frustrating. The latest example I saw of how to request translators wants you to praise the team in your introduction; this seems rather self-serving to me, almost as if to reinforce that the system in place now is a good one.
* Translation editors should be strict. Meaning, if I don't add translation editors to my project, no one I don't know should be approving translations for my plugin without my knowledge. Yes, I know the idea is to allow global translation editors to approve translations for any plugin, but this brings up legal issues (see my GPL comment below). In a perfect world, the plugin author could simply check a box next to each locale to say "my translation editors only" or "global translators allowed".
* Let plugin developers decide if a translation can even happen. There are situations where I may not want anyone to create a translation for a particular locale. For example, if we were going to launch a service in England in April, we may not want a translated version allowed until that release.
* Let the plugin developer have the ultimate say about translations. My company's lawyer may tell me that the translation of a word we use as a trademark in the US cannot be used in France. At that point, I as the plugin author need to be able to change that translation (not the translation editor I picked). Ocean90 pointed out some code I can add in PHP to do this, but it did not quite answer this specific case, only how I can target locales.
* Ultimately, the translation is part of the plugin, and thus its management should be in the control of the plugin developer. Not allowing the plugin developer this control means that derived works are technically branches or forks owned by someone else (other than the plugin developer). This raises a legal question: is a translation the plugin developer cannot modify or control still part of the original plugin, or is it a derived work (where trademarks and copyright notices need to be modified)? With everything falling under the GPL, it would mean that a translated plugin inaccessible to the plugin developer should technically use a different name, modify the copyright, and credit the original plugin developer so as not to imply the derived work is his or hers. This problem is solved simply by allowing plugin developers complete control over who can and cannot translate their plugin, including themselves (see my first item above).
I believe the scalable solution is to allow Global Translation Editors to grade translations that have been approved by non-global translation editors. This could be shown as a star rating or grade percentage next to the translation, which users could then use to decide whether it is good or not. That would solve the issue of translator quality in a community-driven, organic way. It also solves the problem of incomplete translations: if only 90% of the translation is completed, the grade would automatically be at most 90%.

This is what I would like to have access to as a plugin developer: a place on the plugin's admin page listing all of the locales in a table with options:

locale | translator editors | translation allowed by | translation grade | actions
en-us | user1 x, user2 x, … | () global translators () plugin translator editors () plugin author only | 85% of translations accepted by global editors | add translator editor

| – separates columns in the table
() – radio button (defaults to global translators)
x – next to a user name, removes that translator editor

I've already had folks reply telling me this cannot be done, for random reasons. I beg the team: instead of responding with reasons why it cannot be done, please consider discussing these ideas first.

WordPress GlotPress translate management confusing and is unscalable

Plugins and themes have translators we trust 100%; we've forged relationships with these translators over the years. I now find myself telling friends and colleagues, "I trust you, but for some reason WordPress doesn't trust me with my own plugin." This all stems from the confusing process in place now, which could easily be fixed by giving plugin and theme owners direct control over their translators.

UPDATE: After speaking with team members on Slack about the process, I've learned that the terms used for translation have changed. The new terms are used by the polyglots team but have not been updated in the documentation, hence the confusion. First, some terms to clear up…

Translator Editors != Translators – The plan is to let anyone be a translator, but only editors can moderate and approve translations.

Translators != Validators – Same as above: translators can make translation recommendations; only validators can also make translations and approve them.

The latest process in place to manage translations for themes and plugins is two-fold: first there are "Translation Recommendations", then there are "Translation Recommendations with Validations".

Translation Recommendations

If a plugin or theme owner would like users to "recommend translations", all the user has to do is create an account. Anyone with an account has permission to "recommend" translations for WordPress core as well as any theme or plugin. I am sure this will not last long; once malicious users get their hands on it, the team will be forced to change this policy. I like the spirit of it, though, but the documentation does not make this clear. The way it is currently written, the terms "translator" and "translator editor" implied to me that once a user created an account they still had to be added as a "translator editor" - but that is why they've changed the terms.

Translation recommendations are placed in a queue to be confirmed, or marked fuzzy, by translation moderators (Translation Editors or Validators). Plugin and theme developers can request that specific users be validators for specific locales.

Translation Recommendations with Validations

If a theme or plugin owner wants specific users to "translate and validate" translations in particular locales, a request must be made in a specific comment thread (it must include the word "request" in the tag field, otherwise your request goes into a black hole), asking for users to be "validators" for specific locales for your plugin/theme. Specifically, you need to include the following in your posting:

locale code (en or en_US) – – slug name of your plugin

As of now, plugin/theme owners can only request validator users and must wait for those users to be approved. There is no simple add option like the one for adding committers to your project. It appears they are approving all requests, but the process takes very long. As I understand it, they are not rejecting requests, so plugin and theme owners can rest assured they will have control of their translations if they request it.

How translation should work for themes and plugins

Plugin and theme owners should have control over who can translate their plugin/theme and which languages can be translated, and should be able to add and remove translators. Plugin and theme owners should also be able to add themselves as validators for any or all locales.

Plugin and theme owners should have the ability to decide whether a translator has “full translation control” (translator and validator) or whether their translations should be moderated by someone else (other validators). I like the idea of anyone making translation recommendations, but there should be a management screen with settings in a grid view where we, as plugin/theme owners, can decide whether each locale can or cannot be translated, and whether translation recommendations are open to anyone or restricted to specific users. The grid's columns would be “locale”, “enable/disable (enabled by default)”, “allow anyone to recommend translations”, “recommended translators”, and “translation validators”. Each row would be a locale. The cells in the “recommended translators” and “translation validators” columns would have a textbox with an add button for adding users, and would list the current users with a button to remove each one.
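As a sketch of what such a management screen might store per locale (this structure is entirely hypothetical; no such API or option exists today), each row of the grid boils down to a handful of settings:

```php
<?php
// Hypothetical per-locale translation settings a plugin owner could manage.
// Keys and user names are illustrative, not part of any real API.
$translation_settings = array(
    'fr_FR' => array(
        'enabled'              => true,               // locale may be translated
        'open_recommendations' => true,               // anyone may recommend strings
        'translators'          => array( 'alice' ),   // recommended translators
        'validators'           => array( 'bob' ),     // users who approve strings
    ),
    'de_DE' => array(
        'enabled'              => false,              // owner has disabled this locale
        'open_recommendations' => false,
        'translators'          => array(),
        'validators'           => array(),
    ),
);

// The defaults encouraged in the text: enabled, open to recommendations.
$default_locale_settings = array(
    'enabled'              => true,
    'open_recommendations' => true,
    'translators'          => array(),
    'validators'           => array(),
);
```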

These proposed options would allow a theme or plugin owner to add someone as a translator exclusively, without that person being distracted by recommendations from anyone else. The plugin author could also decide whether a particular locale should be available for their plugin. Defaults, though, should follow what is in place now, which encourages anyone to contribute.

The idea that plugin and theme developers cannot control the process, and that translations happen without the involvement of the theme/plugin developer, in my mind violates the GPL and certainly creates copyright issues. Luckily, this is not the case. The polyglots team is currently changing the “terms” used for translation management, which is where the confusion lies.

Why do Theme and Plugin Developers need to be moderators of their translations?

As creators of a product, albeit a free and open source one, there are situations where words or phrases cannot or should not be translated. For example, with a website name (not the URL), the plugin author should be able to decide whether it remains as-is in some languages but may be translated in others. Many of these terms are copyrighted, and in other cases they are technology terms that, if translated, may cause more confusion. The plugin/theme owner should have the final say.

There is a way in the code to keep parts of strings from being translated, but it does not control this down to specific locales.
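The usual pattern is to keep the untranslatable term out of the translatable string by passing it as a printf-style placeholder. The i18n calls below are the standard WordPress ones, but the surrounding strings and text domain are made up for illustration; note that this is all-or-nothing across locales, which is the limitation described above:

```php
<?php
// The product name stays outside the translatable string, so translators
// never see it; they only translate the sentence around the %s placeholder.
printf(
    /* translators: %s: plugin name, do not translate */
    __( 'Thank you for installing %s!', 'my-plugin' ),
    'PowerPress'
);
```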

Another problem arises for companies who have translators on staff and can do their own translations. Our company has to add our employees (who can change over time) just to gain access to translate. What was a simple “here's the .po file, go to work” operation has now turned into “I have to wait for approval of our employee fluent in French as a validator for our plugin”.

Plugin and theme authors should be able to decide which languages are translated. Though I wouldn't use such a feature, I could see a company like Facebook preventing translations for certain locales until they have added that locale to their own service.

The current process does not scale

The current process is not going to scale to 40,000 plugins with 50+ possible translations each. I don't care how many volunteers may be helping with translations; adding a comment to a blog post is not going to work. We'll see some plugins receive priority moderation (validation) over others, which will more than likely be politically influenced.

At some point a page for plugin owners will need to be created that lets them pick a locale, enter a user, and click “Add”. The faster and smoother this process becomes, the less confusing it will be for everyone involved, and the sooner new translators will start contributing.

I've already wasted a few weeks trying to figure out how the process works, and it appears the process itself is still being figured out. It also bothers me that these problems were apparently never discussed or thought through beforehand.

I want centralized translation to work

Do not read the above and believe that I am anti-WordPress or anti-translation. I want this to work. Not trusting plugin/theme authors with the responsibility of controlling their translations is not only wrong, it poses the problems I describe above. This process needs to be fixed ASAP.

I would ask that you do not post comments here. Instead, I've opened a Trac ticket requesting that this process be changed. Please comment there so the WordPress translation team can read your concerns about how theme and plugin translations are currently managed. Update: My Trac ticket has been closed, so feel free to comment below.

GetID3 analyze() function new file size parameter

You can now read ID3 information (media file headers) from mp3 and other media files using the GetID3 library without having the entire media file present. The new second parameter to the analyze() member function allows you to detect play-time duration with only a small portion of the file on disk.

Years ago I added this code to the versions of the GetID3 library we packaged with the Blubrry PowerPress podcasting plugin. I've since submitted it to the GetID3 project so everyone can benefit. As of GetID3 v1.9.10, you can pass a second optional parameter specifying the total file size. This parameter sets the file size value in the getID3 object, skipping the need for the library to call the filesize() function.

This is the secret sauce that allows PowerPress to detect the file size and duration information from a media URL of any size in only a few seconds.


The new parameter only works if the following are true:

  • You have enough of the beginning of the media file to include all of the ID3 header information. For a typical mp3 the first 1MB should suffice, though if a large image is embedded in your ID3 tags you may need more than 1MB.
  • You have the total file size in bytes.
  • The mp3 file uses a constant bit rate (CBR). This must be true for podcasting, and is highly recommended if the media is to be played in web browsers. Please read this page for details regarding VBR and podcasting.

Example Usage

// First 1MB of episode-1.mp3 that is 32,540,576 bytes
// (approximately 32MB)
$media_first1mb = '/tmp/episode-1-partial.mp3';
$media_file_size = 32540576;
$getID3 = new getID3;
$FileInfo = $getID3->analyze( $media_first1mb, $media_file_size );

You can use an HTTP/1.1 byte-range request to download the first 1MB of a media file, and an HTTP HEAD request to get the complete file length (size in bytes).
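As a sketch using PHP's curl extension (the URL and temp path are placeholders, and there is no error handling), the two requests look like this:

```php
<?php
$url = 'https://example.com/episode-1.mp3'; // placeholder URL

// 1. HEAD request: fetch headers only to learn the total size in bytes.
$ch = curl_init( $url );
curl_setopt( $ch, CURLOPT_NOBODY, true );         // send HEAD, no body
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
curl_exec( $ch );
$media_file_size = (int) curl_getinfo( $ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD );
curl_close( $ch );

// 2. Byte-range request: download only the first 1MB (bytes 0-1048575).
$ch = curl_init( $url );
curl_setopt( $ch, CURLOPT_RANGE, '0-1048575' );
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
$first1mb = curl_exec( $ch );
curl_close( $ch );

file_put_contents( '/tmp/episode-1-partial.mp3', $first1mb );

// Then analyze the partial file as shown in the example above:
// $FileInfo = $getID3->analyze( '/tmp/episode-1-partial.mp3', $media_file_size );
```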

Byte-range requests and HEAD requests are safe to use for podcasting. If a service does not allow HEAD requests or does not accept byte-range requests, it has bigger issues to deal with, as both features are required by iTunes.

The Blubrry PowerPress podcasting plugin has been using this logic to detect mp3 (audio/mpeg), m4a (audio/x-m4a), mp4 (video/mp4), m4v (video/x-m4v), and oga (audio/ogg) media since 2008.

Not all media formats support this option, so you should test any format not mentioned above. For example, Ogg Vorbis audio works, but Ogg Speex audio does not.