AWS Certification Meetup & Head of Innovation

With the AWS Summit coming up in the next two days, I went to the certification appreciation event at the All Hands Brewhouse in Darling Harbour. I thought at the very least it would be some decent free food and drinks… but I was pleasantly surprised!

Oliver, the AWS Head of Emerging Technologies AP, was there to give a brief preso about machine learning and AI, which was in itself interesting – but I also had an equally long talk with him, delving deeper into this stuff and how it's going to apply to web applications. His takeaway was that machine learning is going to become the new norm: it will be embedded in all new software because of the many opportunities it has to add value.

We also talked about Amazon Translate (which was released about a week ago) and some principles surrounding AI translation – where it's at, and where it's going. Overall, an amazingly interesting chance to talk to someone who's clearly very smart. It's not often you get to do that after work over a beer!

Big thanks to AWS for flying over these brilliant individuals to Sydney for a few days.

How to migrate domains between 2 AWS accounts on Route53

I had an interesting problem to solve yesterday: I needed to migrate a domain (i.e. a hosted zone) from one AWS account to another. This isn't an altogether complicated act; the biggest problem is migrating the DNS records – particularly if you have a lot of them.

Unsurprisingly, the AWS console makes no attempt to let you export or import a zone file in a common format (in fact there's no export at all). I figured surely this would be possible using the CLI tools… but unfortunately it's not very straightforward.

It is possible to export the DNS records for a domain using the aws cli, and it is possible to import them as well – however the structure of the two files is a bit different (almost as if the two functions were designed by two entirely separate teams who were competing or something!). So what I did was write a simple PHP cli script to convert from the export format to the import format.

To do this, I recommend you set up profiles for both accounts in your ~/.aws/credentials file, then follow the steps below.
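For reference, a credentials file with two named profiles looks like this (the profile names source and dest are just examples I'll use throughout; substitute your own keys):

```ini
[source]
aws_access_key_id = AKIA...OLDACCOUNT
aws_secret_access_key = ...

[dest]
aws_access_key_id = AKIA...NEWACCOUNT
aws_secret_access_key = ...
```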

Instructions

Firstly, for both accounts, get the hosted zone ID. You can do it using the following command:
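Assuming the profiles are named source and dest as above, that's:

```shell
aws route53 list-hosted-zones --profile source
aws route53 list-hosted-zones --profile dest
```

Note that the Id field in the output comes back in the form /hostedzone/Z1234567890ABC – you only want the part after the final slash.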

You want the 12-16 character alphanumeric ID of the zone associated with the relevant domain name. Change the zone ID in the following examples as necessary.

Then, on the source account, export all the DNS records with the following aws cli command, which puts them into a file called “source-records” in the current directory.
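The export side is handled by list-resource-record-sets (the zone ID here is a placeholder):

```shell
aws route53 list-resource-record-sets \
    --hosted-zone-id Z1234567890ABC \
    --profile source > source-records
```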

Now manually edit that file and delete the two entries equivalent to the default ones AWS adds for every new domain – the NS (nameserver) records and the root SOA record. These already exist in the new Route53 zone and are different from the old ones, so you do not want to import them. Since this is much simpler to do visually, I didn't bother automating it in the script.

Then, run the converter script on that file.
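My script was written in PHP, but the conversion itself is simple – here is a sketch of the equivalent logic in Python, assuming (as the two AWS formats suggest) that the only transformation needed is wrapping each exported record set into the change-batch structure the import command expects. The script name and the Comment string are my own placeholders:

```python
import json
import sys

def to_change_batch(export):
    """Wrap each exported record set in the Changes structure
    that change-resource-record-sets expects."""
    return {
        "Comment": "Migrated from old account",
        "Changes": [
            {"Action": "CREATE", "ResourceRecordSet": record}
            for record in export["ResourceRecordSets"]
        ],
    }

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: python convert.py source-records > converted-records
    with open(sys.argv[1]) as f:
        print(json.dumps(to_change_batch(json.load(f)), indent=2))
```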

And then, use the aws cli to import the converted file into the new zone (note this again assumes that the record file is in the current directory).
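The import goes through change-resource-record-sets on the destination account (again, the zone ID is a placeholder):

```shell
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0987654321XYZ \
    --change-batch file://converted-records \
    --profile dest
```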

All done! Really hope this helps someone, as I couldn’t find any guide on how to do this online.

Great AWS Summit

A few colleagues and I attended the AWS Summit 2017 in Sydney earlier this week. It was a pretty cool event – even if a bit packed on the first day.

Not only were there a lot of interesting things to learn, but I also found out about some AWS vendors which could potentially be very useful for some of our clients.

All around, really great event. Especially the brownies.

Chrome 56: Gamechanger for SSL

Starting with Chrome 56, if your website is served over plain HTTP and has any forms related to payment or login, Google Chrome will present a “Not secure” warning to its users when they browse the website.

This is quite a game-changer, because the traditional rule-of-thumb is that you should use SSL when payment details are accepted, but not necessarily for any other reason (this is for your run-of-the-mill sites of course).

This could have been an enormous inconvenience for a huge number of website maintainers, and a huge boon for SSL sellers – however, luckily, Let's Encrypt has come onto the scene recently and has now matured to the point where it's proven to be highly reliable and robust.

I'm using it for most of my websites that require any sort of SSL (such as this very blog!), and I highly recommend that everyone who doesn't use SSL, and is worried about the implications of this, start using Let's Encrypt.
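Getting started is essentially a one-liner if you use certbot, the recommended Let's Encrypt client – for example, on an Apache server (the domain here is a placeholder):

```shell
sudo certbot --apache -d example.com
```

Renewal can then be automated by running certbot renew from a cron job.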

Missing Auth header in Apache

For some reason, Apache 2.4 (and maybe earlier versions, but nobody should care about those) drops the Authorization header. No idea why, but here's the solution:
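One widely used fix (a sketch – there are SetEnvIf-based variants too) is to use mod_rewrite to re-export the incoming Authorization header as an environment variable that the backend can see:

```apache
RewriteEngine On
RewriteCond %{HTTP:Authorization} .
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
```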

CDNJS

Forget about Google’s hosted libraries and say hello to CDNJS. I can’t believe I’ve only now found out about this!

Elevator pitch:

  • Has servers in Australia whereas Google doesn’t, so latency is an order of magnitude lower
  • Has tonnes of JavaScript, CSS and other frontend libraries – a much larger selection than Google
  • List of libraries is community-managed via GitHub

Those are pretty much the key points. Read more on the site itself – it’s located at cdnjs.com.

My own testing confirms the massive latency benefit over Google – just check out this download of jQuery: 0.9 seconds from Google versus 0.06 seconds from CDNJS. That’s a crazy big difference!
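Switching over is just a matter of pointing your script tags at the cdnjs host – for example (the version number here is illustrative):

```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
```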

If you’re supplying Australian customers, you would be mad not to use it.

Running PHP through PHP-FPM with Apache 2.2

Running PHP through PHP-FPM is pretty easy in Apache 2.4 (or Nginx for that matter) with ProxyPass, however Apache 2.2 has no built-in convenient way to do it.

There are a number of solutions to accomplish this, but it seems that the ambiguously named mod_fastcgi.c is the least bad. One of the biggest issues I ran into while setting it up was allowing for multiple fpm pools for the different virtual hosts, and the specific set of configurations I’ve figured out let me do this quite easily.

There are a number of intricate things which must be done to get it working correctly, and while I hope that neither I nor anyone else ever has to do this again – let’s face the bleak reality: it’s not entirely unlikely that we will… so here’s how to get it going.

  1. Download and compile (yep) the Apache module from their website. I’ll spare the installation instructions since they’re described in detail in the module files. Ensure you have the httpd-devel or equivalent package installed before commencing.
  2. In your httpd.conf, surround the configuration include directives with the following code:
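The idea is that the FastCGI directives in the included files are only parsed when the module is actually loaded. A sketch, assuming your configs live under conf.d (adjust the include paths to your distribution's layout):

```apache
<IfModule mod_fastcgi.c>
    Include conf.d/*.conf
</IfModule>
```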

  3. Inside your virtual host definition file, but before and outside of the definition itself, define the fpm server like so:
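A reconstruction of that definition, assuming a pool called prod with php-fpm listening on 127.0.0.1:9001 (both are placeholders). The first argument is a virtual path under /var/lib/httpd/fastcgi that doesn't need to contain a real file, and -pass-header keeps the Authorization header intact:

```apache
<IfModule mod_fastcgi.c>
    FastCgiExternalServer /var/lib/httpd/fastcgi/php-fpm-prod \
        -host 127.0.0.1:9001 -pass-header Authorization
</IfModule>
```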

  4. Then define its use inside the virtual host definition:
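Roughly like this, tying .php files to the external fpm server defined in the previous step (the names again assume a pool called prod, and the Order/Allow directives are the Apache 2.2 access-control syntax):

```apache
<IfModule mod_fastcgi.c>
    AddType application/x-httpd-php .php
    Action application/x-httpd-php /fcgi-bin/php-fpm-prod
    Alias /fcgi-bin/ /var/lib/httpd/fastcgi/
    <Directory "/var/lib/httpd/fastcgi">
        Options +ExecCGI
        Order allow,deny
        Allow from all
    </Directory>
</IfModule>
```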

Hopefully those configurations are all you need. As always, different pools will require different names (in this case mine is called prod) and ports. Also note the directories and ensure they exist – particularly /var/lib/php-fpm and /var/lib/httpd/fastcgi.