PostgreSQL vs MariaDB – a recent experience

When developing API-centric backend applications, there are two main pain points that make me want to use PostgreSQL over MariaDB (or MySQL) right now.

Native Support for the UUID data type
Why is this important? Well, every backend system should expose only the UUID of a given object to its users (be it an Angular frontend, a mobile app or a third-party consumer). Internal auto-incrementing IDs are not the business of the API's consumers, for all sorts of security and privacy reasons. Plus, dealing with UUIDs allows API consumers to create related objects themselves, UUIDs included, and send them to the API all in one request.

To that end, MariaDB simply has no nice way of dealing with them. Performance-wise, the best option is a BINARY(16) field. However, that makes them unreadable in any sort of interface, and certainly not editable – which is simply unacceptable for development and testing. And yes, I am aware it's possible to convert them to a readable format, but who the hell has the time to do that everywhere and every time you need to see a UUID? Ridiculous!

That just leaves the option of a CHAR(32) field – but that carries a significant performance penalty over binary (which is itself already slower than an integer key!).
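A quick sketch of the two MariaDB options (table and column names are illustrative; HEX(), UNHEX() and REPLACE() are standard MariaDB/MySQL functions):

```sql
-- Option 1: BINARY(16) – compact and fast, but opaque in every client
CREATE TABLE orders_bin (
    id BINARY(16) NOT NULL PRIMARY KEY
);

-- Reading or writing a row by hand always needs a conversion step:
INSERT INTO orders_bin (id)
VALUES (UNHEX(REPLACE('550e8400-e29b-41d4-a716-446655440000', '-', '')));

SELECT HEX(id) FROM orders_bin;  -- readable now, but dash-less and awkward

-- Option 2: CHAR(32) – readable everywhere, but twice the key size
CREATE TABLE orders_chr (
    id CHAR(32) NOT NULL PRIMARY KEY
);
```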

Enter PostgreSQL. It has a native UUID type which is stored internally as 16 bytes for maximum performance, but which is displayed to the user, in all situations, as if it were a character type. This is precisely the sort of functionality developers want in modern applications. I know it's on MariaDB's wish list, but who knows when it will be implemented.
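For comparison, here is a sketch of the PostgreSQL equivalent (table name is illustrative; if you want server-side generation, uuid_generate_v4() from the uuid-ossp extension or gen_random_uuid() from pgcrypto can supply a default):

```sql
-- PostgreSQL: 16 bytes on disk, readable text everywhere it surfaces
CREATE TABLE orders (
    id uuid NOT NULL PRIMARY KEY,
    created timestamptz NOT NULL DEFAULT now()
);

-- Plain string literals in, the familiar dashed form out – no HEX()/UNHEX()
INSERT INTO orders (id) VALUES ('550e8400-e29b-41d4-a716-446655440000');
SELECT id FROM orders;
```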

Horrible Performance for mass inserts
InnoDB is a terrible engine – no two ways about it. That's why MariaDB has adopted XtraDB (Percona's enhanced fork of InnoDB) to replace it. Meanwhile, one of the biggest problems users are stuck with is horrendous insert performance.

Let's say you have an application that is constantly receiving a large number of records, all of which must go into a certain table. Sure, you can buffer them in memory as they come in – concurrency in the application itself need not be a problem. But what happens when data arrives faster than InnoDB can insert it in a timely manner? A really big problem, is what.
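There are partial mitigations on both sides – batching many rows into one statement inside a single transaction helps any engine, and PostgreSQL additionally offers COPY for bulk loads. A hedged sketch (table and file names are illustrative, values are placeholders):

```sql
-- Works on both engines: one statement, many rows, one transaction
BEGIN;
INSERT INTO events (id, payload)
VALUES ('id-1', 'payload-1'),
       ('id-2', 'payload-2'),
       ('id-3', 'payload-3');
COMMIT;

-- PostgreSQL only: COPY streams rows in with minimal per-row overhead
COPY events (id, payload) FROM '/tmp/events.csv' WITH (FORMAT csv);
```

Batching softens the blow, but it doesn't change the underlying gap this section is complaining about.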

This is another use case where PostgreSQL simply does not suffer from the performance issues that InnoDB does. And don't even mention MyISAM – you may as well use floppy disks if you want to go down that route.


Forget about Google’s hosted libraries and say hello to CDNJS. I can’t believe I’ve only now found out about this!

Elevator pitch:

  • Has servers in Australia, whereas Google doesn't, so latency is an order of magnitude lower
  • Has tonnes of JavaScript, CSS and other frontend libraries – a much larger selection than Google
  • The list of libraries is community-managed via GitHub

Those are pretty much the key points; the service itself lives at cdnjs.com.

My own testing confirms the massive latency benefit over Google – a download of jQuery took 0.9 seconds from Google versus 0.06 seconds from CDNJS. That's a crazy big difference!

If you're serving Australian customers, you would be mad not to use it.