The latest news from Meantime IT

All Meantime news

ISO27001, Hosting, and what Meantime does for you

We recently made a number of changes to our infrastructure. Not uncommon in IT by any means. We communicated this to our clients and, to our surprise, one of them asked us what exactly it was we had done, and why.

To be clear, this wasn't a 'stupid' question, but it made us realise we don't do a very good job of explaining just how much we do for our clients, beyond the very 'visible' changes to their software and the hosting of their applications. Like an iceberg, the visible part is only a small fraction of the whole (albeit a vital one!).

ISO27001 accreditation

Our ISO27001 accreditation [LINKS] means we know how to take care of data. How we do that, though, involves many moving parts. We are (warning, mountain of acronyms to follow!) a LAMP business. That stands for Linux, Apache, MySQL and PHP.

The 'Linux' element is the software our servers actually run on, a bit like Android or iOS for your phone; it turns a computer into something more akin to a server. The particular Linux we use is called Ubuntu. Next is 'Apache'. This is the bit that makes the 'server' a 'web server'; without Apache, it would just be, for want of a better analogy, like your or my desktop or laptop machine.

PHP is our coding language. This is the language we use to actually write your software applications (yes, other languages are used too, like HTML and JavaScript, but I'll spare you those details here!). PHP is open source, meaning you don't have to pay for the privilege of using it. Some of the most well-known websites in the world use PHP; you may have heard of Facebook, perhaps.

Lastly, then, is MySQL. This is our database technology. This is where we store all the information from your applications, in a 'schema' we devise specifically for your business ('schema' in this sense just means we have a diagrammatic representation of where MySQL holds the data and how it all relates).
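To make that a little more concrete, here is a minimal sketch of how the 'P' and the 'M' talk to each other: a PHP script reading from a MySQL table through PDO, PHP's standard database layer. The connection details and the `orders` table are entirely made up for illustration; a real application would use its own schema.

```php
<?php
// Hypothetical example: PHP (the 'P' in LAMP) reading from MySQL (the 'M').
// The host, database, credentials, and the 'orders' table are all invented
// for illustration -- a real application would use its own bespoke schema.
$pdo = new PDO(
    'mysql:host=localhost;dbname=example_db;charset=utf8mb4',
    'example_user',
    'example_password',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

// A prepared statement keeps user input safely separated from the SQL itself.
$stmt = $pdo->prepare(
    'SELECT id, customer_name, total FROM orders WHERE total > :minimum'
);
$stmt->execute(['minimum' => 100]);

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $order) {
    echo "Order {$order['id']} for {$order['customer_name']}: {$order['total']}\n";
}
```

The prepared statement is worth noting: it is one of the standard defences against the SQL injection attacks discussed in the next section.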

Protecting against threats and vulnerabilities

So, all that being said, what does any of that have to do with ISO27001 and Meantime? Well, as you may be aware, there are threats and vulnerabilities that people (hackers) can exploit to take control of, or otherwise disrupt, a server. They can do so by trying to exploit weaknesses in your applications, but they can also do so by exploiting weaknesses in the underlying architecture, for example a flaw in PHP, MySQL, Apache, or Linux. This happens *all the time*. The majority of these attacks are by 'bots', or automated machines. They scan the web and automatically attempt to hack into a server. In the vast majority of hacking cases, the actual hacker, i.e. a person, doesn't get involved until their bot software has told them it has found a vulnerability. To give you an idea of the sheer volume of bot traffic (not all of which is malicious, mind you), a report released recently by Incapsula, a cloud-based web-security service, found that 61.5% of all website traffic now comes from these non-human visitors!

To protect against all of these machine-based attacks, vendors release 'patches' for their particular piece of the puzzle. So, Ubuntu will release patches (bits of code that, amongst other things, close off a vulnerability), as will PHP, MySQL, and Apache. We install these patches on our development and user acceptance testing (UAT) environments, and when we know they are stable, release them to the live servers. That part - testing them first to make sure they work - is part of our ISO27001 policies. You would be surprised how many people don't test patches at all! Typically, we can apply hundreds of patches per month.

Upgrading our database and code version

Now to the ‘meatier’ parts of the things we do behind the scenes. We recently upgraded our MySQL database to version 8. This is similar in some senses to upgrading your favourite app: things you were used to have moved about, or features have been removed or changed altogether. That’s rather inconvenient when it’s your shopping site of choice, but catastrophic when it’s the method by which we store and retrieve your data. And so, just as with the patches, except at a much more detailed and granular level, we apply the changes in the test environment, update any functions or calls that no longer work, and only then apply them to live. For larger systems, this can take us weeks.

PHP, being a language, follows the same kind of upgrade path as MySQL, and having also upgraded our PHP version from 7 to 8, that required the same level of testing and fixing. Across all our clients, this took us three months. Think of it like the Oxford English Dictionary being updated: it still contains English, but some words have fallen out of use, and new words (YOLO, anyone?) have been added.

Why do we do it, then? Well, it all comes back to that pesky patching. Vendors stop supporting older versions of languages and platforms over time, so if we want to make sure we are protected against the latest vulnerabilities, we have to be on a version they are still making patches for. Our latest move, and the enterprise agreements that come with it, guarantees patching for the next 10 years. In the case of PHP and MySQL, we also want to make sure we have access to all the latest 'goodies' the developers have released, so we can use them to build you better web applications.
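To show what 'things have changed altogether' looks like in practice, here is one real, documented behaviour change between PHP 7 and PHP 8 (the saner string-to-number comparison introduced in PHP 8.0). The `$status` variable is just an illustration, but this is exactly the kind of subtle difference our test-environment runs have to catch before anything goes live.

```php
<?php
// In PHP 7, loosely comparing a number against a non-numeric string
// converted the STRING to a number, so 0 == "hello" was TRUE.
// In PHP 8, the NUMBER is converted to a string instead, so it is FALSE.
var_dump(0 == "hello");   // PHP 7: bool(true)   PHP 8: bool(false)
var_dump(0 == "0");       // true in both versions: "0" is a numeric string
var_dump("1" == "01");    // true in both: two numeric strings compare as numbers

// Code that relied on the old behaviour, e.g. treating a non-numeric
// status as equal to zero, silently changes meaning after the upgrade --
// which is why every system goes through the test environment first.
$status = "pending";       // illustrative value, not from a real system
if ($status == 0) {
    echo "This branch runs on PHP 7, but NOT on PHP 8\n";
}
```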

Over the course of the last 16 years, we have done this kind of full 'stack' (that’s all components of LAMP) upgrade four times. You probably wouldn’t have known about that, seeing as we didn’t make a big deal of it; we just assumed (incorrectly!) that everyone knew we were doing them!

Move to Microsoft Azure

Lastly then, to mention the servers we use. The server hardware (okay, so it may not exactly be physical hardware anymore, but I'll get to that!) plays a vital role in what we do. It has to have enough processing power to run the applications we write, plus plenty of 'spare' power to deal with lots of people trying to do lots of things at once. It's a bit like a motorway: you want lots of big, wide lanes to handle lots and lots of traffic all going in the same direction, otherwise you end up in a traffic jam. Long ago, when I first joined the company, we used 'physical' servers. These lived in a rack in a server room and belonged to us. They had their issues: a hard drive could fail, and if you needed more RAM (memory) or cores (processing power), someone had to physically add them to the device, which meant shutting it down and restarting it, all assuming an upgrade was even possible. These are what we refer to as ‘single points of failure’ – if there was an issue, then there was no contingency. After this, we moved to something similar, still physical boxes, with UK Fast, but these had fail-over and more power.

Fast forward to the early 2010s and we moved to Rackspace. This architecture was much more powerful and had several data centres (locations where the servers ‘live’), so if one data centre failed, another would automatically kick in and take over delivering our software to the web. At Rackspace, we got our first experience of ‘virtualisation’. This is where components don’t really exist as physical things; they are instead ‘shared’ machines, where software mimics them being ‘real’ devices. There were still elements within the Rackspace environment that were not virtual: the RAM and CPU were clustered together to make the ‘server’, and then the storage made use of virtual technology.

Our latest and greatest server move has been to migrate to Microsoft Azure. Here, all of the technology is virtual. CPUs, RAM, and storage are all on ‘shared’ servers (still all in the UK) and none of the devices is physical anymore. In theory, then, as the server doesn’t really exist, if a component fails, e.g. a hard drive or CPU, it doesn’t matter: the platform just ‘borrows’ one from somewhere else. This was evidenced recently when Microsoft noticed an issue with some of the devices where our virtual environment lived and simply rebuilt the server somewhere else; in six seconds!

The Azure servers also use something called ‘burstable’ technology and this, we think, is very clever. When we configure our virtual devices, we have to say how much RAM and how many CPUs we want. In theory, the more the better. But the more you ask for, the more it costs, and they are not shy about charging for this! So, what burstable technology gives us is a ‘baseline’ server – the minimum we want to have running the majority of the time. Then, when the server gets put under load, it ‘bursts’ out of that baseline and borrows more power temporarily; in our case, up to 160% of its baseline capacity! This ensures that even on a busy day, we are operating well within our means, but on the occasions where we need to do some heavy lifting, we can call in virtual reinforcements.

The Meantime commitment

And there you have it: from patching, through the versions of the languages and software that run our platforms, to the devices on which all of that operates, Meantime are constantly making sure your software is safe, reliable, fast, and available. And, would you believe, this is just a brief glimpse into what we do for you on a daily basis, all wrapped up in our ISO27001 certification and commitment to keeping your software running at its best.

Because everything we build is bespoke, you might not see exactly what you need. If that’s the case, please get in touch. We'd be happy to discuss how we can help you take the first step to cutting costs and growing beyond all expectations.