Back when the public cloud was a mythical beast in the eyes of SMBs and enterprises, one of the most often quoted reasons why companies hesitated to adopt it was security, or rather the perceived lack of it. But as the years went on, the providers collected the industry-standard security certifications, cloud adoption grew, potential customers finally gained trust, and the public cloud providers started to be seen for what they really were: organizations serious about infrastructure security and automation, with huge ambitions to make the classic on-premises company datacenter obsolete.
Traditionally, very few companies in the SMB segment have had the organizational resources and policies in place to ensure that no unnecessary firewall rules are left open, all data is encrypted in transit and at rest, all device firmware is up to date, and all users have only the permissions they need and not a smidge more. These things, while hugely important in reality, were often seen as a burden that organizations would rather avoid. And, honestly, setting everything up “the right way”, especially after years of negligence, is difficult.
With fewer and fewer companies willing to invest in owning, running and securing their own physical IT infrastructure, the move to the public cloud is now often seen by management as the most rational approach, one that should resolve most if not all operational burdens. After all, you know that the leading cloud providers have over ten years of experience in hosting and securing their infrastructure, you know that they are audited frequently by a dozen different parties, and you know that they will not hesitate to patch the servers serving you when the successor of Spectre shows up. You know they are serious about that.
Ironically, once the fear of inadequate security in the public cloud was gone, overly optimistic assumptions took its place. Since the cloud is so secure, surely you can move your data there and greatly improve your security. Or can you?
From a purely low-level infrastructure perspective, that is undoubtedly true. Companies like Microsoft, Amazon and Google run best-in-industry datacenters with top-notch physical security measures, spend extensive resources on developing custom security-oriented silicon, and fix any security issues they encounter with third-party vendors.
But despite all that, you can still periodically read news about how one company or another that hosts all of its infrastructure in AWS or Azure leaked large amounts of customer data to the public or was breached. While the headlines often seem to pin a particular breach on a cloud provider’s name, reading the article makes it clear that the breach happened due to a misconfiguration on the client’s side. Which brings us to how cloud security differs from security in the cloud.
Cloud providers offer services at various levels, and dozens of new ones show up every year. The best-known classification of these services is IaaS, PaaS and SaaS, which differ in how much control and responsibility the client retains over the service. From your virtual servers, to email services, to your subscription-based CRM application, each can be placed on a line marking where the responsibility of the provider ends and the responsibility of the client begins. This split is easy to overlook when moving from a traditional on-premises infrastructure environment.
The traditional approach to adopting cloud technologies usually involves a gradual migration of IaaS workloads, combined with some higher-level adoption and further modernization once the primary migration is over. More often than not, complex systems are “lifted-and-shifted” from on-premises to the cloud, sometimes motivated by an expiring contract with the colocation provider or by server infrastructure reaching end of life, with the hope of replatforming or redesigning the solution later to make use of the more cost-efficient services available in the cloud. Depending on the resources at hand, this may well be the best decision available in a particular situation. The common pitfall is when the organization starts viewing the cloud as just another colocation provider.
If a company was patching its servers regularly on premises, it will most likely continue to do so after a lift-and-shift migration. If it was not, changing the location where the virtual machines run will most likely not change its work practices. In both scenarios, however, a new attack vector is introduced: the cloud management plane. Your traditional virtualization hosts were accessible only from a dedicated spot inside your network, and a tightly controlled VPN was used when remote access to the infrastructure management was required. But since the public cloud is, well, public, you can log on to your infrastructure management portal from anywhere in the world by default, so auditing, strict permission scoping for everyone with access to cloud resources, and enforced MFA become essential from day one.
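As a concrete illustration of auditing the management plane, a short sketch like the one below could flag AWS IAM users who have no MFA device attached. This is a hypothetical example, not an official tool: the helper name `users_without_mfa` is my own, while `list_users`, `list_mfa_devices` and the boto3 paginator are real AWS SDK calls; running the main block assumes AWS credentials and the boto3 package are available.

```python
def users_without_mfa(mfa_devices_by_user):
    """Given a {user_name: [mfa_devices]} mapping, return the
    names of users with no MFA device attached, sorted by name."""
    return sorted(user for user, devices in mfa_devices_by_user.items()
                  if not devices)


if __name__ == "__main__":
    # Requires AWS credentials and the boto3 package (assumed installed).
    import boto3

    iam = boto3.client("iam")
    devices_by_user = {}
    # Paginate in case the account has more users than one API page returns.
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            devices_by_user[name] = iam.list_mfa_devices(
                UserName=name)["MFADevices"]

    for name in users_without_mfa(devices_by_user):
        print(f"IAM user without MFA: {name}")
```

Keeping the filtering logic in a pure function makes it easy to test without touching the AWS APIs, and the same pattern extends to other management-plane checks, such as flagging stale access keys.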
Another problem arises when new workloads are provisioned and higher-level cloud services are introduced. Cloud providers want you to consume their services; it is what earns them revenue, and it is in both their and the client’s interest that doing so is easy. The problem is that it is also easy to cut corners, which brings us back to misconfiguration, the main reason public cloud infrastructure gets compromised. The cloud vendors do publish extensive best-practice documentation, and they have built in warnings, safeguards and automated tooling to help individuals and organizations avoid the most common security mistakes, but in the end it is the customer who is responsible for protecting their data, because that lies beyond the responsibility of the cloud provider. You will get a warning when you open your Windows machine’s RDP port to the whole Internet, and you will get a warning when you create a publicly accessible S3 bucket, but the provider will not stop you if you insist that this is what you need. And when you combine multiple cloud services in one solution, you get all the flexibility you might want, but you may also be creating even more potential attack vectors.
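The S3 case above can be checked programmatically. The following is a minimal sketch, assuming boto3 and AWS credentials: `list_buckets` and `get_public_access_block` are real S3 API calls, while the helper `is_fully_blocked` and its structure are illustrative. It reports buckets where any of the four Block Public Access flags is off, which does not by itself prove the bucket is public, only that it could be made so.

```python
def is_fully_blocked(config):
    """True only if all four S3 Block Public Access flags are enabled."""
    flags = ("BlockPublicAcls", "IgnorePublicAcls",
             "BlockPublicPolicy", "RestrictPublicBuckets")
    return all(config.get(flag, False) for flag in flags)


if __name__ == "__main__":
    # Requires AWS credentials and the boto3 package (assumed installed).
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(
                Bucket=name)["PublicAccessBlockConfiguration"]
        except ClientError:
            # No configuration at all means nothing is blocked.
            config = {}
        if not is_fully_blocked(config):
            print(f"Bucket may allow public access: {name}")
```

Checks like this are exactly what the providers’ own tooling (AWS Config rules, Trusted Advisor and the like) automates at scale; the point is that running them remains the customer’s job.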
Your solution in the cloud is only as secure as you make it. While the cloud provider does its best to build a secure foundation, it is the consumer’s responsibility to keep building upward in the same secure way. With the right knowledge, that can be done far faster in the cloud than anywhere else; without it, the cloud is just another chance to damage your brand reputation and risk your customers’ data.