If we deconstruct the concept of cloud services and reduce it to its simplest form, it becomes obvious that the cloud is nothing more than someone else's computer.
Those computers run a number of software instances that is difficult to estimate, obtained by cloud providers from third-party companies and, at times, from the open-source community. What are the risks involved?
What threatens the security of data in the cloud?
There is a popular belief that using cloud services lets an organisation delegate its responsibility for software security. The cloud is supposed to be a specific way of handing off that responsibility: some believe that infrastructure designed this way cannot be hacked. However, once we perform the deconstruction above, it becomes clear that external servers are just as prone to attack as the software that runs on them.
The human component can also fail: while the Amazon Web Services and Microsoft Azure portfolios offer a lot in terms of administrative automation, a large share of the hazards stems from configuration flaws. Unfortunately, the forecast in this area is grim. Just as quickly as we grew accustomed to data, applications, containers and, finally, computing power being delivered on external infrastructure, we will have to get used to a new class of threats facing our organisations and enterprises.
Cloud infrastructure under attack at an increasing pace
According to Forbes analysts, by 2020 almost 85% of all enterprise workloads will have been transferred to the cloud. It is therefore no surprise that experts specialising in such migrations are in demand. Anyone who has had even basic experience with Amazon Web Services realises that comprehensive knowledge of the AWS offering plays the key role in the process. The sheer number of services and microservices on offer can be overwhelming.
There will surely be home-grown experimenters who, without proper preparation or even a basic reading of the documentation, attempt to test the novelties by trial and error, either on their own or with inexperienced teams. This phenomenon may come to dominate the enterprise cyber-threat landscape in the coming years. According to Gartner analyses, by 2022 95% of all security breaches will be the fault of users, not of cloud service providers.
The result of such a situation may be catastrophic, particularly because cloud infrastructure is so often used for database storage. Even a single, seemingly minor security incident, caused by a faulty configuration or an inadequate tool, may lead to a massive data leak. And that leak, especially if the incident is not handled properly, will expose the organisation to proportionate fines imposed by EU supervisory authorities under the GDPR.
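To put the scale of those fines in perspective, the GDPR caps administrative fines for the most serious infringements at the higher of EUR 20 million or 4% of total worldwide annual turnover (Art. 83(5)). A minimal sketch of that upper bound, with a made-up turnover figure for illustration:

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR administrative fine for the most serious
    infringements (Art. 83(5)): the higher of EUR 20 million or 4% of
    total worldwide annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# Illustrative example: a company with EUR 1 billion in annual turnover
print(max_gdpr_fine(1_000_000_000))  # 40000000.0 (4% of turnover wins)
```

For smaller organisations the EUR 20 million floor applies instead, which is precisely why even modest companies cannot afford to treat a cloud misconfiguration as a minor risk.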
Giant data leaks and Advanced Persistent Threat
For years, the problem seemed distant, as cloud transformation was the domain of the largest organisations, while to others it remained a catchy slogan repeated over and over in the industry media. At the end of the second decade of the 21st century, however, cloud migration is a universal, mass process. It is not hard to imagine that the belief that responsibility for infrastructure security can simply be delegated will produce a number of incidents proportional to the scale of that migration.
Obviously, the providers themselves are not idle in the face of such forecasts, as demonstrated by the various services launched alongside the infrastructure users receive. Suffice it to mention AWS Security Hub to realise that mechanisms already exist for proactively detecting bad configurations, as well as other elements that may become vulnerabilities qualifying for a CVE identifier.
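At its core, such proactive detection amounts to checking live settings against a catalogue of known-bad values. A minimal sketch in that spirit, where the rule names and settings are illustrative assumptions, not the actual Security Hub checks or API:

```python
# Hypothetical rule catalogue: (setting key, insecure value, finding text).
# These names are assumptions for illustration, not real AWS controls.
RULES = [
    ("s3_block_public_access", False, "S3 public access block is disabled"),
    ("encryption_at_rest", False, "Storage is not encrypted at rest"),
    ("mfa_required", False, "MFA is not enforced for console users"),
]

def audit(config: dict) -> list:
    """Return a finding for every setting matching its insecure value."""
    findings = []
    for key, bad_value, description in RULES:
        # A missing setting defaults to the insecure value: unconfigured
        # services are treated as unsafe, mirroring a conservative scanner.
        if config.get(key, bad_value) == bad_value:
            findings.append(description)
    return findings

# Example: encryption and MFA are fine, but public access was left open.
issues = audit({"s3_block_public_access": False,
                "encryption_at_rest": True,
                "mfa_required": True})
print(issues)  # ['S3 public access block is disabled']
```

The conservative default (missing setting = insecure) is the design choice that matters here: most real-world cloud incidents come from settings nobody ever touched, not from settings deliberately weakened.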
In the worst-case scenario, exploiting such weaknesses can result in an Advanced Persistent Threat (APT) breach lasting many months. Then it is not only the security of databases and the risk of their leaking that is at stake, but every workload the organisation processes on external servers. Whoever carries out an APT attack gains access to extremely sensitive information, including priceless know-how. It is not without reason that APTs are listed alongside global ransomware campaigns and supply-chain takeovers as the greatest current threats to enterprise IT environments.
Bad service configuration is a recurring problem
One aspect that cannot be omitted is the repetitiveness of cloud service misconfigurations. A well-known example is the data leak that followed unauthorised access to the infrastructure of the Capital One bank holding company. In July 2019, a third party gained unauthorised access and stole the data of 100 million Capital One customers in the USA and 6 million in Canada. It is estimated to be the largest data leak in the history of the American financial sector.
The case became the subject of an FBI investigation, which determined that the incident resulted from a faulty AWS S3 bucket configuration. Losses from the Capital One leak were estimated at approximately 150 million dollars. Most striking, however, was the federal investigators' finding that this particular bad S3 configuration was in use at at least 30 other organisations, many of which process extremely sensitive data.
Cloud data security – what can be done?
One cannot claim in good conscience that this new cyber-threat reality must be tackled with entirely new means. Quite the contrary: implementing and meticulously following the standards already practised at the local level can significantly soften these disturbing trends. The risk will grow at a drastic pace, however, if we fall prey to the belief that by moving data to the cloud we also delegate the entire responsibility for its security to the cloud provider.