Cloud computing: Understanding and reducing costs

The costs of cloud computing depend entirely on how well your own requirements match the solutions and price model of your provider. Find out in this overview which factors you should take into consideration in your cloud strategy in order to plan your costs efficiently.

Companies expect many benefits from the cloud, and rightly so: they want to create new business models and products or support their employees in their day-to-day operations with a more effective infrastructure or more up-to-date processes. There is often one objective at the very top of this agenda: cost reduction and transparency.

A lot of what is on offer sounds very appealing: less idle time, no unnecessary waiting, no high investment in your own hardware, no worries about expensive downtimes. Transparency and long-term planning of cloud costs are rather more complicated, however. In the IDG study “Managed Services 2020”, 38 percent of respondents say that cost transparency is an essential part of their business relationship with their provider. In the same study, moreover, 64 percent of (IT) decision makers surveyed state that cloud services contribute to cutting costs.

To achieve their cost targets, many companies want to move as much data and as many processes and workflows to the cloud as possible. The motivation behind this is legitimate: cloud systems are unbeatable when it comes to agility, flexibility and scalability. But that does not always mean that they represent a cost saving per se.

Management of clouds is a complex business, especially where multiple cloud environments are involved. Companies which operate workloads for machine learning and big data analytics in the cloud, for example, have entirely different requirements from a company that simply wants to use the cloud to hold its data. Managers should always weigh up the following factors when it comes to keeping a handle on the costs of cloud computing.

Keeping cloud computing costs under control

One of the most common causes of unnecessarily high cloud costs is usage that does not match the infrastructure. You must be aware of what specific use you want to make of the cloud. Which systems should operate in the cloud and why, and which solution suits them best? This determines which datasets are relevant. To what extent should archive data be kept in the cloud indefinitely, for example? Does every sensor parameter of a production system really have to be backed up long term?

Archiving data

Data that is not changed regularly can be kept in an archive system, for example. This is the first way to make savings. An archive solution such as Object Storage is comparatively cheap and is the perfect place for large quantities of unstructured data that are created in certain areas of application, such as industry. Archives and backups are also well suited to this kind of storage.

Optimizing storage

Data that is accessed regularly, to run applications for example, is a different story. Here, capacity and security are important, so higher quality storage (usually “file storage”) is required. Simply being aware of the access times required can save money in this context. Many companies create an audit-compliant archive system on older hardware to save on costs, so that they can keep the cloud solely for performance-hungry productive systems.

Lowering cloud computing costs with the right cloud

Cloud computing has a reputation for reducing costs and being able to adapt to the latest requirements. That’s not wrong, but it really applies only if the services have been designed, planned and implemented appropriately from the outset. The cloud can also have the wrong dimensions or an inappropriate design – if, for example, you choose a provider or service with a billing model that does not match exactly with your workloads. This can drive costs up through unnecessary traffic and high storage requirements.

Compare the billing models of the clouds

Among the undoubted advantages of the cloud are its on-demand capability and scalability. Resources such as storage, computing power and networks can be scaled precisely to demand and used temporarily, to set up a development environment for example. The cloud capacity – or the computing power – does not necessarily have to increase with the number of machines or systems in the company. Cloud costs are often calculated from the peak load or the specific services used. Billing models vary significantly from one provider to another. It’s important to find not only the right solution, but also the right model.

If the load and the payment model are incompatible, cloud computing costs can become surprisingly high. Some providers bill according to storage space, others by traffic, others again by computing load or the time for which a service is used. For example, heavy traffic between individual nodes, edge systems or to CDNs can soon become expensive, depending on the billing model, and wipe out any cloud cost savings. This is the case if large volumes of data are moved between nodes for scaling.
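How much the billing model matters can be seen in a minimal sketch. All prices and usage figures below are invented for illustration; they are not real provider tariffs.

```python
# Hypothetical comparison of two billing models for the same workload.
# Prices and usage figures are illustrative assumptions, not real tariffs.

def monthly_cost_by_storage(storage_gb, price_per_gb=0.02):
    """Provider A: bills purely on the volume of data stored."""
    return storage_gb * price_per_gb

def monthly_cost_by_traffic(egress_gb, price_per_gb=0.09):
    """Provider B: storage is cheap, but outbound traffic is billed."""
    return egress_gb * price_per_gb

storage_gb = 5_000    # data held in the cloud
egress_gb = 12_000    # data moved between nodes / to CDNs each month

print(f"Storage-based model: {monthly_cost_by_storage(storage_gb):.2f} EUR")
print(f"Traffic-based model: {monthly_cost_by_traffic(egress_gb):.2f} EUR")
```

For a traffic-heavy workload, the second model is roughly ten times as expensive here, even though both providers host exactly the same data – which is why the model has to be matched to the workload, not just to the stored volume.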

Important questions relating to cloud costs

Billing model

Does the billing model suit the planned workloads and architecture?

Distribution of workloads and data

What has to move to the cloud, and what is more cost-efficient elsewhere?


Service Level Agreements

Can the Service Level Agreements be customized?

Insight and overview

Do I have the right expertise or support for planning, implementation and operation?

Cloud computing costs: Finding the right architecture

Not all computing work has to be carried out in the cloud. As with the example of archive systems that do not require a high-performance cloud, in-house services can cut cloud computing costs. Edge and fog computing from the field of IoT are good examples of this. Here the data is first processed locally before being sent to the central cloud systems, which reduces expensive, permanent data streams and computing requirements. Acting as a filter, this pre-processing reduces the data load and helps to keep cloud costs down.
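The idea of edge pre-processing can be sketched in a few lines. The sensor readings, threshold and summary fields below are all invented for illustration; the point is only that an edge node forwards a compact summary plus anomalies instead of the full stream.

```python
# Illustrative sketch of edge pre-processing: instead of streaming every
# sensor reading to the cloud, an edge node forwards only an aggregate
# summary and the out-of-range readings. All values here are invented.

def preprocess(readings, threshold=80.0):
    """Return a compact summary plus only the readings above the threshold."""
    anomalies = [r for r in readings if r > threshold]
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }
    return summary, anomalies

readings = [21.5, 22.0, 21.8, 95.3, 22.1, 21.9]  # one spike among normal values
summary, anomalies = preprocess(readings)
print(summary)    # a handful of numbers instead of the full stream
print(anomalies)  # only the spike is worth sending upstream
```

Instead of six (or in practice millions of) raw values, only the summary and the single anomaly travel to the cloud, which is exactly where the traffic and storage savings come from.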

In addition to scalability, ready-made platform services and APIs are among the biggest advantages of the cloud. At first glance, hyperscalers offer suitable platform services for every component of an application, offering a home for databases, computing instances and storage, for example. However convenient and fast this may be, companies can become dependent on the provider’s environment for their applications or rely on expensive services with many functions that are not required for the intended purpose.

Cloud services such as pluscloud open, which are based on open source technology, do not tie companies in to proprietary cloud technology. This is a way round the so-called vendor lock-in. As a cloud provider, plusserver also offers comprehensive advice on choosing the right architecture. This may be a multi-cloud, to cope with the variety of different workloads in the company.

Optimizing cloud costs: The right way to negotiate SLAs

Anyone who operates their business model in the cloud expects their provider to keep it available at all times. To guarantee this, legally binding Service Level Agreements (SLAs) are put in place. Such contracts regulate the scope and obligations of the cloud services. Typically, they cover things like availability of the cloud, performance, response time in the event of a fault and compensation if the provider breaches the agreement. The calculation seems quite straightforward at first sight: the greater the requirements set out in the SLAs, the more expensive they are.

Customized vs. standard SLAs

But there are two aspects that are often overlooked. The first relates to the provider itself: while local cloud providers usually negotiate SLAs individually, based on mutual agreement, hyperscalers often use non-negotiable standard SLAs. This can mean, for example, that the provider does not have to pay any compensation for breaches of the agreement, even if there is a serious failure.

The second aspect relates to the specific use case. Companies should define for themselves how strictly the guarantees in SLAs need to be set. Let’s take the example of availability: the minimum standard in cloud computing is 99.9 percent, calculated over a year. That corresponds to a downtime of around nine hours per year; with an availability of 99.99 percent, downtime shrinks to around 53 minutes. The magic line comes in at 99.999 percent. This is referred to as “high availability” – with only about five minutes of downtime per year. Depending on the use case, however, maximum uptime does not always have to be guaranteed at all costs.
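The downtime figures follow directly from the availability percentage. A quick sketch, assuming a 365-day year (real SLAs may define the measurement period differently):

```python
# Yearly downtime implied by common availability guarantees.
# Assumes a 365-day year; actual SLAs may measure differently.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in (99.9, 99.99, 99.999):
    downtime_min = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{availability:7.3f}% -> {downtime_min:7.1f} minutes/year "
          f"(~{downtime_min / 60:.1f} hours)")
```

This yields roughly 526 minutes (about nine hours) at 99.9 percent, about 53 minutes at 99.99 percent, and about five minutes at 99.999 percent.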

Weigh up where high availability is really necessary

What might come highly recommended for particularly critical systems can incur unnecessary costs in other cases. For example, an internal tool like an analytics application can be run with a higher downtime tolerance than customer-facing applications such as an online shop. The same usually applies to archive systems and other data stores that are not always required in productive operation. To cover yourself, it is always advisable to sketch out your own SLAs first and then compare them with those of the provider. That not only avoids frustration over high cloud computing costs, but also shows the company itself what really matters to it in the context of the project.

Understanding cloud computing costs and remaining flexible

Without experience, maintaining an overview of the cost structures is a real challenge. The fact that price models are constantly changing just adds to the difficulty. The cloud market is always moving, and providers push different cloud models. But what happens if the cloud model changes? Even if companies have their own in-house cloud experts, that does not mean they will also keep track of the commercial processes. That such considerations are not just theoretical is demonstrated by the case of a well-known hyperscaler: in 2020 alone, it changed its billing system twice. If you are not up to speed with all this, new bills can be impenetrable.

Without the necessary scope for flexibility, you can quickly find yourself tied to a provider (vendor lock-in). The main way round this is a sound multi-cloud strategy and the use of tools such as Kubernetes, which allow you to shift workloads, data and applications between individual clouds programmatically. As with energy providers, the rule should be: if the cost-effectiveness does not meet expectations, decision-makers should be able to switch quickly and easily.

Partners can help with cloud costs

It can also be helpful to have a partner who can advise you. They will be familiar with changing conditions, understand the terminology and cloud models and therefore be able not only to make recommendations for cost savings, but also to deal with the contract management. The multi-cloud data service provider plusserver has even introduced a multi-cloud tariff. This not only provides planning reliability through fixed budgets, but also retains flexibility of choice and allows you to switch between clouds.

A partner should not only be involved in the cloud migration, but ideally be experienced and offer support with the continuous optimization of the long-term costs of cloud computing too. With relevant project experience, cloud partners can assess requirements better and make sound recommendations. Cost drivers in the cloud can be identified together and averted in this way, allowing cloud projects to continue to be successful beyond the trial phase.

We at plusserver will be happy to advise you on any questions you have about cloud computing costs. Simply get in touch with us.
