While operating only in the cloud offers seamless and seemingly infinite scalability, maintaining critical data and workloads on-premises or in a private cloud remains the preferred option for many companies in terms of security and compliance. High investment costs and long amortization periods for on-prem hardware and infrastructure have resulted in hybrid set-ups where non-critical data and workloads are farmed out to the cloud to free up internal capacity for critical tasks. However, such piecemeal hybrid scenarios offer no solution when more capacity is suddenly needed on-prem or in the private cloud. A surge in demand on the company website as a result of a marketing campaign, for example, or big data analytics on internally stored data, will either be throttled by the lack of internal capacity or need to burst its banks. The answer is Cloud Bursting – the dynamic scaling (or “bursting”) of IT resources from on-prem to the cloud for temporary peaks in data traffic or compute.
Cloud Bursting, a use case of Hybrid Cloud, differentiates itself from standard hybrid scenarios in several important ways. Firstly, it will not function effectively without careful planning of network capacities, and secondly, it entails several challenges relating to latency, interoperability, and cloud egress costs which need to be overcome. However, properly implemented as an on-demand practice, Cloud Bursting pays dividends through significant cost reductions, because you only pay for the additional resources you actually use, and these are only used at peak times. The alternatives are either to invest in on-prem infrastructure dimensioned for the largest peaks a company might experience (a very expensive undertaking) or to keep all resources in the cloud and scale there when necessary – something that not all companies are willing to do.
Cloud Bursting can be set up to take place both manually and automatically, depending on the use case. Take an e-commerce platform or webshop as an example: the burst could be implemented manually in advance of marketing campaigns which are expected to lead to increased demand. It can also be automated so that as soon as a certain threshold is exceeded, it triggers the bursting application to move the workload to the cloud. As soon as traffic normalizes, the additional resources in the cloud can be decommissioned either manually or using automation, and workloads can return to their standard on-prem or private cloud environment.
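The automated variant described above can be reduced to a simple control loop. The sketch below is purely illustrative: the thresholds, the `decide` function, and the action names are assumptions, standing in for whatever cloud provider API or orchestrator a real deployment would call.

```python
# Minimal sketch of an automated bursting trigger (illustrative only).
# The thresholds and action names are assumptions -- a real setup would
# invoke a cloud provider API or an orchestrator instead of returning strings.

CPU_BURST_THRESHOLD = 0.80   # burst once sustained utilization exceeds 80%
CPU_NORMAL_THRESHOLD = 0.50  # decommission cloud capacity below 50%

def decide(utilization: float, bursting: bool) -> str:
    """Return the action to take given current on-prem utilization."""
    if not bursting and utilization > CPU_BURST_THRESHOLD:
        return "burst"        # threshold exceeded: move overflow to the cloud
    if bursting and utilization < CPU_NORMAL_THRESHOLD:
        return "scale_down"   # traffic normalized: return workload on-prem
    return "hold"             # no change needed
```

The hysteresis gap between the two thresholds prevents the system from flapping between bursting and scaling down when load hovers around a single cut-off value.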
Cloud Bursting Myth No. 1 – you can’t connect to the cloud in low enough latency
In the case of a webshop, it is also possible to separate the web interface from the application logic and database, and only burst the less business-critical tiers to the cloud. However, this poses the challenge of ensuring good application performance, as the application needs to access the on-prem data in real time and thus requires low-latency connectivity. But is it even possible to achieve low latency when connecting to the public cloud?
How companies connect to their chosen clouds is unfortunately often neglected in the formulation of a cloud strategy, but for Cloud Bursting it is an essential component of infrastructure planning and design. Because Cloud Bursting demands low-latency and high-bandwidth connectivity, the path that companies very often take to the cloud – over the public Internet – is woefully insufficient for the task. The best way to ensure low-latency connectivity is to connect the company infrastructure directly – and with sufficiently dimensioned network capacity – to the clouds in question using the private connectivity solution that each cloud service provider offers. This can be implemented via a connectivity solution provider, but if more than one cloud is being used, an interconnection platform or Cloud Exchange is likely to be a more efficient option. The result is dedicated cloud connectivity with guaranteed capacity, which also ensures the security of the data being transferred. Ensuring the bandwidth can be scaled as required means appropriately dimensioning not only the network capacity but also any hardware and connectors between the two networks.
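Dimensioning the link can be sanity-checked with back-of-the-envelope arithmetic: how much bandwidth is needed to move a given data volume within a given window? The figures below are illustrative examples, not recommendations.

```python
# Back-of-the-envelope link dimensioning (illustrative figures).

def required_bandwidth_gbps(volume_gb: float, window_hours: float) -> float:
    """Bandwidth in Gbit/s needed to move volume_gb within window_hours."""
    gbits = volume_gb * 8            # GB -> Gbit
    seconds = window_hours * 3600
    return gbits / seconds

# Example: bursting 1 TB (1000 GB) of data to the cloud within one hour
# needs roughly 2.2 Gbit/s of sustained throughput:
print(round(required_bandwidth_gbps(1000, 1), 1))  # -> 2.2
```

A calculation like this quickly shows why a shared Internet uplink is rarely enough, and why the connectors and hardware along the path must be dimensioned for the same sustained rate as the link itself.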
With low-latency and high bandwidth connectivity in place, other use cases for Cloud Bursting can come into play. These include data analytics on sensitive (live) data stored on-prem. The analytics will generally require the extra computing resources of the cloud, and demand real-time access to company data. Given that such analytics are not generally carried out continuously but rather sporadically throughout the year, using resources on-demand significantly reduces the outlay in comparison to maintaining such cloud resources permanently available. Similarly, the operationalized training of AI models can be implemented on a monthly or quarterly basis, making use of ever-growing data volumes from the on-prem infrastructure. The tactic can also be used for the occasional stress-testing of software with real-life traffic volumes or loads before roll-out.
Cloud Bursting Myth No. 2 – cloud egress fees make cloud bursting costly
There are some Cloud Bursting use cases where it would make sense to burst entire workloads to the cloud. The advantage is that workloads can operate independently of the infrastructure environment. However, moving them can require enormous bandwidth. For example, workloads can be burst to the cloud as a partial data center failover in the event of an outage, bringing the offline site into the cloud until normal operations can be resumed. Any new transactional data collected by applications or any state changes occurring during the burst then need to be reintegrated into the master workloads when operations return to normal.
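The reintegration step at the end of such a failover can be pictured as a state merge. The sketch below is a deliberately simplified illustration: real systems would rely on database replication or change-data-capture rather than in-memory merging, and the order data is invented for the example.

```python
# Minimal sketch of reintegrating burst-period state after failback
# (illustrative only; real systems would use database replication or
# change-data-capture rather than an in-memory merge).

def reintegrate(master: dict, burst_changes: dict) -> dict:
    """Apply state changes collected in the cloud during the burst back
    onto the master state; burst values win on conflict as they are newer."""
    merged = dict(master)
    merged.update(burst_changes)
    return merged

orders_on_prem = {"o1": "shipped", "o2": "pending"}
orders_in_cloud = {"o2": "shipped", "o3": "pending"}  # collected during the burst
print(reintegrate(orders_on_prem, orders_in_cloud))
# -> {'o1': 'shipped', 'o2': 'shipped', 'o3': 'pending'}
```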
Migrating data to the cloud is relatively straightforward. However, taking data back out of the cloud when you retrieve the results or decommission the cloud resources can incur painful cloud egress fees. The same applies to bursting outbound-heavy applications that send far more data to customers than they receive, especially if that data needs to traverse the public Internet.
For such scenarios, the cloud provider’s private connectivity solution again comes into play, because the pricing for cloud egress is much lower over direct connectivity services than over the public Internet. So much so that once more than 25 Mbit/s of traffic is being exchanged, the private connectivity service pays for itself.
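A break-even claim like this can be checked with simple arithmetic. All prices in the sketch below are illustrative assumptions, not any provider’s actual rates: Internet egress is typically billed per GB, while a private connectivity port typically carries a flat monthly fee plus a lower per-GB rate.

```python
# Rough break-even comparison of egress pricing.
# All figures are illustrative assumptions, not real provider rates.

INTERNET_EGRESS_PER_GB = 0.09   # USD/GB over the public Internet (assumed)
DIRECT_EGRESS_PER_GB   = 0.02   # USD/GB over private connectivity (assumed)
DIRECT_PORT_MONTHLY    = 200.0  # USD/month flat port fee (assumed)

def monthly_gb_at(mbit_per_s: float) -> float:
    """Data volume in GB for an average rate sustained over a 30-day month."""
    return mbit_per_s / 8 * 3600 * 24 * 30 / 1000  # Mbit/s -> MB/s -> GB/month

def cheaper_option(avg_mbit_per_s: float) -> str:
    gb = monthly_gb_at(avg_mbit_per_s)
    internet_cost = gb * INTERNET_EGRESS_PER_GB
    direct_cost = DIRECT_PORT_MONTHLY + gb * DIRECT_EGRESS_PER_GB
    return "direct" if direct_cost < internet_cost else "internet"
```

Under these assumed rates, a sustained 25 Mbit/s works out to roughly 8 100 GB per month, at which point the flat port fee is comfortably outweighed by the lower per-GB rate; at very low volumes, the public Internet remains cheaper.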
Cloud Bursting Myth No. 3 – there’s no interoperability between private and public clouds
The direct connectivity is, however, only part of the solution to making Cloud Bursting a success. The uncomfortable truth is that clouds were not designed to interact with each other – and regardless of whether we’re talking about a private cloud or other on-prem infrastructure, they cannot easily interoperate with the public clouds. Some cloud service providers also offer private clouds with built-in interoperability with their own public cloud; however, taking this option locks the company in with that provider. Another option is to add a layer of translation between the different infrastructures in the form of a cloud router. This can be a separate device installed at the interface between the company infrastructure and the private connectivity service, or it can be a software-based cloud routing service, as offered by some Cloud Exchanges. This ensures interoperability at the network level, allowing a seamless transition between the on-prem infrastructure and the public cloud infrastructure – a prerequisite for successful Cloud Bursting. In choosing a cloud router or cloud routing service, however, it is important to check whether it can be scaled to the bandwidth capacities potentially required – otherwise it may become a bottleneck that negatively impacts the latency of the connection.
Conclusion
Cloud bursting enables the dynamic scaling of IT resources from on-prem servers to the public cloud for temporary peaks in traffic, user numbers, or compute demand. For it to be successful, careful planning of network capacity is needed, and the myths of high latency, a lack of interoperability, and crippling cloud egress costs need busting.
A key benefit of cloud bursting is that organizations only pay for the additional resources that are actually used during peak periods. The high latency often feared when connecting to the public cloud can be mitigated by using the private connectivity solutions offered by the various cloud service providers, and these can be accessed directly, for example, via a cloud exchange. Cloud egress costs, the charges for taking data back out of the cloud, are also significantly reduced through such private connectivity solutions. In addition, a cloud routing service eliminates the challenge of interoperability between private and public clouds at the network layer.
In summary, cloud bursting is a valuable solution for organizations that need flexible resource scaling for peak loads without exploding the budget. It enables optimal use of cloud resources as needed, and access to sensitive locally stored data in accordance with security and compliance policies. It simplifies the management of sudden increases in customer demand, the stress-testing of new software before rollout, the periodic analysis of sensitive data, and the operationalized training of Machine Learning models. Overcoming the myths around cloud bursting opens the door for organizations to reap the rewards of the cloud – as long as they prioritize appropriately dimensioned direct and private connectivity solutions, shielded from the public Internet.