Simr Blog

The 7 Myths of Cloud for Engineers

Written by Wolfgang Gentzsch | Sep 27, 2017 3:52:41 AM

 

Over the past five years we have performed almost 200 cloud experiments together with engineers, running their technical computing applications on different clouds, and publishing 80 case studies. In the early days, each cloud-based simulation experiment took three months on average and had a 50% failure rate. Today, our cloud experiments take just a few days and have a 0% failure rate. What has happened? Five years ago, cloud computing for compute-intensive engineering and scientific applications was in its infancy, and we faced several severe roadblocks. Over the years, we have learned how to reduce or even remove them. And while the roadblocks were real five years ago, many of them have turned into myths with the advent of new technologies, new business models, and the growing acceptance of cloud computing. Let’s have a closer look.



1.     "Clouds are not secure"

This was the number one roadblock for many years, and it is still stuck in the minds of many users. But over the years, cloud providers have integrated sophisticated levels of security to protect their customers’ data and applications. Virtual private networks guarantee a secure link between user and cloud. High performance computing (HPC) workloads in particular often run on dedicated servers that are ‘owned’ by the user for as long as they are rented, avoiding potential multi-tenancy threats. For security reasons, application installations are only carried out by badged experts, and computing resources and storage are safeguarded like Fort Knox. Any cloud provider who caused a security breach would risk being out of business soon afterwards.

2.     "I have no control over my assets in the cloud"

In the early days of cloud computing, you handed over your application and data to the cloud provider without knowing how they handled it or what the status of your compute (batch) jobs was. Today, many cloud providers offer more transparency. And with the advent of software container technology from Docker and UberCloud, additional functionality such as granular usage data collection, logging, monitoring, alerting, reporting, emailing, and interactivity puts the user back in control.

3.     "Software licenses are not ready for the cloud"

Unfortunately, this is still true for some Independent Software Vendors (ISVs), while others are now adding flexible cloud-based licensing models for short-term usage, either as Bring-Your-Own-License (BYOL) or as consumption-based, pay-per-use credits. Still, there are often hurdles that can cause headaches for the user. For example, some ISVs don’t allow existing licenses (often limited to a certain number of cores) to be upgraded to run on a larger number of cores in the cloud. But with increasing pressure from existing customers and from other software and support available in the cloud (such as OpenFOAM for fluid dynamics or Code Aster for structural analysis), ISVs might become more open to serving their customers better in this regard.

4.      "There is no portability among different clouds"

In the early days, on-boarding a cloud was painful. Once done, there was neither the time nor the resources to move to another cloud, even if you had bet on the wrong horse, e.g. because the cloud architecture was not right, your jobs didn’t scale across a large number of cores, and performance went down instead of up. Today, with a healthy competitive landscape of different providers and apps in the cloud, migrating from A to B is mostly straightforward, often with help from provider B. Containerized applications and workflows, in particular, are fully portable among different Linux platforms.

5.      "Data transfer between the cloud and my desktop is slow"

Many applications produce gigabytes of results. Transferring that data from the cloud back to the end user is often limited by the end user’s last-mile network. However, intermediate results in particular can often stay in the cloud; to check, e.g., solution quality, remote visualization sends high-resolution graphics frames back to the user in real time. For the final data sets, there are technologies available that compress and encrypt the data and stream it back to the user. And if all this doesn’t help, e.g. in the case of terabytes of data, overnight FedEx will always work.
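
To put numbers on the last-mile bottleneck, here is a minimal back-of-the-envelope sketch in Python; the data sizes and link speeds are illustrative assumptions, not measurements:

# Ideal transfer times for result data over a last-mile link.
# Sizes and bandwidths below are illustrative assumptions.
def transfer_time_hours(size_gb, bandwidth_mbps):
    """Ideal time to move size_gb gigabytes over a bandwidth_mbps link."""
    size_megabits = size_gb * 8 * 1000        # 1 GB = 8,000 megabits (decimal)
    return size_megabits / bandwidth_mbps / 3600

for size_gb in (10, 100, 1000):               # 10 GB, 100 GB, 1 TB
    for bandwidth_mbps in (20, 100):          # typical last-mile links
        h = transfer_time_hours(size_gb, bandwidth_mbps)
        print(f"{size_gb:5d} GB over {bandwidth_mbps:3d} Mbit/s: {h:7.1f} hours")

At 1 TB over a 20 Mbit/s link, even the ideal transfer takes more than 100 hours, which is why shipping disks overnight can still win.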

6.     " Cloud computing is more expensive than on-premise computing"

Total Cost of Ownership (TCO) studies show that only 10% to 20% of the cost of acquiring and running an HPC system over three years is the cost of the hardware itself. The remaining 80% to 90% is the high cost of expertise, maintenance, training, and electricity. For a $200K 256-core system, this can easily amount to $1 million over three years. Dividing by 3 years, 365 days, 24 hours, and 256 cores results in $0.15 per core per hour for a fully (i.e. 100%) utilized system. In reality, especially in small and medium enterprises, HPC servers often run at less than 50% utilization on average. Thus, the resulting cost is at least $0.30 per core per hour, while powerful HPC cloud cores today cost between $0.05 and $0.10 per core per hour. And while cloud providers refresh their systems every six months, you’re stuck with your on-premise system for at least three years.
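
The arithmetic behind these per-core-hour figures, as a quick sketch (all numbers taken from the paragraph above):

# Per-core-hour cost of an on-premise HPC system, using the figures above.
tco_usd = 1_000_000                   # three-year total cost of ownership
core_hours = 3 * 365 * 24 * 256       # core-hours over three years (~6.7 million)

full = tco_usd / core_hours           # at 100% utilization
half = tco_usd / (core_hours * 0.5)   # at 50% utilization
print(f"100% utilized: ${full:.2f} per core-hour")   # -> $0.15
print(f" 50% utilized: ${half:.2f} per core-hour")   # -> $0.30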

7.      "Cloud-enabling my software can take weeks or even months"

This might still be true for complex software developed in-house over many years by many people. But today, many applications are already available in the cloud, or you can set up your compute environment in the cloud yourself and install the binaries. The good news is that there is now an elegant solution to this hurdle as well: software containers from Docker and UberCloud. The UberCloud containers are especially well suited for engineering and scientific workloads. They come with dozens of additional HPC layers: parallel computing (MPI and OpenMP), remote visualization, InfiniBand, secure communication, single-tenant ownership, license servers, NFS, log monitoring, and more. All of this runs on any Linux system: package once, run anywhere, available at your fingertips, within a second, in any private or public cloud.

 



Despite the continuous effort of lowering and even removing these 7 hurdles on our way to the cloud, we still haven’t reached the final goal: the availability of computing as a utility, similar to water, gas, electricity, and telephony. But a number of trends make me optimistic: digital natives are entering the business world; new and open source software keeps appearing; and the spectrum of affordable, cloud-enabled software, on demand and pay per use, keeps growing. Over time, this will put increasing pressure on conservative market forces and build growing support for customer- and user-friendly business models for mainstream cloud computing.

Finally, I am very interested in hearing about your major hurdle(s) in moving to the cloud; perhaps we can find a solution together?