Simr Blog

How Cloud-based Simulations Increase Computational Speed and Reliability [ABB Case Study]

Written by UberCloud | May 14, 2019 5:32:44 PM

Commonly used in commercial, utility and industrial applications, dry-type transformers are both easier to install and more environmentally friendly than liquid-filled transformers. Since they don’t contain oil, dry-type transformers require no fire-proof vaults, catch basins or venting of toxic gases. They can also be placed indoors, closer to loads, making entire electrical systems more efficient.


Dry-type transformers are frequently used in data centers, industrial plants and other critical facilities, where uptime is essential. This means that engineers must balance system reliability with cost. Because dry-type transformers are larger than liquid-filled ones, it is a high priority for transformer manufacturers to design smaller dry-type transformers to reduce materials costs, while still delivering sufficient dielectric insulation and cooling capacity.


OpenFOAM Modelling and Product Optimization of Dry-type Transformers in the Cloud

For this case study, UberCloud partnered with ABB and Microsoft Azure. The research team used the open-source CFD package OpenFOAM to simulate the heat transfer of a dry-type transformer unit with varying dimensions. This approach allowed the team to evaluate and compare temperature increases and optimize the transformer design for thermal performance. They had two primary goals for the project:


  • Greater computation speed: Design optimizations require multiple simulation cases, which often become a bottleneck when computation speed is insufficient (a rough sketch of such a parametric sweep follows this list). For special cases, time-transient models may be necessary, which require even more computational effort.
  • Repeatability: Many project teams are less experienced, so removing technical complexity is essential to ensure repeatable success in high-performance technical computing. The team sought to create a user-friendly HPC environment where end users would not need to know how the on-demand compute nodes were provisioned or managed.
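
A design optimization like this usually comes down to running the same case many times with different geometric parameters. As a rough illustration only (the case layout, the solver choice and every file name below are assumptions, not details published in the study), such a sweep over transformer dimensions might be scripted along these lines:

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical scale factors for the transformer geometry (1.0 = baseline design).
SCALE_FACTORS = [1.0, 0.95, 0.90, 0.85]

TEMPLATE_CASE = Path("transformerCase.template")   # assumed template OpenFOAM case
SOLVER = "chtMultiRegionSimpleFoam"                # assumed conjugate-heat-transfer solver

for scale in SCALE_FACTORS:
    case_dir = Path(f"transformerCase_scale_{scale:.2f}")
    if case_dir.exists():
        shutil.rmtree(case_dir)
    shutil.copytree(TEMPLATE_CASE, case_dir)

    # Record the geometry scale where an (assumed) meshing step can pick it up.
    (case_dir / "geometryScale").write_text(f"{scale}\n")

    # Mesh and run the case; blockMesh is standard OpenFOAM, the solver name is an assumption.
    subprocess.run(["blockMesh", "-case", str(case_dir)], check=True)
    subprocess.run([SOLVER, "-case", str(case_dir)], check=True)

    print(f"finished case {case_dir}")
```

Each finished case can then be post-processed to extract the temperature rise, so the candidate designs can be compared on thermal performance.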


The CFD model was built in 3D. Although only one quarter of the geometry was modelled (taking advantage of its symmetry), the mesh still contained millions of computational cells. A cloud-based computational platform therefore offered a practical way to refine the mesh further while still speeding up the entire evaluation cycle.

The figure above illustrates the two simulated cases. The right-hand case has smaller dimensions than the left-hand one. The blue parts are the iron core and the red parts are the high-voltage coils.
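
To give a feel for why the mesh drives the hardware requirement (the numbers below are illustrative, not the project's actual cell counts): modelling only a quarter of the geometry divides the cell count by roughly four, while each level of uniform mesh refinement multiplies it by roughly eight.

```python
# Illustrative back-of-the-envelope mesh sizing; none of these numbers come from the ABB study.
full_geometry_cells = 8_000_000          # assumed cell count for the full transformer geometry
quarter_cells = full_geometry_cells / 4  # quarter model thanks to the two symmetry planes

for refinement_level in range(3):
    cells = quarter_cells * 8 ** refinement_level  # uniform refinement ~doubles resolution per axis
    print(f"refinement level {refinement_level}: {cells:,.0f} cells")
```

A single refinement step can therefore push a workstation-sized mesh well into cluster territory, which is exactly where on-demand cloud nodes pay off.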


The team chose UberCloud’s OpenFOAM containers to complete the simulations. These containers offer several benefits:


  • The independent software vendor (ISV) or open-source tools are pre-installed, configured and tested. They are ready to execute in an instant with no need for complex OS commands. End users can access tools directly through their browser and a familiar desktop screen.
  • UberCloud container technology gives engineers a wide selection of tools because the containers are portable from server to server and cloud to cloud. Cloud operators and IT departments no longer need to limit that selection, since they no longer have to install, tune or maintain the underlying software on each new system.
  • This technology also provides hardware abstraction: the container is not tightly coupled to the server, and the container and the software inside it are not installed on the server in the traditional sense. This abstraction between the hardware and software stacks provides an ease of use and agility that bare-metal environments often lack (a minimal launch sketch follows this list).
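
In practice, running a containerized solver typically comes down to a single command that mounts the case directory into the container. The sketch below is only an illustration of that idea: the image name and case path are placeholders, not UberCloud's actual image or interface.

```python
import subprocess
from pathlib import Path

case_dir = Path("transformerCase_scale_0.90").resolve()  # hypothetical case from the sweep above
image = "example/openfoam:latest"                         # placeholder image name

# Mount the case into the container and run the solver inside it; the host needs
# nothing installed beyond the container runtime itself.
subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", f"{case_dir}:/case",
        "-w", "/case",
        image,
        "chtMultiRegionSimpleFoam",  # assumed solver, as in the sweep sketch above
    ],
    check=True,
)
```

Because the solver, its libraries and its dependencies live inside the image, the same command works unchanged on a workstation, an on-premise cluster node or a cloud VM.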


The research team found that the cloud-based technology offered significantly higher computational speed, making parametric studies and optimization of transformer designs much faster.


The team also enjoyed an HPC environment free of technical complexity. While this is often achieved by bringing in consultants, here the setup was automated and launched with a single command for all ten compute nodes. End users could bring data in, run their workload, visualize results and finally move data back to their workstations, with no training, help pages or installation manuals. The entire process was thus repeatable: any OpenFOAM user could request the same infrastructure and be using it within hours.
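
As a rough sketch of what that end-user loop can look like when scripted (the host name, paths and the remote launch command are all hypothetical, not UberCloud's actual interface):

```python
import subprocess

REMOTE = "user@hpc-frontend.example.com"   # hypothetical cloud login node
REMOTE_DIR = "/scratch/transformer_study"  # hypothetical remote working directory
LOCAL_CASE = "transformerCase_scale_0.90"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Bring the data in: copy the prepared case to the cloud system.
run(["scp", "-r", LOCAL_CASE, f"{REMOTE}:{REMOTE_DIR}/"])

# 2. Run the workload remotely; "launch_case" stands in for whatever single
#    launch command the environment provides and is purely illustrative.
run(["ssh", REMOTE, f"cd {REMOTE_DIR}/{LOCAL_CASE} && launch_case"])

# 3. Move the results back to the workstation for local post-processing.
run(["scp", "-r", f"{REMOTE}:{REMOTE_DIR}/{LOCAL_CASE}/postProcessing", LOCAL_CASE])
```

The point of the case study is that this loop stays identical from one run, and one user, to the next, which is what makes the workflow repeatable.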