Artificial intelligence (AI) is already an integral part of our daily lives. And as engineers, we see diverse applications for AI that are dramatically changing multiple industries:
- The advent of autonomous vehicles has shifted the focus of the automotive industry. The AI technologies required for autonomous vehicles are largely ready for testing, though they still face legislative impediments.
- The healthcare industry has also embraced AI. Imagine a robot that could access not only a patient’s entire medical history, but also the latest medical research, in the blink of an eye. IBM’s Watson has already been trained to assist oncologists in the diagnosis and treatment of lung cancer.
- Across industries, AI-driven technologies like natural language processing (NLP), image processing and data structuring have supported innovation, and sophisticated data analytics are relevant in virtually every domain.
Machine learning is among the AI technologies with the greatest promise and relevance to CAE. Machine learning uses a set of training data to “teach” computer programs to do something they’re not explicitly programmed to do. In other words, the machine’s algorithms actually learn as they go, gaining analytical and predictive abilities.
However, machine learning takes time and massive amounts of data. Artificial neural networks (ANNs) are making machine learning more effective. Modeled loosely on the human brain, an ANN is a network of nodes, analogous to neurons, connected much like synapses; the strength of each connection reflects the correlation between the information the connected nodes carry.
Making an ANN more capable generally means adding more layers of nodes, which increases the network's representational power. The layers of nodes between the input and output layers are known as hidden layers, and it is these hidden layers that make deep learning possible.
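To make the idea of layers concrete, here is a minimal sketch of a feed-forward network written in plain NumPy. It is not taken from the case study; the layer sizes, random weights, and ReLU activation are arbitrary placeholders used only to show how inputs pass through hidden layers of nodes to produce an output.

```python
# Minimal sketch of a feed-forward neural network with two hidden layers.
# All values are illustrative placeholders, not anything from the case study.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Layer sizes: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
sizes = [4, 8, 8, 2]

# Random weights and biases stand in for the values learned during training.
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Propagate an input vector through every layer of the network."""
    activation = x
    for i, (w, b) in enumerate(zip(weights, biases)):
        z = activation @ w + b
        # Hidden layers use a nonlinearity; the output layer stays linear.
        activation = relu(z) if i < len(weights) - 1 else z
    return activation

print(forward(rng.standard_normal(4)))
```

In a real application, the weights and biases would be adjusted during training so that the network's outputs match known examples; here they are random, since the point is only the layered structure.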
Deep Learning for Steady-State Fluid Flow Prediction in the Advania Data Centers Cloud
In a recent case study, researchers applied deep learning to the complex task of computational fluid dynamics (CFD) simulations. Solving fluid flow problems using CFD demands not only extensive compute resources, but also time for running long simulations. Artificial neural networks (ANNs) can learn complex dependencies between high-dimensional variables, which makes them an appealing technology for researchers who take a data-driven approach to CFD.
In this case study, researchers applied an ANN to predict fluid flow given only the shape of the object to be simulated. The goal of the study was to use the ANN to solve fluid flow problems with a significantly decreased time to solution (on the order of 1,000 times faster), while maintaining accuracy comparable to a traditional CFD solver.
The figure above illustrates the difference between the ground truth flow field (left image) and the predicted flow field (right image) for one exemplary simulation sample after 300,000 training steps.
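The case study does not publish its network code, but to illustrate how a network can map a rasterized design to a flow field, the sketch below defines a small convolutional encoder-decoder in TensorFlow/Keras. The grid resolution, layer sizes, and mean-squared-error loss are assumptions chosen for illustration, not details taken from the study.

```python
# Hypothetical sketch of a geometry-to-flow-field network (not the study's model).
# Input: a binary occupancy grid of the 2D shape; output: two velocity components
# (u, v) sampled on the same regular grid.
import tensorflow as tf
from tensorflow.keras import layers

GRID = 128  # assumed resolution of the resampled flow field

def build_model():
    inputs = tf.keras.Input(shape=(GRID, GRID, 1))  # shape mask
    x = layers.Conv2D(32, 4, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2DTranspose(2, 4, strides=2, padding="same")(x)  # u, v fields
    model = tf.keras.Model(inputs, outputs)
    # Mean squared error against the CFD ground truth is a common choice.
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_model()
model.summary()
# Training would look roughly like:
# model.fit(shape_masks, flow_fields, batch_size=32, epochs=...)
```

Once trained on enough simulation samples, a model of this kind can return a predicted flow field in milliseconds, which is where the large speedup over running a full CFD solve comes from.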
Creating a large number of simulation samples is paramount to let the ANN learn the dependencies between a simulated design and the flow field around it. Cloud computing is an excellent source of the additional resources needed to create these simulation samples, in a fraction of the time they would take on a state-of-the-art desktop workstation. German-based Renumics GmbH partnered with UberCloud to explore whether using an UberCloud software container in the cloud to create simulation samples would improve the overall accuracy of the ANN.
Researchers used the open-source CFD code OpenFOAM to perform the CFD simulations. Automatically creating the simulation samples took four steps (a rough sketch of how they might be scripted follows the list):
- Random two-dimensional shapes were created. They had to be sufficiently diverse to let the neural network learn the dependencies between different kinds of shapes and their surrounding flow fields.
- Shapes were meshed and added to an OpenFOAM simulation case template. This template was simulated using the steady-state solver simpleFoam.
- Simulation results were post-processed using open-source visualization tool ParaView. The flow fields were resampled on a rectangular regular grid to simplify information processing for the neural net.
- Both the simulated design and the flow fields were fed into the neural network’s input queue. After training, the neural network was able to infer a flow field merely from seeing the design to be simulated.
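The report does not include its automation scripts; the sketch below shows one way the four steps could be orchestrated from Python by shelling out to standard OpenFOAM utilities and ParaView's pvbatch. The directory layout and the helper scripts generate_shape.py and resample.py are hypothetical placeholders, not files from the study.

```python
# Hypothetical orchestration of the sample-generation loop (not the study's code).
# Assumes an OpenFOAM case template in ./template, a shape-generation script,
# and a ParaView batch script; all names here are illustrative placeholders.
import shutil
import subprocess
from pathlib import Path

TEMPLATE = Path("template")   # OpenFOAM simpleFoam case template
N_SAMPLES = 1000              # scaled up to tens of thousands in the cloud

def run(cmd, cwd):
    subprocess.run(cmd, cwd=cwd, check=True)

for i in range(N_SAMPLES):
    case = Path(f"samples/case_{i:05d}")
    shutil.copytree(TEMPLATE, case)

    # 1. Create a random 2D shape (generate_shape.py is a placeholder script).
    run(["python", "generate_shape.py", "--out", "constant/triSurface/shape.stl"], case)

    # 2. Mesh the shape into the case template.
    run(["blockMesh"], case)
    run(["snappyHexMesh", "-overwrite"], case)

    # 3. Run the steady-state solver.
    run(["simpleFoam"], case)

    # 4. Resample the flow field onto a regular grid with ParaView's batch tool
    #    (resample.py is a placeholder pvbatch script).
    run(["pvbatch", "resample.py", "--case", ".", "--out", "flow.npy"], case)
```

Because every sample is an independent case directory, a loop like this parallelizes naturally across cloud nodes, which is exactly what made the large training sets in this study practical.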
The figure above illustrates the steps for building the deep-learning workflow.
The research team's results confirmed a mantra among machine learning engineers, namely that more data is better:
- By the proposed accuracy metrics, the neural network's predictions improved as the number of training samples increased.
- Using a large number of high-performance cloud computing nodes (working in parallel on the many samples) effectively compensated for the overhead required to create high volumes of additional samples.
- Compared to a state-of-the-art desktop workstation, the cloud-based approach was six times faster, creating tens of thousands of necessary samples in hours instead of days.
The team also concluded that training more complex models (e.g., for transient 3D flows) will require much more data; software platforms for training-data generation and management, as well as flexible compute infrastructure, will become increasingly important.