Code_Aster and MPI parallelism on Windows

The Code_Aster for Windows binary package available for free on our download page does not include MPI parallelism. For those interested in performance, we build a premium version with MPI parallelism and MUMPS 5.1.1, which aims to better exploit modern multi-core processors.

We present here a performance comparison and the time gained by increasing the number of processes allocated to a run. We also show how to run a parallel case from the AsterStudy module.

MPI Performances, linear static example

Here, testcase perf009 is used to evaluate performance with Code_Aster version 13.4. The mesh contains 261520 nodes and the mechanical model has 803352 degrees of freedom. We study the resolution time with MUMPS during the linear static computation, which involves a single iteration.
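For reference, MUMPS is selected in a Code_Aster command file through the SOLVEUR keyword. The following is only a minimal sketch, not the actual perf009 command file; the mesh, model and load names are hypothetical:

    # Minimal sketch of a linear static resolution solved with MUMPS
    # (hypothetical object names, not the perf009 command file)
    RESU = MECA_STATIQUE(
        MODELE=model,                # mechanical model built beforehand
        CHAM_MATER=mater,            # material field assigned to the mesh
        EXCIT=_F(CHARGE=load),       # boundary conditions and loading
        SOLVEUR=_F(
            METHODE='MUMPS',         # direct parallel solver
            # MATR_DISTRIBUEE='OUI', # optionally distribute the matrix over MPI processes
        ),
    )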

Code_Aster mesh for performance case perf009
Code_Aster result for performance case perf009

Code_Aster performance comparison with MUMPS and MPI parallelism on Linux and Windows

Since resolution is the most time-consuming stage, a similar gain is expected on industrial non-linear studies with multiple iterations. We evaluate the resolution time on a Core 2 Quad Q6600, with the number of MPI processes (np) set to 1, 2 and 4; the number of processes can be increased further if your CPU configuration allows it. We also compare results across platforms.

As expected, on both platforms the resolution time decreases as the number of processes increases. Better performance is obtained on native Linux, but our native Code_Aster for Windows performs better than Code_Aster running in a Linux virtual machine on a Windows host.

Memory consumption is, of course, the same on all platforms, and it has no side effect on elapsed time as long as enough RAM is allocated to the virtual machine to avoid swapping.

Note: at the time of writing, Code_Aster is officially distributed without MPI parallelism. Even on Linux, a parallel version must be built by yourself or provided by a third party.

Running a parallel case in AsterStudy on Windows

Code_Aster for Windows premium has been validated in the same way as the Code_Aster community version available for free on our download page. In addition, the premium version can run the following parallel MUMPS testcases (a command-line way to launch one of them is sketched after the list):

mumps01b mumps02b mumps04b sdll11j
ssll106g ssll117h sslp100d ssnl101b
ssnl125b ssnp124e ssnp147a ssnp152b
ssnp153a ssnp15f ssns115b ssnv104l
ssnv104n ssnv126e ssnv129f ssnv173l
ssnv503g supv003b supv003e wtnv100d
zzzz159f zzzz265b zzzz307b zzzz337a
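As an aside, on a standard Linux installation of Code_Aster a single testcase can usually be launched from the command line with the as_run tool; the exact invocation may differ on the Windows package:

    as_run --test mumps01b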

Once the Code_Aster community version has been replaced by the premium version in a Salome-Meca Windows installation, these parallel tests can be run graphically in the AsterStudy module. On the History View tab, select Operations -> Import a testcase from the menu.

Import a parallel MPI testcase with MUMPS in Salome-Meca AsterStudy module

As an example, let’s fill in the line edit with “mumps01b” and click the OK button.

Choose a parallel MPI testcase with MUMPS in Salome-Meca AsterStudy module

Before running, the Advanced tab allows setting “Number of MPI CPU”; it is set to 2 for this case.

Choose the number of processes for a parallel MPI case with MUMPS in Salome-Meca AsterStudy module

AsterStudy then launches Code_Aster for Windows premium; we simply wait for the job to finish.

Code_Aster for Windows premium running a parallel MPI case with MUMPS via the Salome-Meca AsterStudy module

After the run succeeds, the export and message files show that the number of MPI processes used to run this case is 2.
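In the export file generated by AsterStudy, the parallel settings appear as plain parameters; an excerpt would look roughly like this (the rest of the file, paths in particular, is omitted):

    P mpi_nbcpu 2
    P mpi_nbnoeud 1

The mpi_nbcpu value corresponds to the “Number of MPI CPU” field set earlier in the Advanced tab.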

Message and export files showing the number of MPI processes used by the Code_Aster for Windows premium run in the Salome-Meca AsterStudy module

Conclusion

The Code_Aster for Windows premium version focuses on performance and time gain, especially if you have large studies to run and a capable CPU configuration. Parallel runs are easy to use and manage graphically, since the export file is automatically generated at launch by AsterStudy.
