Graphical Visualization of Computational Simulations Using Shared Memory

The Shared Memory technique is a powerful tool for parallelizing computer codes. In particular, it can be used to visualize results "on the fly", without stopping the running simulation. In this presentation we discuss and show how to use the technique in conjunction with a visualization code written in OpenGL.


Introduction
Visualization is a valuable tool for understanding aspects of simulations [1][2][3][4]. In many cases we want to see the results of a simulation on the fly, so that we can decide whether the code is correct or whether the chosen set of parameters is adequate for the current simulation. In practice, on-the-fly visualization can require more computational effort than the simulation itself. As an alternative, one can dedicate the computational effort for visualization to a separate program, in such a way that the visualization can be turned on or off at will. This approach is considerably advantageous, since processing time is spent only when the visualization is active.
Parallelizing the code is the most efficient way to proceed in this case. There are many ways to parallelize a code; here we emphasize two of them: the Message Passing Interface (MPI) and Shared Memory [5][6][7][8].
In the MPI technique, a copy of the code is placed in the memory of each CPU, and messages are exchanged between CPUs (over a network connection) to determine what each CPU must do. Initially, MPI and its library implementations (OpenMPI, MPICH, etc.) were the most widely used way to implement parallelization in computer programs.
In Shared Memory, different parts of the program use the same memory area to exchange data. Although different in concept, Shared Memory parallelization has become practical with the recent development of more powerful multi-core processors, found both in current personal computers and in workstations, some of which can have up to 64 cores. It is worth emphasizing that algorithms and their programming implementation using Shared Memory are easier than with MPI. A schematic view of MPI and Shared Memory is displayed in Fig. (1).
Shared Memory is one of the simplest methods of interprocess communication (IPC): it allows two or more processes to exchange data by accessing the same area in memory. In addition, this communication goes through the computer's memory bus, making it the fastest way to parallelize tasks while avoiding unnecessary copying of data. As an example, we present in the next section a standard algorithm to implement Shared Memory in a computational simulation using Molecular Dynamics. In what follows, all the applications displayed use graphical visualization in OpenGL for a molecular dynamics simulation of a magnetic liquid.

Shared Memory
To use Shared Memory we first need to allocate a memory segment. The C (or C++) function that allocates this memory is called shmget ("SHared Memory GET"). Its first parameter is an integer key that identifies the segment to be created; all processes can access the same memory segment by specifying this key. The second parameter specifies the size, in bytes, of the memory segment. The third parameter holds flag values that select several options of the shmget function.

Next we need to make the Shared Memory segment available by using shmat ("SHared Memory ATtach"). This function takes as its first argument the identifier shmid returned by shmget. The second argument is a pointer that specifies where in the address space of our process we want to map the Shared Memory; it is easiest to let the operating system decide, which is done by passing NULL. The third argument is a flag.

For pedagogical purposes, we employ this technique to visualize a Molecular Dynamics (MD) simulation. To make a real-time visualization we need to put all the functions that "create", "attach", "detach" and "destroy" Shared Memory segments into the simulation code. Typically we use these functions to create three double-precision arrays to store the spatial coordinates x, y and z, and three double-precision arrays to store the velocities Vx, Vy and Vz. Fig. (2) shows how the MD simulation and the visualization work together through Shared Memory.
In the visualization program we use the same code as before, but in the third parameter of shmat we use the read-only flag (SHM_RDONLY) to prevent the visualization program from changing any value in the arrays of positions or velocities. To visualize the MD simulation we use the freeglut library with OpenGL [10]. In Fig. (3) we display some configurations after the equilibration of the system for temperatures varying from T = 0.01 to T = 1.0. Here we use reduced units [9], with the temperature given in units of ε/k_B ≈ 120 K and distances in units of σ, where ε and σ are the Lennard-Jones parameters and k_B is the Boltzmann constant.
In a system with a considerable number of particles it is important to optimize the computational effort. In a multi-core architecture the operating system does the tough task of distributing the jobs among the cores. This does not prevent us from using other parallel schemes, such as multithreading, but that is outside the scope of this work. With the help of the Shared Memory scheme we are able to switch on any visualization we want to see, separately from the computation of the MD simulation. As mentioned before, we only need to enumerate the Shared Memory segments for each calculation in both the MD simulation and the OpenGL visualization programs, without any interference between them.
To illustrate the application, we plot radial distribution functions for the visualized configurations in Fig. (4). The formation of clusters can be seen as the temperature is decreased. In Fig. (3), however, we can see not only the clusters but also the structures formed, in real time. The advantage of visualizing the simulations becomes even more apparent in systems with magnetic properties, such as magnetic fluids [9]: characteristics intrinsic to these systems, like domain walls and vortex patterns, are better captured by visualization.

Conclusions
We have presented the Shared Memory technique for visualizing simulations "on the fly". This technique for parallelizing the computation and the visualization of simulations became accessible with the development of multi-core technology, as well as the expansion of RAM. With the help of the OpenGL library, simulations of particle systems can be easily visualized. Consequently, the study of physical phenomena such as cluster formation, domain walls and vortex patterns in magnetic systems gains another dimension. This integration between science and visualization is of paramount importance, both as an efficient debugging aid and as a tool for understanding the simulated physical model.