Whole Sun: Untangling the complex physical mechanisms behind our eruptive magnetic star and its twins

How does the Sun work? Why does it possess a magnetic cycle, dark spots and a dynamic hot atmosphere? These questions remain mostly unanswered. In the "Whole Sun" project, we aim to tackle them as a coherent whole for the first time.


Simulation of the solar dynamo that generates the magnetic field (left, simulation by the group in Saclay) and a simulation of the outer solar atmosphere (right, simulation by the group in Oslo) where the emerging field interacts with pre-existing field creating explosions and jets.

About the project

For too many years, the Sun has been split into internal and external solar physics topics, and a globally integrated view of its complex plasma dynamics has been lacking. For instance, dynamo simulations seeking to explain the origin of the magnetic field and its cyclic behaviour neglect surface physics and the existence of sunspots; likewise, surface models often take the magnetic field as a given input, without detailed knowledge of the nonlinear interplay between convection, rotation and magnetic fields in the Sun's outer envelope.

The time has come to gather world-leading European solar/stellar physicists to build a deeper understanding of our star and to extend it to its twins. To do so, many bottlenecks must be addressed: highly disparate spatial and temporal scales, the physical interfaces between all solar layers, complex microphysics and global effects, strong dynamics, and the question of how parameters such as a star's metallicity, mass and rotation influence the outcome.

By gathering physicists from each side of the solar surface, we aim to tackle these challenging, beyond-state-of-the-art problems by developing a deep theoretical understanding of our star and its analogues, and by building the most advanced multi-resolution solar code, in order to jointly address the global/macrophysics and local/microphysics aspects of solar dynamics. The advent of exascale computers brings such a challenge within our reach, as do modern analysis methods to interpret observations and the 4-D data cubes that the project will produce or access.


The objective is to determine, over the next six years, how the magnetic field is generated inside the Sun, and how it creates sunspots at its surface and eruptions in its highly stratified atmosphere.

Competences and tools

Map showing the work-flow and vision behind the WHOLE SUN project.

The four PIs of the WHOLE SUN project are:

  • Sacha Brun (Commissariat à l'Énergie Atomique et aux Énergies Alternatives, Saclay, France),
  • Laurent Gizon (Max Planck Institute for Solar System Research, Göttingen, Germany),
  • Vasilis Archontis (University of St Andrews, UK),
  • Mats Carlsson (University of Oslo, Norway).

In addition, Fernando Moreno-Insertis is leading the work by the associated institution Instituto de Astrofísica de Canarias (IAC), Spain.

The team in Saclay are experts on dynamo simulations (how the solar magnetic field is generated); the team in Göttingen are experts on using helioseismology to obtain information about the solar interior; the team in St Andrews specialises in the study of how magnetic flux emerges through the surface. The team at IAC has complementary competences in both observational solar physics and modelling. Finally, the members of RoCS are experts on the outer solar atmosphere (chromosphere and corona).

Global MHD simulations of the convective region of the Sun

Part of the ultimate goal of the Whole Sun ERC Synergy project is to develop a code capable of carrying out global simulations of the Sun. Here at RoCS we do that within the DISPATCH framework (Nordlund et al., 2018). We exploit the main benefits of DISPATCH, namely local time steps, local MPI communication, and overlapping Cartesian patches, to set up and run simulations covering an unprecedented range of scales.
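The pay-off of local time steps can be illustrated with a toy scheduler (a hypothetical sketch, not actual DISPATCH code): the task furthest behind in time is always updated next, so patches with short time steps are simply updated more often, instead of every patch being forced down to the smallest global time step.

```python
import heapq

def run_local_timesteps(patch_dts, t_end):
    """Advance each patch with its own dt; return per-patch update counts."""
    # priority queue keyed on each patch's current time
    queue = [(0.0, pid) for pid in range(len(patch_dts))]
    heapq.heapify(queue)
    updates = [0] * len(patch_dts)
    while queue:
        t, pid = heapq.heappop(queue)      # the patch furthest behind in time
        updates[pid] += 1                  # "update" the patch (MHD step would go here)
        if t + patch_dts[pid] < t_end:
            heapq.heappush(queue, (t + patch_dts[pid], pid))
    return updates

# A patch with dt = 0.25 is updated four times as often as one with dt = 1.0.
counts = run_local_timesteps([0.25, 1.0], t_end=10.0)  # [40, 10]
```

With a global time step, both patches would need 40 updates each; the saving grows with the spread in time steps across the mesh.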

Unique mesh decomposition

To simulate something in a sphere, one can use a number of different approaches. Most common are the use of a spherical mesh, or of a so-called "star in a box". The former has problems with singularities at the poles, while the latter has difficulty maintaining an exactly spherical hydrostatic equilibrium.

To solve both problems, while at the same time benefiting from a Cartesian arrangement for better vectorisation and cache coherency, we use a unique "Volleyball" decomposition, illustrated in Figure 1. As on a volleyball, the surface is divided into six identical faces. The faces are covered with small Cartesian patches, arranged along constant (local) latitudes. In practice, each such patch is of order one or a few megametres across at the surface, while deeper layers are constructed with gradually increasing patch size. Each patch is then subdivided into, for example, 24³ cells, making up the individual, loosely coupled MHD tasks to update. The updates can in principle be performed with any MHD solver, but in order to minimise artificial numerical diffusion of entropy we use a newly developed Riemann solver (HLLS), analogous to the common HLLD solver but based on entropy rather than total energy.

Figure 1. Volleyball decomposition. Left - low resolution single layer of the mesh; Right - zoom in at the seams of the Volleyball, showing three of the six faces.
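The six-face split can be illustrated with a simple dominant-axis assignment, in the spirit of cube-sphere grids (an illustrative sketch only; the actual Volleyball decomposition differs in how patches are laid out on each face):

```python
# Illustrative sketch: assign a point on the sphere to one of six faces
# by its dominant Cartesian axis (cube-sphere style). The Volleyball
# decomposition then covers each face with Cartesian patches arranged
# along constant local latitudes; this shows only the six-face split.
def face_of(x, y, z):
    """Return one of six face labels: +x, -x, +y, -y, +z or -z."""
    axis, value = max(zip("xyz", (x, y, z)), key=lambda p: abs(p[1]))
    return ("+" if value >= 0 else "-") + axis

face_of(0.9, 0.1, -0.2)   # '+x': x dominates
face_of(0.1, -0.2, -0.9)  # '-z': z dominates, on the negative side
```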

As illustrated by Figure 2, the Volleyball arrangement results in neighbouring patches being displaced, and slightly tilted, relative to each other. Guard zone values may nevertheless be obtained by interpolation (in time and space) from neighbouring patches, and the simulation can proceed with each patch choosing a time step limited only by its internal velocities and fast mode speeds.
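The per-patch time-step choice is essentially a local CFL condition. A minimal sketch (the function name, Courant number and input values are illustrative, not taken from the code):

```python
def local_dt(dx, max_velocity, max_fast_speed, courant=0.3):
    """Time step limited only by the patch's own cell size and signal speeds."""
    return courant * dx / (max_velocity + max_fast_speed)

# Illustrative values: a 30 km cell with 4 km/s flows and an 8 km/s
# fast mode speed gives a time step of 0.75 s for this patch; another
# patch with different internal conditions gets a different dt.
dt = local_dt(dx=30e3, max_velocity=4e3, max_fast_speed=8e3)  # 0.75
```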


Figure 2: Close-up of patches, showing how patches are arranged in rows parallel to local latitudes (vertical in the figure), causing a slight displacement between such rows, while subsequent depth layers are in general displaced more arbitrarily.

The simulations

The setup currently spans 0.655–0.995 R, while in the future we will also be able to include the solar surface. We start from hydrostatic equilibrium and let it relax to steady-state convection. This relaxation is performed with approximately 200,000 patches, with a highest resolution near the surface of approximately 500 km. After the relaxation is complete, we refine several upper layers of the atmosphere (Figure 3) by adding three levels of factor-2 refinement, reaching about 75 km resolution near the surface; this is only a factor of about 2–3 away from a resolution sufficient to include the solar photosphere. This increases the number of patches in the simulation to about 4.5 million, containing about 70 billion cells.
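These counts are consistent with 24³ cells per patch; a quick back-of-the-envelope check (numbers taken from the text above):

```python
# Sanity check of the quoted cell counts, assuming 24^3 cells per patch.
cells_per_patch = 24 ** 3                    # 13,824 cells per patch
n_patches = 4_500_000                        # refined production run
total_cells = n_patches * cells_per_patch    # ~6.2e10, i.e. "about 70 billion"
refinement_factor = 2 ** 3                   # three levels of factor-2 refinement
```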

Figure 3. An example of mesh refinement close to the surface. Here, two additional levels are added.

Supercomputers used, and perspectives

The relaxation runs are carried out on Norway's largest supercomputer, Betzy, as well as on LUMI in Finland, the world's third largest supercomputer. On LUMI-C we ran the relaxation simulation on 192 nodes (each node contains 128 cores, so 24,576 cores in total). At this size, in a 48-hour allocation period the relaxation simulation evolves 48.3 hours of solar time, so we can run in real solar time. The ideal allocation for the refined production run would be about 910 nodes (116,480 cores).
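The quoted core counts follow directly from the node counts (128 cores per LUMI-C node, as stated above):

```python
# Core counts for the two run sizes mentioned in the text.
cores_per_node = 128
relax_cores = 192 * cores_per_node    # 24,576 cores, relaxation run
prod_cores = 910 * cores_per_node     # 116,480 cores, ideal production run
```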

Utilising a zoom-in technique (regridding/refining in space and time) similar to that commonly used in simulations of star formation (e.g. Padoan et al. 2018–22), we will be able to use these simulations to, for example, model solar active region behaviour over a range of scales, without having to impose artificial initial and boundary conditions.

Figure 4: Entropy after 40 hours of solar time in the relaxation run.


Figure 5: Magnetic field in the simulation is still undergoing relaxation.

Project period

Start - finish: 2019 - 2024


This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 - EXCELLENT SCIENCE (Grant agreement ID: 810218).

Published Nov. 14, 2019 12:05 PM - Last modified Nov. 15, 2022 1:45 PM