Computing and Software at the Exascale

Computing and software are crucial parts of the LHC physics experiments. The NorduGrid Advanced Resource Connector (ARC) middleware is growing in popularity due to its simple design and ease of deployment. This makes it the preferred choice of middleware for new sites, and for many existing ones, particularly in Europe and Asia. ARC and its Control Tower allow seamless access to heterogeneous resources: Grid, High Performance Computers and Clouds. Moreover, ATLAS@home, based on BOINC and ARC, provides access to opportunistic resources consisting of volunteers' personal computers.

The requirements imposed on software during the coming LHC runs will be as stringent as those on the computing resources. The data throughput that will have to be achieved exceeds anything that our community has managed to date. Such performance can only be attained by combining several techniques, such as multi-threading and parallel processing of events, with novel algorithms and optimization of existing software.
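Because collision events are independent of one another, they are a natural unit for parallel processing. The following is a minimal illustrative sketch (not ATLAS software; the event structure and `reconstruct` step are hypothetical placeholders) of how a pool of workers can process events concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruct(event):
    # Hypothetical per-event reconstruction step: here we simply
    # sum the simulated hit energies. Real reconstruction is far
    # more elaborate, but the key point is that it is independent
    # per event, so events can be dispatched to workers freely.
    return sum(event["hits"])

def process_events(events, workers=4):
    # Map the reconstruction step over all events in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(reconstruct, events))

if __name__ == "__main__":
    events = [{"hits": [float(i), float(i + 1)]} for i in range(8)]
    print(process_events(events))
```

In a production framework the workers would typically be processes or in-process threads sharing read-only conditions data, and CPU-bound Python code would need multiprocessing rather than threads; the sketch only conveys the event-parallel pattern.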

The importance of multi-variate analysis or "Machine Learning" in High Energy Physics continues to increase, for applications as diverse as reconstruction, physics analysis, data quality monitoring and distributed computing.

We propose master's thesis projects on the development of new computing software tools, distributed data management systems, and data models that will address the challenges of the extreme conditions of future LHC runs. This involves various aspects of software and algorithm development, including modern techniques making use of machine learning and anomaly detection.
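As a concrete flavour of the anomaly-detection theme, one might flag unusual values in a monitoring time series, for example data transfer rates in a distributed computing system. Below is a minimal sketch using a simple z-score rule; the data and threshold are illustrative assumptions, and real deployments would use more sophisticated models:

```python
import statistics

def find_anomalies(values, threshold=3.0):
    # Flag points more than `threshold` standard deviations from the
    # mean (a basic z-score rule on the whole sample).
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Example: a steady transfer rate with one large spike (toy data).
rates = [100.0] * 20 + [1500.0] + [100.0] * 20
print(find_anomalies(rates))  # → [20]
```

A thesis project would replace this toy rule with learned models (e.g. autoencoders or clustering) trained on real monitoring or physics data.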

Keywords: NorduGrid, Grid data management, grid computing, LHC, machine learning, middleware
Published 15 March 2021 14:19 - Last modified 15 March 2021 14:19

Supervisor(s)

Scope (ECTS credits)

60