Fighting the communication overhead in hybrid CPU-GPU computing

Tomorrow's supercomputers are projected to have a hybrid architecture. They will be clusters of fat nodes, each consisting of several sockets of multicore CPUs plus one or more accelerators (such as general-purpose GPUs). Programming these hybrid supercomputers will be challenging, requiring a combination of programming paradigms. At the same time, one of the obstacles to good performance is the many types of data communication involved: between the nodes, and between the CPUs and the accelerators inside each node.

Hybrid CPU-GPU programming incurs data communication overhead.

This master's project aims to develop a simple-to-use library that helps programmers handle the data communication that arises on a typical GPU-enhanced supercomputer. The usage scenario should cover both single-GPU and multi-GPU nodes. By applying various communication latency hiding techniques, the new library should improve the achievable performance on such cutting-edge hybrid hardware.
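One of the key latency hiding techniques is double buffering: while the current chunk of data is being processed, the transfer of the next chunk is already in flight. Below is a minimal sketch of the idea, written in Python for brevity (a real implementation would use, e.g., cudaMemcpyAsync on separate CUDA streams, or nonblocking MPI calls); all function names here are illustrative, not part of any existing library:

```python
# Sketch of communication/computation overlap via double buffering.
# A worker thread stands in for the asynchronous copy engine.
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_chunk(i):
    """Stand-in for a host-to-device transfer of chunk i."""
    time.sleep(0.01)          # simulated transfer latency
    return list(range(i * 4, (i + 1) * 4))

def compute(chunk):
    """Stand-in for a GPU kernel over one chunk."""
    time.sleep(0.01)          # simulated compute time
    return sum(chunk)

def pipelined_sum(n_chunks):
    total = 0
    with ThreadPoolExecutor(max_workers=1) as pool:
        next_chunk = pool.submit(fetch_chunk, 0)  # prefetch the first chunk
        for i in range(n_chunks):
            chunk = next_chunk.result()           # wait for the current chunk
            if i + 1 < n_chunks:                  # start the next transfer early,
                next_chunk = pool.submit(fetch_chunk, i + 1)
            total += compute(chunk)               # ...overlapping this compute
    return total

print(pipelined_sum(8))  # same result as a serial version: sum(range(32))
```

Because the fetch of chunk i+1 runs on the worker thread while chunk i is being computed, the transfer latency is hidden behind useful work instead of adding to the total runtime.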

What you will do:

  • Test out various techniques for hiding communication latency in the context of hybrid CPU-GPU computing
  • Develop a simple-to-use library for data communication that typically arises in scientific code
  • Verify the efficiency of the developed library by applying it to existing test cases
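A typical communication pattern such a library would need to handle is the halo (ghost-cell) exchange of stencil codes. The following sketch simulates the ranks with plain Python lists instead of real MPI processes, and checks the distributed result against a serial baseline, mirroring the verification step above; all names are illustrative:

```python
# A 1-D array is partitioned across simulated "ranks"; each rank fills
# one ghost cell per side from its neighbours before a stencil update.
# Real code would exchange the ghost cells with MPI_Isend/MPI_Irecv.

def exchange_halos(parts):
    """Fill one ghost cell on each side of every partition."""
    padded = []
    for r, part in enumerate(parts):
        left = parts[r - 1][-1] if r > 0 else 0.0               # from left neighbour
        right = parts[r + 1][0] if r < len(parts) - 1 else 0.0  # from right neighbour
        padded.append([left] + part + [right])
    return padded

def stencil_step(parts):
    """3-point averaging stencil on each partition, using the halos."""
    out = []
    for p in exchange_halos(parts):
        out.append([(p[i - 1] + p[i] + p[i + 1]) / 3.0
                    for i in range(1, len(p) - 1)])
    return out

# The distributed result matches the single-array computation:
data = [float(i) for i in range(8)]
parts = [data[0:4], data[4:8]]
padded = [0.0] + data + [0.0]
serial = [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0 for i in range(1, 9)]
distributed = [x for part in stencil_step(parts) for x in part]
assert distributed == serial
```

In the library, the halo exchange is exactly the step that can be overlapped with the stencil update of the interior points, since the interior does not depend on the incoming ghost cells.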

What you will learn:

  • Parallel programming techniques: MPI, OpenMP and CUDA
  • Communication latency hiding techniques
  • How to use cutting-edge supercomputers
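The central idiom behind MPI-based latency hiding is nonblocking communication: post the transfer (MPI_Isend/MPI_Irecv), perform independent local work, then wait for completion (MPI_Wait). A toy illustration in Python, with a thread pool standing in for the network; nothing here is a real MPI API:

```python
# Nonblocking-communication pattern: post, overlap, wait.
from concurrent.futures import ThreadPoolExecutor
import time

def simulated_recv(value):
    time.sleep(0.05)          # pretend the message is in flight
    return value

with ThreadPoolExecutor(max_workers=1) as pool:
    request = pool.submit(simulated_recv, 42)  # "MPI_Irecv": returns immediately
    local = sum(x * x for x in range(1000))    # independent local work overlaps
    message = request.result()                 # "MPI_Wait": block until arrival

print(message, local)
```

If the local work takes at least as long as the message transfer, the communication cost disappears entirely from the critical path.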

Prerequisites:

  • Good (serial) programming skills
  • Entry-level knowledge of numerical methods
  • A lot of courage and dedication

Keywords: MPI, OpenMP, CUDA programming, latency hiding
Published 3 Oct. 2014 23:12 - Last modified 3 Oct. 2014 23:12