In-memory computing on hardware-supported large shared-memory systems

The overall objective of this master's project is to investigate different parallel programming paradigms and techniques for efficient use of hardware-supported, distributed shared-memory systems.

Figure: the general concept of distributed shared memory (source: Wikipedia).

Introduction

Large-scale parallel computers are nowadays built as clusters of stand-alone machines, where each machine consists of multiple processors with common access to a tightly coupled shared memory space. However, the shared memory spaces belonging to different stand-alone machines are connected only through a relatively slow interconnect, in effect giving the system as a whole a distributed memory layout. Writing parallel programs for a distributed-memory system is possible, but it requires complex programming techniques and attention to many intricate details. On the other hand, if a parallel system has an overall shared-memory layout (supported by either software or hardware), the parallel programming task is dramatically simplified. It can, however, be a challenge to achieve good running efficiency with such shared-memory based programs.

The project

The candidate is expected to carry out a detailed study of in-memory computing that makes use of hardware-supported distributed shared-memory systems. (One such system is hosted at UiO.) The target types of computation include high-performance scientific simulation and graph-based data processing. The former has a relatively structured pattern of memory access and inter-processor communication (behind the scenes), whereas the latter can have a very irregular pattern with respect to both memory access and communication. In addition to existing benchmark programs, the candidate will develop new micro-benchmark programs that specifically test the "data bridge" between the stand-alone machines. Different shared-memory programming standards (such as OpenMP, Pthreads, and Intel TBB) will also be examined with respect to such in-memory computations. Additional research topics include data partitioning and task scheduling. Performance profiling/diagnostic tools will be used to obtain valuable insights.

Learning outcome

The candidate will become fluent in advanced shared-memory programming. The candidate will also become an expert in using hardware-supported large shared-memory systems for in-memory computing and data processing, which are expected to be important technical ingredients for handling tomorrow's big-data challenges. Another learning outcome is expertise in using modern performance profiling/diagnostic tools.

Qualifications required

The candidate is expected to be skilful in technical programming (prior experience with parallel programming is preferred but not required). Very important: the candidate must be hard-working and eager to learn new skills and knowledge.

Keywords: Distributed shared memory, in-memory computing, parallel programming
Published Sep. 21, 2018 2:09 PM - Last modified Sep. 21, 2018 2:16 PM

Supervisor(s)

Scope (ECTS credits)

60