Dolphin PCI Express Architecture

The Dolphin Express architecture uses PCI Express over cable to connect multiple computers in a switched network. Through the device lending paradigm, remote hardware can be exposed on local machines below the operating system. For all intents and purposes, this hardware becomes local hardware.

This makes it possible to develop an entirely new file system paradigm, where computers do not use high-level protocols to instruct a server to manipulate files and file structures, but where every computer can temporarily control the disk subsystem as if it were locally attached.

Dolphin IX PCI Express Adapter

The Dolphin Express IX is based on a Generation 2 PCI Express non-transparent bridge and provides 40 Gbit/s of bandwidth over a standard external PCIe cable, with application end-to-end latencies as low as 0.74 microseconds. Data transfers can be performed either with Direct Memory Access (DMA) or Programmed IO (PIO). Several APIs are available for these cards. One of them is Dolphin Smart IO, which is based on the Device Lending idea. Device Lending exploits the direct PCIe connection between computers and makes it possible for hardware physically located in one computer to appear to another computer as locally attached.
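
To make the PIO mode concrete, the sketch below maps a device BAR through Linux sysfs and lets the CPU issue the reads and writes itself. It is a generic Linux illustration, not the Dolphin API: the PCI address, register offsets, and values are hypothetical placeholders, and the sketch assumes that a lent device shows up like any local PCIe device.

    /* Minimal PIO sketch (Linux; hypothetical device address and registers).
     * A borrowed device is assumed to appear like a local PCIe device,
     * so its BAR can be mapped the same way as a local one. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char *bar0 = "/sys/bus/pci/devices/0000:05:00.0/resource0";
        int fd = open(bar0, O_RDWR | O_SYNC);
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* Programmed IO: the CPU itself generates the PCIe read and write
         * transactions; no DMA engine is involved. */
        uint32_t status = regs[0];   /* read 32-bit register at offset 0x0 */
        regs[1] = 0x1;               /* write 32-bit register at offset 0x4 */
        printf("status register: 0x%08x\n", status);

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }

With DMA, the adapter's DMA engine moves the data instead of the CPU, which typically pays off for larger transfers, while PIO gives the lowest latency for small accesses.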

This opens up entirely new possibilities for remote operations.

One of these is a new concept for shared file systems. Rather than deploying a file server on the computer to which the disks are physically attached, and a network file system client on all other computers, we can use the Device Lending idea to let the disks appear locally attached to several machines for a period of time.

The protocol overhead is thereby moved from a per-command overhead to a one-time overhead for virtually attaching the physically remote disk.
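
As a back-of-the-envelope illustration of that trade-off, the toy calculation below compares the two models; every number is an assumed placeholder for illustration, not a measurement.

    /* Toy overhead comparison: per-command protocol cost vs. one-time attach
     * cost. All values are hypothetical placeholders, not measurements. */
    #include <stdio.h>

    int main(void)
    {
        double commands      = 1e6;    /* commands issued while the disk is mapped */
        double proto_per_cmd = 50e-6;  /* assumed per-command protocol cost, file server model (s) */
        double attach_once   = 10e-3;  /* assumed one-time cost of virtually attaching the disk (s) */
        double local_per_cmd = 1e-6;   /* assumed per-command cost once the disk appears local (s) */

        printf("file server model : %.2f s protocol overhead\n",
               commands * proto_per_cmd);
        printf("device lending    : %.2f s protocol overhead\n",
               attach_once + commands * local_per_cmd);
        return 0;
    }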

This thesis will look at two issues:

  • the complication of storing metadata for the file system on all computers that map a disk, and the synchronisation and cache coherency demands between these file systems (see the sketch after this list)
  • a file system that arranges disk blocks in a manner that is sensible for sharing across PCIe links in the proposed manner
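
As one possible starting point for the first issue, the sketch below shows a hypothetical on-disk superblock with a generation counter that each mapping host can use to detect stale cached metadata. Every name and field here is an assumption made for illustration, not part of an existing design.

    /* Hypothetical on-disk superblock for a disk that several hosts map
     * concurrently. A generation counter is one simple way to detect that
     * a host's cached copy of the metadata has gone stale. */
    #include <stdint.h>

    #define SHFS_MAGIC 0x53484653u   /* "SHFS", hypothetical file system magic */

    struct shfs_super {
        uint32_t magic;        /* identifies the (hypothetical) file system */
        uint32_t block_size;   /* bytes per block */
        uint64_t generation;   /* bumped on every metadata update */
        uint64_t owner_id;     /* host currently allowed to modify metadata */
        uint64_t root_block;   /* block number of the root directory */
    };

    /* A host re-reads the superblock from the disk and compares generations;
     * if they differ, its in-memory metadata cache must be invalidated. */
    static int shfs_cache_is_stale(const struct shfs_super *cached,
                                   const struct shfs_super *on_disk)
    {
        return cached->generation != on_disk->generation;
    }

A real design would additionally need a mutual exclusion mechanism, for example a lock block or reservation on the disk, before a host may update the metadata and bump the generation.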

To remove some complications, we will look exclusively at NVMe disks, which are SSDs that are directly connected to the PCIe bus.
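
For reference, NVMe commands are 64-byte submission queue entries that the host writes into queues in its own memory and that the controller fetches over PCIe. The simplified C layout below sketches that format; the field grouping follows the NVMe specification, but the struct name and the comments on the read example are illustrative.

    /* Simplified sketch of a 64-byte NVMe submission queue entry. */
    #include <stdint.h>

    struct nvme_sqe {
        uint8_t  opcode;      /* e.g. 0x02 = Read, 0x01 = Write (NVM command set) */
        uint8_t  flags;       /* fused operation and PRP/SGL selection bits */
        uint16_t command_id;  /* echoed back in the completion entry */
        uint32_t nsid;        /* namespace identifier */
        uint32_t cdw2;
        uint32_t cdw3;
        uint64_t mptr;        /* metadata pointer */
        uint64_t prp1;        /* data pointer: first PRP entry */
        uint64_t prp2;        /* data pointer: second PRP entry or PRP list */
        uint32_t cdw10;       /* command-specific, e.g. starting LBA (low) for Read */
        uint32_t cdw11;       /* starting LBA (high) for Read */
        uint32_t cdw12;       /* bits 15:0 = number of blocks - 1 for Read */
        uint32_t cdw13;
        uint32_t cdw14;
        uint32_t cdw15;
    };

Because the queues live in host memory and the controller is driven purely through PCIe memory transactions (doorbell writes and DMA), any host that can reach the controller over the PCIe fabric can in principle drive it, which makes NVMe a natural fit for device lending.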

  • Knowledge outcome: Optimization, performance analysis
  • Knowledge required: Low-level programming (C/C++), operating systems