InfiniBand

InfiniBand is an input/output (I/O) architecture and high-performance interconnect specification for data transmission between processors and storage, designed for high speed, low latency and high scalability.

InfiniBand uses a switched fabric network topology, in which devices are interconnected through one or more network switches. The overall throughput of this topology exceeds that of popular broadcast media such as shared Ethernet.

Per-link signaling speeds currently reach around 40 Gbit/s, and links can be aggregated to provide higher speeds for supercomputer interconnects.

InfiniBand was created in 1999 as the merger of two competing standards, Future I/O and Next Generation I/O. It has since become a very popular interconnect for high-performance computing systems.

InfiniBand’s connectivity model is derived from the mainframe computing domain, where dedicated channels are used to connect and transmit data between the mainframe and its peripherals. InfiniBand implements point-to-point, bidirectional serial links, which can be aggregated in units of 4 (4X) and 12 (12X) to achieve combined useful data throughput rates of up to 300 gigabits per second, using a maximum packet size of 4 KB throughout.
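
As a rough illustration of how per-lane signaling rate, line encoding and 4X/12X aggregation combine into useful data rates like the 300 Gbit/s figure above, the following sketch multiplies the published per-lane rates of a few InfiniBand generations (SDR, QDR, EDR) by their encoding efficiency and link width. The specific rate values in the table are assumptions drawn from public specifications, not from this article.

```c
#include <stdio.h>

/* Illustrative per-lane signaling rates (Gbit/s) and encoding efficiency
 * for a few InfiniBand generations; values come from public specifications
 * and may not cover every product variant. */
struct ib_rate {
    const char *name;
    double lane_gbps;     /* raw signaling rate per lane */
    double encoding_eff;  /* usable fraction after line encoding */
};

int main(void)
{
    const struct ib_rate rates[] = {
        { "SDR", 2.5,      8.0 / 10.0 },   /* 8b/10b encoding */
        { "QDR", 10.0,     8.0 / 10.0 },   /* 8b/10b encoding */
        { "EDR", 25.78125, 64.0 / 66.0 },  /* 64b/66b encoding */
    };
    const int widths[] = { 1, 4, 12 };     /* 1X, 4X and 12X link widths */

    for (size_t i = 0; i < sizeof(rates) / sizeof(rates[0]); i++) {
        for (size_t j = 0; j < sizeof(widths) / sizeof(widths[0]); j++) {
            /* useful throughput = lane rate * encoding efficiency * lanes */
            double useful = rates[i].lane_gbps * rates[i].encoding_eff * widths[j];
            printf("%s %2dX: ~%6.1f Gbit/s useful data rate\n",
                   rates[i].name, widths[j], useful);
        }
    }
    return 0;
}
```

For example, an EDR 12X link works out to roughly 25 Gbit/s of useful data per lane across 12 lanes, which is where the 300 Gbit/s aggregate figure comes from.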

The OpenFabrics Alliance develops the de facto standard software stack for InfiniBand, released as the OpenFabrics Enterprise Distribution (OFED), which has been adopted by most UNIX, Linux and Windows InfiniBand vendors.
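
The user-space entry point to that stack is the verbs library (libibverbs). As a minimal sketch, assuming a system with OFED or rdma-core installed, the following C program enumerates the InfiniBand devices visible to the verbs layer; it can typically be built with `gcc list_devices.c -o list_devices -libverbs`.

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;

    /* Ask libibverbs for all RDMA-capable devices on this host. */
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    printf("Found %d device(s)\n", num_devices);
    for (int i = 0; i < num_devices; i++)
        printf("  %s\n", ibv_get_device_name(devices[i]));

    ibv_free_device_list(devices);
    return 0;
}
```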
