From InfiniBand to Ethernet
Jan 02

Two interesting pieces of news came out this week: Intel acquired the InfiniBand assets of QLogic, and IBM and NEC are leveraging OpenFlow for high-performance networking.

QLogic SilverStorm IB switches

Intel started the InfiniBand (IB) standard in 1999 but withdrew from the market in 2002. After that, Mellanox took over leadership of the InfiniBand industry and went public in 2007. Since then, the IB industry has gone through significant consolidation. QLogic, before selling its IB assets to Intel, was the only player besides Mellanox in this market.

IB has long been the preferred interconnect technology for HPC (High Performance Computing). In 2011, IB was the interconnect on 41.8% of the systems in the HPC Top500 list, just behind Gigabit Ethernet (44.8%).

The main reasons HPC prefers IB are latency and scalability. The latency advantage comes from two technical strengths: RDMA support in the IB adapter cards and the higher speed of IB switches. The scalability advantage of IB mainly comes from enabling multipathing through source routing.

In the last 18 months, 10GE Ethernet has been ramping up quickly and 40GE Ethernet switches are almost ready for deployment. We can foresee that the switch latency of IB will no longer be a significant advantage. On the scalability side, OpenFlow can enable pretty much the same source routing and multipathing on Ethernet switches, just as the IBM and NEC announcement demonstrates. With faster Ethernet switches and higher scalability enabled by OpenFlow, I don't think IB will retain any advantage on the switch side.
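To make the multipathing point concrete, here is a minimal sketch in Python of how a controller could hash each flow onto one of several equal-cost paths and install a per-hop forwarding rule along the chosen path, which is essentially what IB achieves with source routing. The topology, switch names, ports, and rule format are made up for illustration and do not correspond to any real controller API:

```python
import hashlib

# Illustrative topology: three equal-cost paths between two racks, each path a
# list of (switch, output_port) hops. Names and ports are invented for the sketch.
PATHS = [
    [("leaf1", 49), ("spine1", 12), ("leaf2", 3)],
    [("leaf1", 50), ("spine2", 12), ("leaf2", 3)],
    [("leaf1", 51), ("spine3", 12), ("leaf2", 3)],
]

def pick_path(src_ip, dst_ip, src_port, dst_port):
    """Hash the 4-tuple so every packet of a flow follows the same path,
    while different flows spread across all equal-cost paths."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    index = int(hashlib.sha256(key).hexdigest(), 16) % len(PATHS)
    return PATHS[index]

def flow_rules(src_ip, dst_ip, src_port, dst_port):
    """Return one exact-match rule per switch along the chosen path.
    A real OpenFlow controller would push these as flow-mod messages."""
    match = {"nw_src": src_ip, "nw_dst": dst_ip,
             "tp_src": src_port, "tp_dst": dst_port}
    return [(switch, match, {"output": port})
            for switch, port in pick_path(src_ip, dst_ip, src_port, dst_port)]

if __name__ == "__main__":
    for switch, match, action in flow_rules("10.0.1.5", "10.0.2.7", 40000, 5001):
        print(switch, match, action)
```

Because the path is pinned per flow rather than per packet, packets stay in order, and the spread of flows across the fabric is what gives the multipath scalability.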

In that case, the only advantage IB has left is the RDMA support in the IB adapters. Even though it requires special programming to use these RDMA features, the HPC industry apparently doesn't mind the work. Many Ethernet NIC vendors have tried to replicate the same architecture over Ethernet, such as iWARP, but none of them has gained the popularity of IB. Performance-wise, however, iWARP (OFED/10GE) is not that far behind.
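For readers unfamiliar with that "special programming", the toy sketch below illustrates the verbs-style workflow the paragraph refers to: register a buffer, post a one-sided write, and poll a completion queue instead of blocking in the kernel. The class and method names here are invented for illustration and do not correspond to any real RDMA or iWARP library:

```python
# Toy mock of the verbs-style RDMA workflow; all names are hypothetical.

class MockQueuePair:
    """Stands in for an RDMA queue pair: work requests go in, completions come out."""
    def __init__(self):
        self.completions = []

    def post_rdma_write(self, local_buf, remote_addr, remote_key):
        # One-sided write: in a real stack the NIC moves local_buf into the
        # remote host's registered memory without involving the remote CPU.
        self.completions.append(("RDMA_WRITE", len(local_buf), remote_addr, remote_key))

    def poll_completion(self):
        # The application polls a completion queue in user space rather than
        # waiting on a kernel socket call.
        return self.completions.pop(0) if self.completions else None

# Typical flow: register memory, exchange keys out of band, post writes, poll.
registered_buffer = bytearray(b"payload" * 128)   # would be pinned/registered for real
qp = MockQueuePair()
qp.post_rdma_write(registered_buffer, remote_addr=0x7f000000, remote_key=0x1234)
print(qp.poll_completion())
```

The point is that the application manages buffers and completions explicitly, which is why porting MPI stacks between IB, iWARP, and plain sockets is non-trivial work.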

With 10GE and 40GE prices dropping fast, I believe it is just a matter of time before HPC migrates from IB to 10GE or 40GE Ethernet. That is probably why Mellanox has supported both IB and Ethernet on its adapter cards since 2009, and, maybe, that is why Intel acquired the IB assets from QLogic.

Will we see an MPI/RDMA stack optimized for Intel's 10GE NICs? I bet we will, and soon.

