InfiniBand
InfiniBand, also called System I/O, is a point-to-point bidirectional serial link that, in storage area networks, connects computers to fabric switches. In addition, it has been used as an interconnect inside computer chassis. Modern InfiniBand specifications, however, also specify software functionality for routing and for end-to-end protocols.
It supports signaling rates of 10, 20, and 40 Gbps, and, as with PCI Express, links can be run in parallel for additional bandwidth. For physical transmission, it supports board-level connections, active and passive copper cabling (up to 30 meters, depending on speed), and fiber-optic cabling (up to 10 km).[1]
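The quoted figures are 4x-link signaling rates; usable data throughput is lower because the SDR, DDR, and QDR generations use 8b/10b line coding. The C sketch below is only a back-of-the-envelope illustration of that relationship; the per-lane rates and lane counts are the standard values, not figures taken from the cited specification FAQ.

```c
/* Rough effective-throughput estimate for early InfiniBand link speeds.
 * Assumes SDR/DDR/QDR lanes at 2.5/5/10 Gbps with 8b/10b encoding
 * (80% efficiency) and the standard 1x, 4x, and 12x link widths. */
#include <stdio.h>

int main(void) {
    const char  *speeds[]    = { "SDR", "DDR", "QDR" };
    const double lane_gbps[] = { 2.5, 5.0, 10.0 };   /* per-lane signaling rate */
    const int    widths[]    = { 1, 4, 12 };         /* common link widths */
    const double efficiency  = 8.0 / 10.0;           /* 8b/10b line code */

    for (int s = 0; s < 3; s++) {
        for (int w = 0; w < 3; w++) {
            double signal = lane_gbps[s] * widths[w];
            double data   = signal * efficiency;
            printf("%s %2dx: %5.1f Gbps signaling, %5.1f Gbps data\n",
                   speeds[s], widths[w], signal, data);
        }
    }
    return 0;
}
```

For example, a 4x QDR link signals at 40 Gbps but carries roughly 32 Gbps of payload data.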
The InfiniBand Trade Association (IBTA) sees it as complementary to Ethernet and Fibre Channel technologies, which it views as appropriate feeders into an InfiniBand core switching fabric. InfiniBand has lower latency than Ethernet of the same signaling speed. Case-by-case analysis, however, is required to tell whether InfiniBand at a 10 Gbps signaling rate, for example, is more cost-effective than a 40 Gbps Ethernet fabric.
The IBTA recently released the Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE, pronounced "Rocky") high-performance computing (HPC) architecture, which layers InfiniBand on top of the physical and data link layers of IEEE 802.3 but replaces the TCP/IP end-to-end and routing protocols with their InfiniBand equivalents. With the advent of 40 and 100 Gbps Ethernet, the idea of running different middle-layer protocols over a common low-level architecture has attractions. Vendors implementing these protocols in software on dedicated processors expect to see latencies of 7-10 microseconds, while pure hardware vendor Mellanox predicts it can achieve 1.3 microseconds.[2]
History
InfiniBand came from the merger of two technologies. Compaq, IBM, and Hewlett-Packard developed the first, Future I/O; Tandem's ServerNet was the ancestor of the Compaq contribution. The other half of the merger came from the Next Generation I/O team of Intel, Microsoft, and Sun.
InfiniBand was initially deployed as an HPC interconnect, but it was always envisioned as a "system area network" interconnecting computers, network devices, and storage arrays in data centers.
Software
The Open Fabrics Alliance has specified software stacks for it. Some vendors say they can achieve better performance in the highly specialized supercomputing environment by using proprietary extensions.[3]
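As one hedged illustration of what the vendor-neutral stack looks like to an application: the OpenFabrics user-space library libibverbs lets a program enumerate InfiniBand devices and query their attributes. The minimal C sketch below simply lists devices and their port counts; it is a sketch of ordinary libibverbs usage, not code taken from any particular vendor stack discussed here.

```c
/* Minimal sketch using the libibverbs API distributed with the
 * OpenFabrics stack: list InfiniBand devices and report how many
 * physical ports each exposes.  Link with -libverbs. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void) {
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;                       /* skip devices we cannot open */

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("%s: %d port(s)\n",
                   ibv_get_device_name(devices[i]), attr.phys_port_cnt);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}
```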
Topologies
For HPC clusters, Fat Tree is the most common topology, but others use torus or mesh topologies, especially when interconnecting thousands of processors.
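To give a sense of scale for the Fat Tree case: a non-blocking two-level fat tree (folded Clos) built from identical k-port switches can attach about k*k/2 hosts, since each leaf switch splits its ports evenly between hosts and spine uplinks. The C sketch below works this out for a few representative switch radices; the specific port counts are illustrative assumptions, not figures from the sources cited in this article.

```c
/* Back-of-the-envelope sizing for a non-blocking two-level fat tree
 * built from identical k-port switches: k leaf switches each give k/2
 * ports to hosts and k/2 to spine uplinks, so the fabric supports
 * k*k/2 hosts.  The switch radices below are assumed examples. */
#include <stdio.h>

int main(void) {
    const int radix[] = { 24, 36, 48 };   /* example switch port counts */

    for (int i = 0; i < 3; i++) {
        int k      = radix[i];
        int leaves = k;
        int spines = k / 2;
        int hosts  = k * (k / 2);         /* k leaves, k/2 host ports each */
        printf("%2d-port switches: %3d leaves + %2d spines -> %4d hosts\n",
               k, leaves, spines, hosts);
    }
    return 0;
}
```

With 36-port switches, for instance, this comes to 648 hosts, which is why larger clusters move to deeper trees or to torus and mesh layouts.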
References
- ↑ Specification FAQ, InfiniBand Trade Association
- ↑ Rick Merritt (19 April 2010), "New converged network blends Ethernet, Infiniband", EE Times
- ↑ "3 Questions: David Smith on InfiniBand", Supercomputing Online