InfiniBand and Ethernet are distinct interconnect technologies, and it is difficult to say that one is simply better than the other. Each continues to develop in its own application fields, and together they have become two indispensable interconnect technologies in the networking world.
InfiniBand vs. Ethernet: What are they?
InfiniBand Networking
From a design perspective, InfiniBand and Ethernet differ considerably. As a network interconnect technology, InfiniBand is widely used in supercomputer clusters thanks to its high reliability, low latency, and high bandwidth. With the growth of high-performance computing, it has also become an ideal choice for interconnecting GPU servers.
The original InfiniBand standard defines single data rate (SDR) signaling at a base rate of 2.5Gbit/s per lane, which yields a raw 10Gbit/s over 4X cables. Double data rate (DDR) and quad data rate (QDR) signaling raise the per-lane rate to 5Gbit/s and 10Gbit/s respectively, giving maximum raw rates of 40Gbit/s over 4X cables and 120Gbit/s over 12X cables.
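The per-lane rates and link widths above combine by simple multiplication. The following minimal Python sketch (illustrative only) reproduces the quoted figures; note that SDR/DDR/QDR use 8b/10b encoding, so usable data rates are somewhat lower than these raw signaling rates.

```python
# Illustrative arithmetic only: raw (signaling) InfiniBand link rates,
# derived from the per-lane rates and link widths described above.

PER_LANE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}  # Gbit/s per lane
LINK_WIDTHS = {"1X": 1, "4X": 4, "12X": 12}            # number of lanes

def raw_link_rate(generation: str, width: str) -> float:
    """Raw link rate in Gbit/s = per-lane rate x number of lanes."""
    return PER_LANE_GBPS[generation] * LINK_WIDTHS[width]

if __name__ == "__main__":
    for gen in PER_LANE_GBPS:
        for width in LINK_WIDTHS:
            print(f"{gen} {width}: {raw_link_rate(gen, width):g} Gbit/s")
    # SDR 4X -> 10 Gbit/s, QDR 4X -> 40 Gbit/s, QDR 12X -> 120 Gbit/s,
    # matching the figures quoted above (raw rates, before 8b/10b encoding).
```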
Ethernet Network
Since its introduction on September 30, 1980, the Ethernet standard has become the most widely used communication protocol in local area networks. Unlike InfiniBand, Ethernet was designed around a simple question: how can information be exchanged easily among many systems? It is a typical network built for distribution and compatibility. Traditional Ethernet is mainly built on TCP/IP and has gradually evolved to support RDMA over Converged Ethernet (RoCE).
Generally speaking, Ethernet networks are used to connect multiple computers and other devices (such as printers and scanners) into a local area network over copper or fiber-optic cabling, and they can also be bridged to wireless networks through wireless networking technology. Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, and Switched Ethernet are the major Ethernet variants.
InfiniBand vs. Ethernet: What are the differences between them?
InfiniBand was originally designed to eliminate the cluster data-transmission bottlenecks of high-performance computing, and it has since become an interconnect standard that meets the demands of the times. As a result, InfiniBand and Ethernet differ in many respects, including bandwidth, latency, network reliability, network management, and application scenarios.
Bandwidth
Since its advent, InfiniBand bandwidth has grown faster than Ethernet bandwidth, mainly because InfiniBand is used for server-to-server interconnection in high-performance computing, where reducing CPU load is also a goal. Ethernet, by contrast, is oriented more toward interconnecting end devices and does not demand as much bandwidth.
For high-speed traffic above 10Gbps, having the CPU unpack every data packet consumes substantial resources. First-generation SDR InfiniBand runs at 10Gbps, and by offloading protocol processing from the CPU it not only increases transmission bandwidth and reduces CPU load, but also improves overall network utilization.
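To get a feel for why per-packet CPU processing becomes a burden at these speeds, here is a rough back-of-the-envelope sketch in Python; the packet sizes are assumptions chosen purely for illustration.

```python
# Rough, illustrative arithmetic: how many packets per second a host must
# handle at line rate if the CPU touches every packet. The packet sizes
# below are assumptions chosen purely for illustration.

def packets_per_second(line_rate_gbps: float, packet_bytes: int) -> float:
    """Packets per second needed to saturate the link at a given packet size."""
    return line_rate_gbps * 1e9 / (packet_bytes * 8)

if __name__ == "__main__":
    for rate_gbps in (10, 100):           # line rates in Gbit/s
        for size in (1500, 4096):         # assumed packet sizes in bytes
            pps = packets_per_second(rate_gbps, size)
            print(f"{rate_gbps} Gbit/s, {size} B packets: ~{pps / 1e6:.2f} Mpps")
    # At 10 Gbit/s with 1500 B packets the host already sees ~0.83 Mpps;
    # at 100 Gbit/s it is ~8.3 Mpps -- hence the value of offloading
    # protocol processing (e.g. via RDMA) from the CPU.
```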
Network latency
In terms of network latency, InfiniBand and Ethernet also perform very differently. Ethernet switches typically use store-and-forward switching and MAC table lookup as their layer-2 technology in the network transmission model. Their processing pipeline must also account for complex services such as IP, MPLS, and QinQ, which lengthens processing time.
For InfiniBand switches, on the other hand, layer-2 processing is very simple: forwarding path information is looked up using only the 16-bit LID. In addition, cut-through forwarding is used to shorten the forwarding delay to less than 100ns, significantly faster than Ethernet switches.
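The gap is easy to see from serialization delay alone: a store-and-forward switch must receive the entire frame before it can forward, while a cut-through switch only needs the header. The numbers in the Python sketch below (link rate, frame size, header size) are assumptions for illustration, not measurements of any particular switch.

```python
# Toy latency comparison with assumed numbers (not measurements of any switch):
# a store-and-forward switch must receive the whole frame before forwarding,
# while a cut-through switch can start once the header has arrived.

def serialization_delay_ns(num_bytes: int, link_rate_gbps: float) -> float:
    """Time to clock `num_bytes` across a link, in nanoseconds."""
    return num_bytes * 8 / link_rate_gbps  # bits / (Gbit/s) gives ns

if __name__ == "__main__":
    link_rate_gbps = 100.0   # assumed link rate
    frame_bytes = 1500       # assumed full frame size
    header_bytes = 64        # assumed bytes needed for a forwarding decision

    store_and_forward = serialization_delay_ns(frame_bytes, link_rate_gbps)
    cut_through = serialization_delay_ns(header_bytes, link_rate_gbps)

    print(f"store-and-forward wait: {store_and_forward:.0f} ns")  # ~120 ns
    print(f"cut-through wait:       {cut_through:.1f} ns")        # ~5 ns
    # Lookup and queuing time come on top of both figures; the point is
    # simply that cut-through removes the wait for the full frame.
```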
Network reliability
Because packet loss and retransmission hurt the overall performance of high-performance computing, a highly reliable network protocol is needed, one that prevents packet loss at the mechanism level. InfiniBand defines its own complete protocol stack, with custom formats from layer 1 through layer 4. End-to-end flow control governs how packets are sent and received in an InfiniBand network, which makes a lossless network achievable.
Ethernet, by comparison, has no scheduling-based flow control mechanism, so a sender has no guarantee that the far end will not be congested when it transmits. To absorb sudden bursts of traffic, switches must set aside tens of megabytes of buffer space to temporarily hold these packets, which consumes chip resources. As a result, an Ethernet switch chip of comparable specification has a significantly larger die area than an InfiniBand chip, which both costs more and consumes more power.
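The idea behind lossless, credit-based flow control can be illustrated with a small toy model: the sender may only transmit when the receiver has advertised buffer credits, so nothing is ever dropped at that hop. This is a deliberately simplified sketch (single hop, unit-sized packets), not the actual InfiniBand link protocol.

```python
# Toy model of credit-based, lossless flow control (not the actual InfiniBand
# link protocol): the sender may only transmit when the receiver has advertised
# buffer credits, so no packet is ever dropped at this hop.

from collections import deque

class CreditedLink:
    def __init__(self, receiver_buffer_slots: int):
        self.credits = receiver_buffer_slots  # credits advertised by the receiver
        self.rx_buffer = deque()

    def try_send(self, packet) -> bool:
        """Transmit only if a credit is available; otherwise the sender holds."""
        if self.credits == 0:
            return False                      # back-pressure: packet stays queued
        self.credits -= 1
        self.rx_buffer.append(packet)
        return True

    def receiver_drain(self) -> None:
        """Receiver consumes one packet and returns a credit to the sender."""
        if self.rx_buffer:
            self.rx_buffer.popleft()
            self.credits += 1

if __name__ == "__main__":
    link = CreditedLink(receiver_buffer_slots=2)
    sent = held = 0
    for i in range(5):                        # a 5-packet burst into 2 buffer slots
        if link.try_send(f"pkt{i}"):
            sent += 1
        else:
            held += 1                         # on a best-effort link this would be a drop
    print(f"sent={sent}, held back={held}, dropped=0")
    link.receiver_drain()                     # receiver frees a slot, a credit returns
    print("after drain, resend succeeds:", link.try_send("pkt-retry"))
```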
Network management
In terms of network management, an InfiniBand network is easier to run than an Ethernet network, because the logic of SDN is designed into InfiniBand from the start. Each InfiniBand layer-2 subnet has a subnet manager that assigns identifiers to network nodes, computes forwarding path information centrally on the control plane, and pushes it to the InfiniBand switches, so such a layer-2 network comes up without any manual configuration.
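Conceptually, the subnet manager's job is centralized path computation over a known topology. The sketch below is a simplified illustration of that idea in Python; it is not OpenSM's actual routing engine, and the topology, port numbers, and LIDs are made up for the example.

```python
# Simplified illustration of centralized forwarding-table computation, in the
# spirit of an InfiniBand subnet manager. This is NOT OpenSM's actual routing
# engine; the topology, port numbers, and LIDs are made up for the example.

from collections import deque

# node -> {neighbor: outgoing port number on that node}
topology = {
    "sw1": {"sw2": 1, "hostA": 2},
    "sw2": {"sw1": 1, "hostB": 2},
    "hostA": {"sw1": 1},
    "hostB": {"sw2": 1},
}
lids = {"hostA": 0x0011, "hostB": 0x0012}  # LIDs assigned by the subnet manager

def forwarding_table(switch: str) -> dict:
    """BFS from `switch`: map each host LID to the switch's first-hop output port."""
    table, visited = {}, {switch}
    queue = deque((nbr, port) for nbr, port in topology[switch].items())
    while queue:
        node, first_port = queue.popleft()
        if node in visited:
            continue
        visited.add(node)
        if node in lids:
            table[lids[node]] = first_port
        for nbr in topology.get(node, {}):
            queue.append((nbr, first_port))
    return table

if __name__ == "__main__":
    for sw in ("sw1", "sw2"):
        print(sw, {hex(lid): port for lid, port in forwarding_table(sw).items()})
    # The control plane computes these LID -> port tables once and pushes them to
    # each switch; the data plane then needs only the 16-bit LID lookup at runtime.
```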
Ethernet networks instead rely on automatically learned MAC tables, and IP must be used together with the ARP protocol; each server on the network must also send packets periodically to keep table entries current. To partition virtual networks and limit their size, the VLAN mechanism has to be configured as well. And because flooded frames can circulate endlessly over redundant links, loops can form in the forwarding topology, so protocols such as STP must be deployed to keep forwarding paths loop-free, all of which adds to configuration complexity.
Application scenarios
Due to its high bandwidth, low latency, and optimized support for parallel computing, InfiniBand is widely used in high-performance computing (HPC) environments. It is designed to handle the communication needs of HPC clusters, where large-scale data processing and frequent inter-node communication are critical. Ethernet is commonly used in enterprise networks, Internet access, and home networks, and its main advantages are low cost, standardization, and wide support.
In recent years, the demand for large-scale computing power has surged, driving the need for high-speed communication within machines and low-latency, high-bandwidth communication between machines in ultra-large-scale clusters. According to TOP500 supercomputer statistics, InfiniBand networks play a key role among both the top 10 and the top 100 systems.
Choose the right InfiniBand product
From the comparison above, the advantages of InfiniBand networks are very clear. The rapid iteration of InfiniBand, from SDR 10Gbps, DDR 20Gbps, QDR 40Gbps, FDR 56Gbps, and EDR 100Gbps through HDR 200Gbps and NDR 400Gbps to today's 800Gbps InfiniBand, owes much to RDMA technology.
3Coptics has launched many InfiniBand products, including InfiniBand optical modules & high-speed cables, InfiniBand network cards and InfiniBand switches.
InfiniBand optical modules & high-speed cables
3Coptics provides a wide range of 40G-200G InfiniBand optical modules & high-speed cables to enhance the efficient interconnection of computing and storage infrastructure.
InfiniBand NICs
3Coptics' InfiniBand NICs offer high-performance, flexible solutions designed to meet the growing demands of data center applications. In addition to all the features of previous generations, the ConnectX-6 and ConnectX-7 NICs provide a range of enhancements that further improve performance and scalability.
| Product | Rate | Host interface | Ports |
| --- | --- | --- | --- |
| MCX653105A-ECAT-SP | HDR100 (100Gb/s) | PCIe 4.0 x16 | Single port |
| MCX653106A-HDAT-SP | HDR (200Gb/s) | PCIe 4.0 x16 | Dual port |
| MCX653106A-ECAT-SP | HDR100 (100Gb/s) | PCIe 4.0 x16 | Dual port |
| MCX653105A-HDAT-SP | HDR (200Gb/s) | PCIe 4.0 x16 | Single port |
| MCX75510AAS-NEAT | NDR (400Gb/s) | PCIe 5.0 x16 | Single port |
InfiniBand Switches
NVIDIA Quantum and Quantum-2 InfiniBand switches provide high-speed interconnection of up to 200Gb/s and 400Gb/s with ultra-low latency and strong scalability, accelerating research, innovation, and product development for developers and scientific researchers.
| Product | MQM8700-HS2F | MQM8790-HS2F | MQM9700-NS2F |
| --- | --- | --- | --- |
| Ports | 40 x HDR QSFP56 | 40 x HDR QSFP56 | 64 x NDR 400G |
| Management | Managed switch | Unmanaged switch | Managed switch |
| Software | MLNX-OS | MLNX-OS | MLNX-OS |
| AC power supplies | 1+1 hot-swappable | 1+1 hot-swappable | 1+1 hot-swappable |
| Fans | N+1 hot-swappable | N+1 hot-swappable | 6+1 hot-swappable |
| Airflow | Back-to-front | Back-to-front | Back-to-front (P2C) |
Conclusion
InfiniBand and Ethernet each have their own applicable scenarios. Because of the significant speed increase InfiniBand delivers, the CPU no longer has to sacrifice resources to network processing, which raises network utilization and has made InfiniBand the leading network solution in the high-performance computing industry. InfiniBand products with 1600Gbps GDR and 3200Gbps LDR are also expected in the future. If communication latency between data center nodes is not a critical requirement and flexible access and expansion matter more, then an Ethernet network remains a sound long-term choice.