InfiniBand Marries Ethernet in New Network Spec

The offspring is said to deliver 40-gigabit-per-second speeds and low latency, according to the InfiniBand Trade Association.

RDMA over 10 Gigabit Ethernet has been a popular concept in the financial services industry. Remote direct memory access, or RDMA, allows data to move directly from the memory of one computer into that of another without involving the operating system, enabling high-throughput, low-latency networking. RDMA over 10 Gigabit Ethernet in a data center typically means using 10 Gigabit Ethernet to carry both storage and networking traffic, promising savings through consolidation.
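As a loose analogy (not actual RDMA, which in practice goes through verbs APIs such as libibverbs and requires RDMA-capable NICs), shared memory illustrates the zero-copy idea: one side writes data directly into a memory region that another side maps and reads, with no per-message send/receive copy in between. The region name below is purely illustrative:

```python
from multiprocessing import shared_memory

# Illustrative analogy only: one side writes into a shared region...
shm = shared_memory.SharedMemory(create=True, size=64, name="rdma_demo")
shm.buf[:5] = b"hello"

# ...and the "remote" side attaches to the same region by name and
# reads the bytes directly, with no intermediate copy through a socket.
peer = shared_memory.SharedMemory(name="rdma_demo")
data = bytes(peer.buf[:5])
print(data)

peer.close()
shm.close()
shm.unlink()
```

Real RDMA extends this direct-placement model across machines: the network adapter writes into registered memory on the remote host, bypassing the remote CPU and operating system on the data path.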

Today, the InfiniBand Trade Association released a specification that mimics the performance of InfiniBand networks (currently used within many data centers) over more standard Ethernet networks by prioritizing traffic. The spec is called RDMA over Converged Ethernet (RoCE), pronounced "Rocky."

Why would anyone want to do this? Why not just run RDMA over 10 Gigabit Ethernet?

"This spec is all about choice," says Sujal Das, senior director of product management at Mellanox, who says the new spec brings customers the best of both worlds (InfiniBand and Ethernet) and lets them move data at a rate of 40 gigabits per second. This could be useful for banks performing risk analyses and stress tests, as well as for improving overall performance in virtual server and desktop environments and cloud computing.
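For a rough sense of what the cited line rate means in practice, some back-of-envelope arithmetic (the 1 TB dataset size is an assumption for illustration, and real transfers never hit the ideal line rate):

```python
# Illustrative arithmetic: time to move an assumed 1 TB dataset
# at ideal 10 Gb/s versus 40 Gb/s line rates (no protocol overhead).
dataset_bits = 1e12 * 8  # 1 terabyte expressed in bits

times = {}
for gbps in (10, 40):
    times[gbps] = dataset_bits / (gbps * 1e9)  # seconds at line rate
    print(f"{gbps} Gb/s -> {times[gbps]:.0f} s")
```

At the ideal rates, the same dataset moves in roughly a quarter of the time at 40 Gb/s (200 s) versus 10 Gb/s (800 s), which is the kind of difference that matters for overnight risk runs.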

Some software based on the new spec is already available in the OpenFabrics Enterprise Distribution (OFED) 1.5.1. More will be rolled out throughout this year.
