
OCP Global Summit 2024: Credo Showcasing Datacenter AI, Compute and CXL with XConn PCIe and CXL Switches

Toucan PCIe 6.0 retimer; 1Tb OSFP-XD PCIe 6 (16x64Gb) Active Electrical Cable (AEC); and 800G sub-10W OSFP optical modules with Linear Receive Optics capability interoperating with 51T switches and standard DSP modules

Credo Technology Group Holding Ltd. announces its participation in the OCP Global Summit, October 15-17, 2024, in San Jose, CA.

Credo PCIe Connections OCP24

The event will provide the company with a platform to showcase GenAI, general compute, and operator-focused connectivity solutions, and will include multiple presentations by the firm's executives.

The company will demonstrate Toucan, its PCIe 6.0 retimer, and 1Tb OSFP-XD PCIe 6 (16x64Gb) Active Electrical Cable (AEC). In addition, the firm will display its 800G sub-10W OSFP optical modules with Linear Receive Optics (LRO) capability interoperating with 51T switches and standard DSP modules.
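The AEC's aggregate bandwidth follows directly from its lane configuration. A quick back-of-the-envelope check, treating each of the 16 lanes as a 64Gb/s PCIe 6.0 link per the cable's description:

```python
# Back-of-the-envelope bandwidth check for the OSFP-XD PCIe 6 AEC:
# 16 lanes, each running at the PCIe 6.0 per-lane rate of 64 Gb/s.
lanes = 16
gbps_per_lane = 64  # PCIe 6.0 raw signaling rate per lane, in Gb/s

aggregate_gbps = lanes * gbps_per_lane
print(f"Aggregate: {aggregate_gbps} Gb/s = {aggregate_gbps / 1000:.1f} Tb/s")
# → Aggregate: 1024 Gb/s = 1.0 Tb/s
```

This is the raw signaling rate; usable throughput is somewhat lower after encoding and protocol overhead.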

In the OCP Innovation Village, the company is working with AMD, Gigabyte, MemVerge, MSI, Penguin Solutions, Rittal, Smart Modular Technologies, and XConn Technologies to show live demonstrations of PCIe and Compute Express Link (CXL) interconnect. Additionally, the solution providers will showcase how rack power/density increases as liquid cooling technology penetrates the data center.

The first live demonstration, a rack-scale shared H100 GPU system, will consist of an AMD EPYC server connected to an XConn PCIe 5 switch via Credo OSFP-XD PCIe AECs, with the XConn switch in turn driving two chassis of Nvidia H100 GPUs.
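The point of putting GPUs behind a shared switch is composability: any attached server can be assigned GPUs from the pool and release them later. A minimal sketch of that idea, with hypothetical names and an assumed eight GPUs per chassis (the release does not state the count):

```python
# Illustrative model (not a real API) of a PCIe-switch-based GPU pool:
# servers attach to one switch, and GPUs behind the switch can be
# composed onto any attached server, then released back to the pool.
from dataclasses import dataclass, field

@dataclass
class PCIeSwitch:
    gpus: list = field(default_factory=list)         # free GPUs behind the switch
    assignments: dict = field(default_factory=dict)  # server -> assigned GPUs

    def assign(self, server: str, count: int) -> list:
        """Compose `count` GPUs from the pool onto `server`."""
        if count > len(self.gpus):
            raise ValueError("not enough free GPUs in the pool")
        taken, self.gpus = self.gpus[:count], self.gpus[count:]
        self.assignments.setdefault(server, []).extend(taken)
        return taken

    def release(self, server: str):
        """Return a server's GPUs to the shared pool."""
        self.gpus.extend(self.assignments.pop(server, []))

# Two chassis, assumed eight H100s each, behind one switch.
switch = PCIeSwitch(gpus=[f"H100-{i}" for i in range(16)])
switch.assign("epyc-server-1", 12)  # a single server can exceed 8 GPUs
print(len(switch.gpus))             # → 4
```

Because the pool is decoupled from any one host, the same 16 GPUs can later be split differently across several servers without recabling.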

In the second live demonstration, a rack-scale CXL 2.0 shared-memory system running Memory Machine X software from MemVerge will show EPYC 9005 servers connected to an XConn CXL switch via Credo CXL AECs, with the switch connecting to two chassis of CXL memory: one based on the CEM AIC form factor from Smart Modular, the other on the E3 form factor from Micron. This enables the servers to fully access and share the CXL memory using the CXL.mem protocol.
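The essence of CXL.mem pooling is that a single pool of switch-attached memory is carved up among hosts on demand rather than being fixed per server. A simplified accounting sketch (hypothetical names and sizes; not MemVerge's or XConn's API):

```python
# Simplified model of a CXL shared-memory pool: multiple hosts request
# capacity from one switch-attached pool; the pool enforces the total.
class CXLMemoryPool:
    def __init__(self, capacity_gib: int):
        self.capacity = capacity_gib
        self.allocated = {}  # host -> GiB granted

    def allocate(self, host: str, gib: int) -> bool:
        """Grant `gib` of pooled memory to `host` if capacity remains."""
        if sum(self.allocated.values()) + gib > self.capacity:
            return False
        self.allocated[host] = self.allocated.get(host, 0) + gib
        return True

    def free_gib(self) -> int:
        return self.capacity - sum(self.allocated.values())

# Hypothetical sizes: two memory chassis contributing 2 TiB each.
pool = CXLMemoryPool(capacity_gib=4096)
pool.allocate("epyc-9005-a", 1024)
pool.allocate("epyc-9005-b", 2048)
print(pool.free_gib())  # → 1024
```

In a real deployment the tiering and placement decisions the sketch glosses over are what software like Memory Machine X manages.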

In the third showcase, a series of three AI GPU racks will illustrate the impact of liquid cooling on rack and network configurations. A 10kW air-cooled rack, a 50kW air-cooled rack, and a 120kW liquid-cooled rack, all based on the Open Rack v3 (ORv3) standard with the Rittal liquid cooling plenum attached, will be connected with a full set of networking interconnect based on Credo's AECs and optical devices, supporting the front-end, scale-out, and scale-up networks these advanced racks require.
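The jump from 10kW to 120kW per rack is what makes cooling the gating factor for density. A rough illustration, assuming a hypothetical budget of about 1kW per GPU slot (including the GPU's share of host power; actual figures vary widely by system):

```python
# Rough illustration of rack power envelope vs. GPU density.
# kw_per_gpu is an assumed, illustrative per-slot budget.
rack_envelopes_kw = {
    "10kW air-cooled": 10,
    "50kW air-cooled": 50,
    "120kW liquid-cooled": 120,
}
kw_per_gpu = 1.0  # assumption for illustration only

for rack, kw in rack_envelopes_kw.items():
    print(f"{rack}: ~{int(kw / kw_per_gpu)} GPU slots")
```

Denser racks in turn need denser interconnect, which is where the AECs and optical modules in the demonstration come in.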

“Credo is pleased to be part of the OCP Global Summit, the leading event for showcasing the technologies designed to address increasing data infrastructure demands,” said Don Barnetson, VP, product for PCIe/CXL, Credo. “The new Credo PCIe6 and CXL solutions, including the Toucan retimer and OSFP-XD Active Electrical Cables, are designed to revolutionize connectivity for next generation data center and GPU designs and provide our customers with the tools to achieve enhanced performance and efficiency.”

Comments from other innovation center participants:
“As AI and high-performance computing workloads become more complex, the demand for scalable, memory-centric infrastructure is growing exponentially,” said Gerry Fan, CEO, XConn Technologies. “Our XConn Apollo switch is designed to meet this demand head-on by enabling seamless integration of both PCIe and CXL in a single solution, offering unparalleled flexibility and performance for system designers. Our partnership with Credo is particularly valuable, as their advanced connectivity solutions are critical in driving the low-latency, high-bandwidth connectivity required to unlock the full potential of our switch technology. Together, our live demonstrations at the OCP Innovation Village will highlight the transformative potential of this collaboration, from GPU sharing to memory pooling, setting the stage for the next generation of data center architectures.”

“Our new 4- and 8-DIMM CXL add-in-cards make it incredibly easy for memory pooling appliances and CXL capable servers to expand server memory to handle the rapid increase in demand for in-memory databases, feature stores, as well as real-time data center and edge applications,” said Andy Mills, VP, advanced product development, Smart Modular Technologies.

“Today we are demonstrating how exceeding the typical limit of 8 GPUs per server can accelerate AI applications running on a single server,” said Phil Pokorny, CTO, Penguin Solutions. “This can simplify management tasks compared to scaling with multiple machines. In addition, once disaggregated this way, the GPUs become composable, with multiple servers and the GPUs sharing the same switches, delivering additional flexibility.”

“The availability of optical technology like Credo retimers and optical modules is required for CXL environments to scale,” said Charles Fan, CEO and co-founder, MemVerge. “Software like Memory Machine X from MemVerge is also required to visualize, intelligently tier, and share data on CXL memory.”

“At MSI, we are excited to present the S2206-02 platform, crafted to meet the evolving demands of modern data centers,” said Danny Hsu, GM, enterprise platform solutions, MSI. “With dual AMD EPYC 9005/9004 Series processors, support for seven PCIe add-on cards, and flexible networking options, this system delivers exceptional performance for AI, cloud computing, and other high-performance applications. We look forward to collaborating with Credo to drive innovation and help organizations achieve higher productivity.”

Resource:
Video preview of OCP demonstrations:
Credo OCP Global Summit 2024 Innovation Village Walk-Thru
