
IntelliProp Omega Memory Fabric Chips

Allowing for dynamic allocation and sharing of memory across compute domains, both in and out of the server

IntelliProp, Inc. intends to deliver its Omega Memory Fabric chips.

The chips incorporate the Compute Express Link (CXL) standard, along with the firm’s Fabric Management Software and Network Attached Memory (NAM) system. In addition, the company announced the availability of three field-programmable gate array (FPGA) solutions built with its Omega Memory Fabric.

The latter eliminates memory bottlenecks and allows for dynamic allocation and sharing of memory across compute domains both in and out of the server, delivering on the promise of Composable Disaggregated Infrastructure (CDI) and rack-scale architecture. IntelliProp’s memory-agnostic innovation is intended to drive the adoption of composable memory and transform data center energy use, performance, efficiency and cost.

As data continues to grow, database and AI applications are being constrained on memory bandwidth and capacity. At the same time billions of dollars are being wasted on stranded and unutilized memory. According to a recent Carnegie Mellon/Microsoft report, Google stated that average DRAM utilization in its datacenters is 40%, and Microsoft Azure said that 25% of its server DRAM is stranded.

“IntelliProp’s efforts in extending CXL connection beyond simple memory expansion demonstrate what is achievable in scaled-out, composable data center resources,” said Jim Pappas, chairman, CXL Consortium. “Their advancements on both CXL and Gen-Z hardware and management software components have strengthened the CXL ecosystem.”

Experts agree that memory disaggregation increases memory utilization and reduces stranded or underutilized memory. Today’s remote direct memory access (RDMA)-based disaggregation has too much overhead for most workloads and virtualization solutions are unable to provide transparent latency management. The CXL standard offers low-overhead memory disaggregation and provides a platform to manage latency.

“History tends to repeat itself. NAS and SAN evolved to solve the problems of over/under storage utilization, performance bottlenecks and stranded storage. The same issues are occurring with memory,” stated John Spiers, CEO, IntelliProp. “Our trailblazing approach to CXL technology unlocks memory bottlenecks and enables next-generation performance, scale and
efficiency for database and AI applications. For the first time, high-bandwidth, petabyte-level memory can be deployed for vast in-memory datasets, minimizing data movement, speeding computation and greatly improving utilization. We firmly believe IntelliProp’s technology will drive disruption and transformation in the data center, and we intend to lead the adoption of composable memory.”

Omega Memory Fabric/NAM System, Powered by IntelliProp’s ASIC
Omega Memory Fabric and Management Software enable enterprise composability of memory and CXL devices, including storage. Powered by IntelliProp’s ASIC, the Omega Memory Fabric based NAM System and software expands the connection and sharing of memory in and outside the server, placing memory pools where needed. The Omega NAM is for AI, ML, big data, HPC, cloud and hyperscale/enterprise data center environments, specifically targeting applications requiring large amounts of memory.

“In a survey IDC completed in early 2022, almost half of enterprise respondents indicated that they anticipate memory-bound limitations for key enterprise applications over time,” said Eric Burgener, research VP, infrastructure systems, platforms and technologies group, IDC. “New memory pooling technologies like what IntelliProp is offering with their NAM system will help to address this concern, enabling dynamic allocation and sharing of memory across servers with high performance and without hardware slot limitations. The composable disaggregated infrastructure market that IntelliProp is playing in is an exciting new market that is expected to grow at a 28.2 percent five-year compound annual growth rate to crest at $4.8 billion by 2025.”

With Omega Memory Fabric and Management Software, hyperscale and enterprise customers will be able to take advantage of multiple tiers of memory with predetermined latency. The system will enable large memory pools to be placed where needed, allowing multiple servers to access the same dataset. It also allows new resources to be added with a simple hot plug, eliminating server downtime and rebooting for upgrades.
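The mechanics described above can be illustrated with a short sketch. The following Python model is purely illustrative and does not represent IntelliProp's actual software or API; all class and method names are hypothetical. It shows the two ideas in the paragraph: memory pools carrying a predetermined latency class, and new pools being hot-added without downtime.

```python
# Illustrative model only -- not IntelliProp's API. Names are hypothetical.
from dataclasses import dataclass


@dataclass
class MemoryPool:
    name: str
    tier: str            # e.g. "DRAM" (fast) or "SCM" (slow)
    latency_ns: int      # predetermined latency for this tier
    capacity_gib: int
    allocated_gib: int = 0

    def free_gib(self) -> int:
        return self.capacity_gib - self.allocated_gib


class FabricModel:
    def __init__(self):
        self.pools = []

    def hot_add(self, pool: MemoryPool) -> None:
        # New resources join the fabric without rebooting any server.
        self.pools.append(pool)

    def allocate(self, gib: int, max_latency_ns: int) -> str:
        # Place the request on the fastest pool that fits the latency budget.
        candidates = [p for p in self.pools
                      if p.latency_ns <= max_latency_ns and p.free_gib() >= gib]
        if not candidates:
            raise MemoryError("no pool satisfies the request")
        pool = min(candidates, key=lambda p: p.latency_ns)
        pool.allocated_gib += gib
        return pool.name
```

For example, a 256GiB request with a tight latency budget would land on a DRAM pool, while a larger request with a relaxed budget could be satisfied from a slower SCM pool, mirroring the tiered placement the article describes.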

“IntelliProp is on to something big. CXL disaggregation is key, as half of the cost of a server is memory. With CXL disaggregation, they are taking memory sharing to a whole new level,” said Marc Staimer, DragonSlayer analyst. “IntelliProp’s technology makes large pools of memory shareable between external systems. That has immense potential to boost data center performance and efficiency while reducing overall system costs.”

Omega Memory Fabric Features, incorporating the CXL Standard

  • Scale and share memory outside the server
  • Dynamic multi-pathing and allocation of memory
  • End-to-end (E2E) security using AES-XTS 256, with added integrity protection
  • Supports non-tree topologies for peer-to-peer
  • Direct path from GPU to memory
  • Management scaling for large deployments using multi-fabrics/subnets and distributed managers
  • Direct memory access (DMA) enables efficient data movement between memory tiers without tying up CPU cores
  • Memory agnostic and up to 10x faster than RDMA

“AI is one of the world’s most demanding applications, in terms of compute and storage. The prospects of using ML in genomics, for example, requires exascale compute and low latency access to petabytes of storage. The ability to dynamically allocate shareable pools of memory over the network and across compute domains is a feature we are very excited about,” says Nate Hayes, co-founder and board member, RISC AI, Inc. “We think the fabric from IntelliProp provides the latency, scale and composable disaggregated infrastructure for the next generation AI training platform we are developing at RISC AI, and this is why we are planning to integrate IntelliProp’s technology into the high performance RISC-V processors that we will be manufacturing.”

Omega Memory Fabric Solutions Bring Future CXL Advantages to Data Centers
The company unveiled three FPGA solutions as part of its Omega Fabric product suite. The solutions connect CXL devices to CXL hosts, allowing data centers to increase performance, scale across dozens to thousands of host nodes, consume less energy (data travels with fewer hops), and mix shared DRAM (fast memory) with shared SCM (slow memory) for a lower total cost of ownership (TCO).

Omega Memory Fabric Solutions

  • Omega Adapter
    • Enables the pooling and sharing of memory across servers
    • Connects to the IntelliProp NAM array
  • Omega Switch
    • Enables the connection of multiple NAM arrays to multiple servers through a switch
    • Targeted for large deployments of servers and memory pools
  • Omega Fabric Manager (open source)
    • Enables key fabric management capabilities:
      • End-to-end encryption over CXL, with data integrity, preventing applications from seeing the contents of other applications’ memory
      • Dynamic multi-pathing for redundancy, with automatic failover when links go down
      • Support for non-tree topologies, enabling peer-to-peer use cases such as GPU-to-GPU computing and a direct GPU path to memory
      • Direct memory access (DMA) for data movement between memory tiers without involving the CPU
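The multi-pathing and failover behavior in the list above can be sketched in a few lines. This is a hypothetical model, not the interface of the open-source Omega Fabric Manager; the class and method names are assumptions for illustration only.

```python
# Illustrative sketch -- hypothetical names, not the Omega Fabric Manager's API.
class PathSet:
    """Models a memory pool reachable over several fabric paths."""

    def __init__(self, paths):
        self.paths = list(paths)   # ordered: preferred path first
        self.down = set()

    def link_down(self, path):
        # The fabric manager marks a failed link; traffic fails over.
        self.down.add(path)

    def link_up(self, path):
        self.down.discard(path)

    def active_path(self):
        # Automatic failover: use the first path not marked down.
        for p in self.paths:
            if p not in self.down:
                return p
        raise ConnectionError("all paths to the memory pool are down")
```

A host would keep addressing the same memory pool while the active path silently shifts from a failed link to a surviving one, which is the redundancy property the feature list claims.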

The IntelliProp Omega Memory Fabric solutions are available as FPGA versions with the full features of the Omega Fabric architecture. The IntelliProp Omega ASIC based on CXL technology will be available in 2023.

Comments

IntelliProp belongs to the small number of players offering both in-server and out-of-server memory aggregation. The concept of NAM makes sense as systems are coupled in a scale-out model, whether shared-nothing or shared-everything. The product uses native CXL within servers and Gen-Z across servers or outside of them.

This is typically an OEM product, and we expect CXL deals to accelerate. IntelliProp targets Dell, HPE and Lenovo, pending next-generation Intel processors, but the team also plans to address hyperscalers directly. Pricing follows a per-card model.

John Spiers left Liqid a few months ago to join IntelliProp as CEO. IntelliProp.com confirms the arrival of Spiers as CEO, but intellipropipcores.com still lists Hiren Patel as CEO rather than CTO, probably just an oversight, as the press release above is posted there as well. This second site provides products, services and technology details, especially on how the team leverages Gen-Z for the fabric piece.

The CXL Consortium today groups more than 150 companies as board members, contributors or adopters, including large established vendors and emerging dedicated ones. It was a hot topic at the recent OCP Summit. The current specification is CXL 3.0.
