Cloudian Support for Nvidia GPUDirect Acceleration for Object Storage Delivering Performance of over 200GB/s from HyperStore System
Simplifies AI data storage with a cost-effective, scalable, high-performance object storage solution.
This is a Press Release edited by StorageNewsletter.com on November 29, 2024.
Cloudian, Inc. announced its integration with Nvidia Magnum IO GPUDirect Storage technology, delivering performance of over 200GB/s from a HyperStore system.
Cloudian HyperStore with GPUDirect access simplifies the management of AI training and inference datasets – at petabytes and exabytes scales – while reducing costs by eliminating the need for complex data migrations and legacy file storage layers.
Key benefits of Nvidia GPUDirect Storage for object storage in AI training and inference workflows:
- Limitless scalability: Expands effortlessly to exabyte scale without disruption, supporting growing AI datasets without adding management complexity.
- Reduced costs and no data migrations: Removes legacy file layers and enables a single, unified data lake without the need for constant data movement between tiers.
- High performance: Delivers over 200GB/s from a single system with performance sustained over a 30-minute period without the use of data caching.
- Maximized CPU for AI workloads: Slashes CPU overhead by 45% during data transfers, freeing computational resources for AI processing.
- No kernel modifications: Eliminates vendor-specific kernel modifications and the security vulnerabilities they can introduce.
- Integrated metadata: Rich metadata facilitates rapid search without the need for external databases.
“Cloudian is proud to be at the forefront of transforming how enterprises and AI hyperscalers harness data to realize the power of AI,” said Michael Tso, CEO. “For too long, AI users have been saddled with the unnecessary complexity and performance bottlenecks of legacy storage solutions. With GPUDirect Storage integration, we are enabling AI workflows to directly leverage a simply scalable storage architecture so organizations can unleash the full potential of their data.”
“At Supermicro, we’re committed to delivering the most advanced and efficient solutions for AI and deep learning,” said Michael McNerney, SVP, marketing and network security, Supermicro, Inc. “Cloudian’s integration of Nvidia GPUDirect Storage with the HyperStore line of object storage appliances based on Supermicro systems – including the Hyper 2U and 1U servers, the high-density SuperStorage 90-bay storage servers, and the Simply Double 2U 24-bay storage servers – represents a significant innovation in the use of object storage for AI workloads. This will enable our mutual customers to deploy more powerful and cost-effective AI infrastructure at scale.”
“Fast, consistent, and scalable performance in object storage systems is crucial for AI workflows,” said Rob Davis, VP, storage technology, Nvidia Corp. “It enables real-time processing and decision-making, which are essential for applications like fraud detection and personalized recommendations.”
Simplified data management: exabyte scale eliminates data migration
Legacy file-based storage systems in AI workflows often require frequent data movement between long-term and high-speed storage, adding management complexity. With Cloudian’s solution, AI training and inference happen directly on the data in place, accelerating workflows and eliminating frequent migrations. HyperStore’s limitless scalability enables AI data lakes to grow to exabyte levels, while its centralized management ensures simple, unified control across multi-data center and multi-tenant environments.
Fast throughput for higher GPU utilization
Nvidia GPUDirect Storage, together with Nvidia ConnectX and BlueField networking technologies, optimizes data transfer speeds by enabling direct communication between Nvidia GPUs and multiple Cloudian storage nodes, bypassing the CPU. This direct parallel data transfer delivers consistent and scalable performance of over 200GB/s from a HyperStore system – as measured on the industry-standard GOSBench benchmark over a sustained period without the use of data caching. Because throughput can be easily and economically scaled, organizations can achieve better GPU utilization and lower GPU communication latency.
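The scaling claim above rests on simple arithmetic: with direct parallel paths, aggregate throughput grows roughly linearly with the number of storage nodes. The sketch below illustrates that model with an assumed per-node rate (not a measured Cloudian figure) chosen so that 16 nodes reach the ~200GB/s cited in the release.

```python
# Back-of-envelope scaling model for parallel GPUDirect transfers.
# The per-node rate is an assumption for illustration, not a
# Cloudian benchmark result.

PER_NODE_GBPS = 12.5  # assumed sustained read rate per storage node

def aggregate_throughput(nodes: int, per_node_gbps: float = PER_NODE_GBPS) -> float:
    """Ideal aggregate throughput when transfers run fully in parallel."""
    return nodes * per_node_gbps

for n in (4, 8, 16):
    print(f"{n} nodes -> ~{aggregate_throughput(n):.0f} GB/s")
# 16 nodes at an assumed 12.5 GB/s each reaches the ~200 GB/s figure above.
```

In practice, linear scaling holds only while the network fabric and GPU-side ingest keep pace, which is why the direct GPU-to-storage path matters.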
Reduces storage costs
Managing the enormous datasets needed for AI workflows can be both costly and resource intensive. Cloudian’s software-defined platform helps address these challenges by eliminating the need for a separate file storage layer. With AI workflows occurring directly within the object-based data lake, organizations can streamline data management while significantly reducing operational and capital expenses, as well as overall complexity.
No kernel level modifications
GPUDirect for Object Storage requires no vendor-driven kernel-level modifications. Unlike file solutions, this approach reduces potential vulnerabilities typically associated with kernel changes. By eliminating the need for such alterations, it simplifies system administration, decreases attack surfaces, and lowers the risk of security breaches.
Integrated metadata for simplicity and accelerated search
Metadata plays a crucial role in AI workflows by enabling rapid data discovery, retrieval, and access control. Cloudian accelerates AI data searches with integrated metadata support that allows for easy tagging, classification, and indexing of large datasets. Unlike legacy file-based systems, which depend on rigid directory structures and separate databases for metadata management, Cloudian natively handles metadata within the object storage platform, simplifying workflows and speeding up AI training and inference processes.
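To make the metadata-search idea concrete, here is a minimal in-memory sketch of the concept: objects carry key/value metadata, and queries filter on it directly, with no external database. This is an illustration of the semantics only, not Cloudian's actual API; the class and field names are hypothetical.

```python
# Illustrative sketch of metadata-tagged objects and metadata-based
# search -- hypothetical names, not Cloudian's API.
from typing import Dict, List

class ObjectStore:
    def __init__(self) -> None:
        self._meta: Dict[str, Dict[str, str]] = {}

    def put(self, key: str, metadata: Dict[str, str]) -> None:
        """Store an object's metadata alongside its key."""
        self._meta[key] = dict(metadata)

    def search(self, **criteria: str) -> List[str]:
        """Return keys whose metadata matches every given criterion."""
        return [k for k, m in self._meta.items()
                if all(m.get(field) == value for field, value in criteria.items())]

store = ObjectStore()
store.put("train/img_0001.jpg", {"label": "cat", "split": "train"})
store.put("train/img_0002.jpg", {"label": "dog", "split": "train"})
store.put("val/img_0003.jpg", {"label": "cat", "split": "val"})

print(store.search(label="cat", split="train"))  # -> ['train/img_0001.jpg']
```

In an S3-compatible store, the same tagging pattern lets a training pipeline select exactly the dataset slice it needs without walking a directory tree.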
Enhanced data security
Data privacy and security are top priorities for enterprises adopting AI, as noted by Forrester analysts. Cloudian addresses these concerns with the industry’s most comprehensive range of security features. These include advanced access controls, encryption protocols, integrated key management, and S3 Object Lock for ransomware protection, helping ensure that sensitive AI data remains safe and secure throughout its lifecycle.
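S3 Object Lock protects against ransomware by making objects immutable until a retention date passes (write-once-read-many semantics). The sketch below models only that behavior to clarify the concept; it is not Cloudian's implementation, and the names are hypothetical.

```python
# Conceptual model of S3 Object Lock-style retention (WORM):
# deletes are refused until the retain-until date has passed.
# Hypothetical names; an illustration of semantics only.
from datetime import datetime, timedelta, timezone

class LockedObject:
    def __init__(self, key: str, retain_days: int) -> None:
        self.key = key
        self.retain_until = datetime.now(timezone.utc) + timedelta(days=retain_days)

    def delete_allowed(self, now: datetime = None) -> bool:
        """Return True only once the retention window has expired."""
        now = now or datetime.now(timezone.utc)
        return now >= self.retain_until

backup = LockedObject("backups/model-v1.ckpt", retain_days=30)
print(backup.delete_allowed())  # False: still inside the retention window
```

Because the lock is enforced by the storage system itself, even an attacker with delete permissions cannot destroy protected data before the retention period ends.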
Reduced CPU consumption
The company’s integration with Nvidia’s GPUDirect Storage technology enables direct data transfers between storage systems and GPU memory, bypassing the CPU. This direct path reduces CPU utilization by 45% during data transfers, allowing the CPU to focus on other tasks and improving overall system efficiency.
Cloudian HyperStore with Nvidia Magnum IO GPUDirect Storage technology is available.
Resource:
Blog: Cloudian Shatters AI Storage Barriers with Direct GPU-to-Object Storage Access
“At Central Technology, we are committed to exploring innovative solutions to advance our AI initiatives,” said Sam Walsh, regional director, Central Technology. “Cloudian’s integration of GPUDirect for Object Storage provides a powerful avenue to streamline our data management and enhance our AI workflows. This technology not only promises improved scalability and performance but also simplifies data management, aligning perfectly with our goal to enhance AI capabilities while optimizing infrastructure costs. As a leading Managed Service Provider nationwide, we are well-equipped to leverage our insights and processes to offer tailored solutions for our customers as the AI market continues to evolve.”
“As pioneers in AI-driven process optimization, ControlExpert is excited about Cloudian’s integration of GPUDirect for Object Storage, especially as we are already leveraging Cloudian S3 in our operations,” said Dr. Sebastian Schoenen, director, innovation and technology, ControlExpert GmbH. “This technology has the potential to significantly simplify our data management and accelerate our AI workflows by reducing complex data migrations and providing direct, high-speed access to our vast datasets. This aligns perfectly with ControlExpert’s mission to drive digital transformation in our industry.”
“At Softsource vBridge, we’ve seen firsthand how data management challenges can hinder AI adoption,” said David Small, group technology officer, Softsource vBridge. “Cloudian’s GPUDirect for Object Storage will simplify the entire AI data lifecycle, which could be the key to democratizing AI across various business sectors, allowing companies of all sizes to harness the power of their data. We’re particularly excited about how this could accelerate AI projects for our mid-market clients who have previously found enterprise AI solutions out of reach.”