
EMC/XtremIO Fail to Impress

Blogged by Tom Isakovich, CEO, Nimbus Data

On his blog, Thomas Isakovich, CEO and founder,
Nimbus Data Systems, Inc., wrote:

StorageNewsletter captured some details revealed by EMC’s Chuck Hollis (Global Marketing CTO) about the unreleased XtremIO all-SSD array. Robin Harris of StorageMojo issued a less-than-positive analysis in response. To recap, EMC acquired pre-revenue XtremIO in May 2012 for a reported $430M.

Architecture of XtremIO
According to the report, XtremIO uses a scale-out architecture consisting of an IB backend and iSCSI or FC front-end ports. Each XtremIO brick provides 7TB of usable capacity (10TB raw) in a 3U shelf. The brick appears to be an off-the-shelf general-purpose server, namely a SuperMicro 6036ST-6LR, based on a Google image search. Each brick supports up to 250,000 IOPS.

Using a scale-out architecture in this manner is similar to the design employed by EMC Isilon: treat storage as ‘bricks’ consisting of capacity and compute resources in one box, and add more bricks as you need more storage. In theory, such an architecture enables much higher capacities and much higher performance than scale-up arrays. Surprisingly, though, XtremIO’s offering seems to provide the exact opposite: much less scalability and much less performance. Let’s take a closer look.
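As a rough way to see what this brick model implies, the sketch below (a hypothetical helper, not anything from EMC or the article) simply multiplies the per-brick figures reported in the article across a cluster, assuming perfectly linear scaling.

```python
# Back-of-the-envelope model of a scale-out "brick" architecture.
# Per-brick figures (10TB raw / 7TB usable, 250,000 IOPS, 3U) come from the
# article; the helper itself is purely illustrative.

def scale_out_totals(bricks, raw_tb=10, usable_tb=7, iops=250_000, rack_u=3):
    """Aggregate capacity, performance, and rack space across N identical
    bricks, assuming (optimistically) perfectly linear scaling."""
    return {
        "raw_tb": bricks * raw_tb,
        "usable_tb": bricks * usable_tb,
        "iops": bricks * iops,
        "rack_u": bricks * rack_u,
    }

# At the 8-brick maximum reported for XtremIO:
print(scale_out_totals(8))
# {'raw_tb': 80, 'usable_tb': 56, 'iops': 2000000, 'rack_u': 24}
```

Whether IOPS and latency actually scale this linearly is exactly what the limitations below call into question.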

Limitations of the XtremIO architecture
There are numerous limitations with the XtremIO design:

  1. Scalability is surprisingly limited: According to the article, the XtremIO design supports only 8 nodes in a system. At 7TB usable per node, that's 56TB in one system, a pretty modest amount of capacity for an enterprise storage array. In comparison, Nimbus systems scale to 500TB (400TB usable), a better than 7x scalability advantage (the arithmetic behind these comparisons is sketched after this list).
  2. Density is subpar: At 10TB per 3U brick, that's 3.3TB per U. That's less rack density than most 15K rpm disk arrays. Nimbus' Gemini platform, on the other hand, delivers 48TB per system in just 2U, or 24TB per U. That's a 7x density advantage for the Nimbus system.
  3. Performance per brick is low: Because XtremIO is based on a commodity server, it inherits the architectural limitations of off-the-shelf servers, limiting performance to 250,000 IOPS. In comparison, Nimbus' Gemini system offers up to 1,200,000 IOPS per system, a 5x performance advantage that demonstrates the strength of Nimbus' patent-pending purpose-built hardware. Even if one assumes XtremIO scales linearly (which is questionable – see below), it would take five XtremIO bricks (at perhaps 5x the cost) to do what just one Nimbus system can do.
  4. Power consumption seems high: The article reports 700W per 10TB XtremIO brick, which works out to 70W per raw TB. This is comparable to 15K rpm disk arrays. Nimbus' fully redundant Gemini system, on the other hand, draws about 600W at 48TB, a 6x power efficiency advantage.
  5. Reliability is undisclosed: A critical requirement with flash is endurance optimization. EMC is quiet on this subject, but I suspect that they are using third-party SSDs in the SuperMicro box. Most third-party SSDs suffer from write performance degradation over time and limited warranties. By comparison, Nimbus offers an available 10-year warranty and utilizes hardware-offloaded wear-leveling technology to ensure consistent performance over time.
  6. Latency penalty: Any scale-out architecture relies on multiple hops along the backend fabric to service an IO. While bandwidth improves with scale-out, latency increases as more bricks are added. Latency is so high on some scale-out systems that they are specifically targeted towards content storage where latency is less of a concern. But this is primary storage, and latency matters. The latency penalty of scale-out may explain why XtremIO’s scalability is limited – as more bricks are added, hops increase and latency rises.
  7. Full-time dedupe penalty: XtremIO appears to require full-time inline deduplication, which adds hashes and lookups to the IO path, increasing latency (see the sketch after this list). The thinking here is that flash memory is too expensive, and deduplication is required to make it affordable. Since deduplication cannot be disabled, though, performance-critical applications like databases and analytics are unfairly penalized. Nimbus Data, on the other hand, gives users flexibility to enable deduplication for portions of the system that will benefit from it (VDI, etc.) while leaving it off for latency-sensitive applications that demand the highest possible performance.
  8. No integration: EMC's marketing is being very careful about positioning XtremIO for a very narrow use case. According to the article, XtremIO will not integrate with other EMC products, nor with EMC FAST. Because of this, there is no inherent ‘one platform’ advantage in buying EMC, making it just as easy for existing EMC customers to switch to a new vendor for their all-flash infrastructure.
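For reference, the ratios quoted in points 1 through 4 follow from simple arithmetic on the figures cited: the article's XtremIO numbers and Nimbus' own published numbers. The sketch below just reproduces that arithmetic; it is not a benchmark of either system.

```python
# Arithmetic behind the comparisons above. XtremIO figures are those reported
# in the article; Nimbus figures are the vendor's own claims.

xtremio = {
    "usable_tb": 8 * 7,        # 8 bricks x 7TB usable
    "tb_per_u": 10 / 3,        # 10TB raw in 3U
    "iops": 250_000,           # per brick
    "watts_per_tb": 700 / 10,  # 700W per 10TB brick
}
nimbus = {
    "usable_tb": 400,          # per system (500TB raw)
    "tb_per_u": 48 / 2,        # 48TB in 2U
    "iops": 1_200_000,         # per system
    "watts_per_tb": 600 / 48,  # ~600W at 48TB
}

for metric in ("usable_tb", "tb_per_u", "iops"):
    print(f"{metric}: {nimbus[metric] / xtremio[metric]:.1f}x")
print(f"watts_per_tb: {xtremio['watts_per_tb'] / nimbus['watts_per_tb']:.1f}x")
# usable_tb: 7.1x   tb_per_u: 7.2x   iops: 4.8x   watts_per_tb: 5.6x
```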

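On point 7, the sketch below is a minimal illustration, not XtremIO's actual implementation, of why full-time inline deduplication puts extra work in the write path: every write is fingerprinted and looked up before it can be stored or acknowledged.

```python
import hashlib

# Minimal sketch of full-time inline block deduplication. Every write pays
# for a hash plus a fingerprint lookup, whether or not the data is a
# duplicate; only unique blocks are physically stored. Illustrative only.

class InlineDedupStore:
    def __init__(self):
        self.blocks = {}   # fingerprint -> unique block data
        self.volume = []   # logical block address -> fingerprint

    def write(self, data: bytes) -> int:
        fp = hashlib.sha256(data).hexdigest()  # hash on every write
        if fp not in self.blocks:              # lookup on every write
            self.blocks[fp] = data             # store only unique blocks
        self.volume.append(fp)
        return len(self.volume) - 1            # logical block address

    def read(self, lba: int) -> bytes:
        return self.blocks[self.volume[lba]]

store = InlineDedupStore()
store.write(b"A" * 4096)
store.write(b"A" * 4096)      # duplicate: still hashed and looked up
print(len(store.blocks))      # 1 unique block stored for 2 logical writes
```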
What comes next?
EMC must protect its base of business from the all-flash onslaught being waged by Nimbus Data and others. XtremIO gives EMC an offering in the all-flash category. That may be satisfactory for now, but I am doubtful that this solution can compete effectively outside of EMC’s base of existing customers. With EMC’s worldwide storage systems market share at 30% according to IDC, that leaves plenty of opportunity for new leadership in next-generation flash-based primary storage.
