
Cervoz Technology Solution for Distributed AI Infrastructure

With compact 3D TLC SSD and DDR4 DRAM

Cervoz Technology Co. Ltd. introduces its solution for distributed AI infrastructure.

Advancements in digitalization have ushered in the era of big data and accelerated the development of AI, ML, MEC, HPC, and related technologies.

Distributed AI
As IoT and 5G technologies thrive, an ever-growing number of devices collect and generate data on a millisecond basis. Sending this flood of data back and forth to the cloud and data center for storage and processing is inefficient and expensive, and bandwidth becomes scarce as more and more machines are interconnected. With distributed AI deployment, routine AI tasks such as data collation and basic processing are allocated to edge devices with moderate memory and computing power, such as industrial endpoints and local 5G base stations. These edge appliances carry enough intelligence to ensure that only relevant and valuable data is sent to the central AI for crunching, allowing it to learn faster, infer deeper, and achieve more with lower cost, bandwidth, and power consumption.

One trending application is edge-based IVA (Intelligent Video Analytics) for traffic management. IVA helps ease traffic congestion by monitoring road conditions in real time and taking timely action locally, without depending on time-consuming video transmission to and analytics in the cloud. It also reduces the cost of massive data transmission and intensive cloud services incurred by a broad, geographically dispersed deployment of monitors, signals, and devices.
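To make the division of labor concrete, the short Python sketch below shows how an edge appliance might filter locally inferred events and forward only the relevant ones to the central AI. The detector, confidence threshold, and upload function are illustrative assumptions for this article, not part of any Cervoz product or API.

# Minimal sketch of edge-side filtering in a distributed AI / IVA setup.
# The detector and the upload endpoint are placeholders (assumptions).
import json
import random

CONFIDENCE_THRESHOLD = 0.8  # only "relevant" events leave the edge device

def run_local_detector(frame_id: int) -> dict:
    """Stand-in for an on-device inference step (e.g., congestion detection)."""
    return {"frame": frame_id, "label": "congestion", "confidence": random.random()}

def send_to_central_ai(event: dict) -> None:
    """Placeholder for the uplink to the central AI / cloud service."""
    print("uploading:", json.dumps(event))

def process_stream(num_frames: int = 10) -> None:
    for frame_id in range(num_frames):
        event = run_local_detector(frame_id)
        if event["confidence"] >= CONFIDENCE_THRESHOLD:
            # Only high-value events are forwarded; the rest are handled
            # (or discarded) locally, saving bandwidth and cloud cost.
            send_to_central_ai(event)

if __name__ == "__main__":
    process_stream()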

Memory in Distributed AI
Continuous data input from edge endpoints incrementally trains and optimizes the central neural networks and ML models. The retrained central AI then feeds refined algorithms back to the edge for better inference results, in a continuous cycle of improvement. Edge AI devices need non-volatile memory to store data, model code, weights, and algorithms received from the cloud, while volatile memory is essential for real-time data processing. These memory solutions require high capacity and transmission speed to maximize the efficiency of AI computing, combined with compact form factors to fit dense edge devices such as embedded PCs or gateways. Cost-effectiveness and power efficiency are also critical for scalable, extensive deployment.
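The improvement cycle described above can be illustrated with a minimal Python sketch: edge nodes derive small updates from local data, the central model aggregates them, and the refined parameter is pushed back to every edge node. The averaging scheme and all names are simplified assumptions for illustration only.

# Minimal sketch of the edge-to-cloud improvement cycle (illustrative only).
from statistics import mean

def edge_update(local_data: list, current_weight: float) -> float:
    """Each edge derives a small correction from its local data."""
    return current_weight + 0.1 * (mean(local_data) - current_weight)

def central_retrain(edge_weights: list) -> float:
    """Central AI aggregates edge contributions into a refined model."""
    return mean(edge_weights)

def training_cycle(rounds: int = 3) -> float:
    global_weight = 0.0
    edge_datasets = [[1.0, 1.2, 0.8], [2.0, 1.9], [1.5, 1.6, 1.4]]
    for r in range(rounds):
        # 1. Edges refine the model locally on fresh sensor data.
        updates = [edge_update(data, global_weight) for data in edge_datasets]
        # 2. Central AI aggregates the updates (the "retraining" step).
        global_weight = central_retrain(updates)
        # 3. The refined parameter is pushed back to every edge node.
        print(f"round {r}: global weight -> {global_weight:.3f}")
    return global_weight

if __name__ == "__main__":
    training_cycle()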


Unlike traditional 2D NAND with its flat, single-story cell structure, 3D NAND flash memory stacks cells vertically, taking advantage of the space above the die to significantly increase performance and capacity per unit while consuming less power and costing less per GB. The company offers a range of high-IOPS 3D TLC solutions in diverse form factors, from mainstream 2.5″ SSDs to slim and compact embedded modules, including M.2 2242/2280, mSATA, half-size mSATA, and Half-Slim.


The company's DDR4 32GB 3,200MHz DRAM, available in DIMM and SO-DIMM modules, combines high capacity, high transmission speed, and JEDEC-standard reliability, making it well suited to the massive datasets of AI applications. The DDR4 series also consumes 20% less power than its predecessor, DDR3. A DDR4 SO-DIMM Very Low Profile industrial memory module, a miniature version that saves 40% of the space, is available for edge devices with dense enclosures.
