
AWS re:Invent: AWS New Services and Solutions Announcements

Including Amazon EC2 I8g instances, Amazon EC2 next-gen high density storage optimized I7ie instances, 3rd-party block storage arrays with AWS Outposts, FSx Intelligent-Tiering, a new storage class for Amazon FSx, Storage Browser for Amazon S3, default data integrity protections for Amazon S3, Amazon Elastic VMware Service, AWS Clean Rooms support for multiple clouds and data sources, and AWS Data Transfer Terminal

At AWS re:Invent in Las Vegas, NV, AWS announced new services and solutions.

Amazon EC2 I8g instances
AWS is announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) storage optimized I8g instances, which offer the best performance in Amazon EC2 for storage-intensive workloads. I8g instances are powered by AWS Graviton4 processors that deliver up to 60% better compute performance than previous-gen I4g instances. They use the latest 3rd Gen AWS Nitro SSDs for local NVMe storage, delivering up to 65% better real-time storage performance per TB, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security of your workloads.

I8g instances offer sizes up to 24xlarge, with 768GB of memory and 22.5TB of instance storage. They are designed for real-time applications such as relational databases, non-relational databases, streaming databases, search queries, and data analytics.

I8g instances are available in the following AWS Regions: US East (N. Virginia) and US West (Oregon).

Resources:
Amazon EC2 I8g instances
Explore how to migrate your workloads to Graviton-based instances
Fast Start program and Porting Advisor for Graviton
To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs; a minimal SDK sketch follows below.
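
As an illustration only (not from the announcement), launching an I8g instance might look like the following sketch using the AWS SDK for JavaScript v3. The AMI ID is a placeholder and the i8g.4xlarge size is an assumed example from the published size range.

    // Minimal sketch: launch a storage optimized I8g instance with the AWS SDK for JavaScript v3.
    import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

    const ec2 = new EC2Client({ region: "us-east-1" }); // I8g is available in US East (N. Virginia)

    const result = await ec2.send(
      new RunInstancesCommand({
        ImageId: "ami-0123456789abcdef0", // placeholder AMI ID
        InstanceType: "i8g.4xlarge",      // assumed size; I8g scales up to 24xlarge
        MinCount: 1,
        MaxCount: 1,
      })
    );
    console.log(result.Instances?.[0]?.InstanceId);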

 

Introducing Amazon EC2 next-gen high density storage optimized I7ie instances
Amazon Web Services is announcing the general availability of next-gen high density storage optimized I7ie instances. Designed for large, storage I/O intensive workloads, I7ie instances are powered by 5th Gen Intel Xeon Scalable processors with an all-core turbo frequency of 3.2GHz, offering up to 40% better compute performance and 20% better price performance than existing I3en instances. I7ie instances have the highest local NVMe storage density in the cloud for storage optimized instances and offer up to twice as many vCPUs and as much memory as prior-gen instances. Powered by 3rd Gen AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.

I7ie instances are high density storage optimized instances for workloads that require fast local storage with high random read/write performance and consistently very low latency when accessing large data sets. They also deliver 40% better compute performance, letting customers run more complex queries without increasing the storage density per vCPU. Additionally, the 16KB torn write prevention feature enables customers to eliminate performance bottlenecks.

I7ie instances deliver up to 100Gbps of network bandwidth and up to 60Gbps of bandwidth for Amazon Elastic Block Store (Amazon EBS).

I7ie instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt, London), and Asia Pacific (Tokyo). Customers can use these instances with On-Demand and Savings Plans purchase options.

Resource:
I7ie instances page

 

AWS simplifies use of 3rd-party block storage arrays with AWS Outposts
Customers can now attach block data volumes backed by NetApp on-premises enterprise storage arrays and Pure Storage FlashArray to Amazon Elastic Compute Cloud (Amazon EC2) instances on AWS Outposts directly from the AWS Management Console, making it easier to leverage 3rd-party storage with Outposts. Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises or edge location for a truly consistent hybrid experience.

With this enhancement, Outposts customers can combine the cloud capabilities offered by Outposts with the advanced data management features, high density storage, and high performance offered by NetApp on-premises enterprise storage arrays and Pure Storage FlashArray. Today, customers can use Amazon Elastic Block Store (Amazon EBS) and local instance store volumes to store and process data locally and comply with data residency requirements. Now they can do so while also leveraging external volumes backed by compatible 3rd-party storage, maximizing the value of their existing storage investments while benefiting from the cloud operational model enabled by Outposts.

This enhancement is available on Outposts racks and Outposts 2U servers at no additional charge in all AWS Regions where Outposts is available, except the AWS GovCloud Regions. See the FAQs for Outposts servers and Outposts racks for the latest availability information.

Users can use the AWS Management Console or AWS CLI to attach 3rd-party block data volumes to Amazon EC2 instances on Outposts.

Resource:
Blog: NEW: Simplifying the use of third-party block storage with AWS Outposts

 

Amazon FSx Intelligent-Tiering, a new storage class for FSx
AWS announces the general availability of Amazon FSx Intelligent-Tiering, a new storage class for Amazon FSx that costs up to 85% less than the FSx SSD storage class and up to 20% less than traditional HDD-based NAS storage on premises, and that brings full elasticity and intelligent tiering to NAS workloads. The storage class is available on Amazon FSx for OpenZFS.

Using Amazon FSx, customers can launch and run fully managed cloud file systems with familiar NAS capabilities such as point-in-time snapshots, data clones, and user quotas. Until now, customers have been moving NAS data sets for mission-critical and performance-intensive workloads to FSx for OpenZFS, using the existing SSD storage class for predictable high performance. With the new FSx Intelligent-Tiering storage class, customers can bring a broad range of general-purpose data sets to FSx for OpenZFS, including those with a large proportion of infrequently accessed data currently stored on low-cost HDD on premises. With FSx Intelligent-Tiering, customers no longer need to provision or manage storage, and they get automatic storage cost optimization as data access patterns change. There are no upfront costs or commitments, and customers pay only for the resources used.

FSx Intelligent-Tiering can be used when creating a new FSx for OpenZFS file system in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Frankfurt, Ireland), and Asia Pacific (Mumbai, Singapore, Sydney, Tokyo).
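
As a sketch only, creating a file system on the new storage class might look like the following with the AWS SDK for JavaScript v3. The INTELLIGENT_TIERING storage type value and the OpenZFS configuration fields shown are assumptions modeled on the existing CreateFileSystem API (additional networking fields may be required), so confirm them against the FSx documentation.

    // Sketch: create an FSx for OpenZFS file system on the Intelligent-Tiering storage class.
    import { FSxClient, CreateFileSystemCommand } from "@aws-sdk/client-fsx";

    const fsx = new FSxClient({ region: "us-east-1" });

    const fs = await fsx.send(
      new CreateFileSystemCommand({
        FileSystemType: "OPENZFS",
        StorageType: "INTELLIGENT_TIERING", // assumption: new storage class value
        SubnetIds: ["subnet-0123456789abcdef0"], // placeholder subnet
        OpenZFSConfiguration: {
          DeploymentType: "MULTI_AZ_1",
          ThroughputCapacity: 1280, // MBps; with intelligent tiering, storage itself is elastic
        },
      })
    );
    console.log(fs.FileSystem?.FileSystemId);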

Resource:
What is Amazon FSx for OpenZFS

 

Storage Browser for Amazon S3 available
Amazon S3 is announcing the general availability of Storage Browser for S3, an open source component that you can add to your web applications to provide your end users with a simple interface for data stored in S3. With Storage Browser for S3, you can provide authorized end users, such as customers, partners, and employees, with access to easily browse, download, and upload data in S3 directly from your own applications. Storage Browser for S3 is available in the AWS Amplify React and JavaScript client libraries.

With the general availability of Storage Browser for S3, end users can now search for their data based on file name and can copy and delete data they have access to. Additionally, Storage Browser for S3 now automatically calculates checksums of the data your end users upload and blocks requests that do not pass these durability checks.
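
For illustration, a React integration might look like the sketch below. The import paths and the createStorageBrowser/createAmplifyAuthAdapter helpers follow the Amplify UI library as we understand it; treat them as assumptions and confirm against the current documentation.

    // Sketch: render Storage Browser for S3 in a React app, using Amplify Auth for credentials.
    import { createAmplifyAuthAdapter, createStorageBrowser } from "@aws-amplify/ui-react-storage/browser";
    import "@aws-amplify/ui-react-storage/styles.css";

    // The auth adapter scopes browsing to the S3 locations the signed-in user is authorized to access.
    const { StorageBrowser } = createStorageBrowser({
      config: createAmplifyAuthAdapter(),
    });

    export default function FileBrowserPage() {
      return <StorageBrowser />;
    }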

AWS welcomes contributions and feedback on its roadmap, which outlines the plan for adding new capabilities to Storage Browser for S3. Storage Browser for S3 is backed by AWS Support, which means customers with AWS Business and Enterprise Support plans get 24/7 access to cloud support engineers.

Resources:
Blog: Connect users to data through your apps with Storage Browser for Amazon S3    
UI documentation

 

Amazon S3 adds new default data integrity protections
Amazon S3 updates the default behavior of object upload requests with new data integrity protections that build upon S3’s existing durability posture. The latest AWS SDKs now automatically calculate CRC-based checksums for uploads as data is transmitted over the network. S3 independently verifies these checksums and accepts objects after confirming that data integrity was maintained in transit over the public internet. Additionally, S3 now stores a CRC-based whole-object checksum in object metadata, even for multipart uploads, which helps you to verify the integrity of an object stored in S3 at any time.

S3 has always validated the integrity of object uploads from the S3 API to storage by calculating MD5 checksums, and has allowed customers to provide their own pre-calculated MD5 checksums for integrity validation. S3 also supports five additional checksum algorithms (CRC64NVME, CRC32, CRC32C, SHA-1, and SHA-256) for integrity validation on upload and download. Using checksums for data validation is a best practice for data durability, and this new default behavior adds data integrity protections with no changes to your applications and at no additional cost.

Default checksum protections are rolling out across all AWS Regions over the next few weeks. To get started, users can upload objects with the AWS Management Console or the latest AWS SDKs.
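
As a small sketch of the behavior described above (bucket and key names are placeholders): recent AWS SDKs attach a CRC-based checksum on upload, and the stored whole-object checksum can be read back later to re-verify integrity.

    // Sketch: upload an object, then read back its stored CRC checksum (AWS SDK for JavaScript v3).
    import { S3Client, PutObjectCommand, GetObjectAttributesCommand } from "@aws-sdk/client-s3";

    const s3 = new S3Client({});

    // Recent SDK versions calculate a CRC-based checksum automatically as data is sent;
    // setting ChecksumAlgorithm explicitly just makes the behavior visible.
    await s3.send(
      new PutObjectCommand({
        Bucket: "amzn-s3-demo-bucket", // placeholder bucket
        Key: "reports/example.csv",
        Body: "col1,col2\n1,2\n",
        ChecksumAlgorithm: "CRC32",
      })
    );

    // The checksum is stored in object metadata and can be retrieved at any time.
    const attrs = await s3.send(
      new GetObjectAttributesCommand({
        Bucket: "amzn-s3-demo-bucket",
        Key: "reports/example.csv",
        ObjectAttributes: ["Checksum"],
      })
    );
    console.log(attrs.Checksum?.ChecksumCRC32);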

Resources:
More about checksums in S3:
Blog: Introducing default data integrity protections for new objects in Amazon S3    
S3 User Guide

 

Announcing Amazon Elastic VMware Service (Preview)
AWS announces the preview of Amazon Elastic VMware Service (Amazon EVS). Amazon EVS is a new, native AWS service to run VMware Cloud Foundation (VCF) within your Amazon Virtual Private Cloud (Amazon VPC). 

Amazon EVS automates and simplifies deployments, providing a ready-to-use VCF environment on AWS. This allows you to migrate VMware-based VMs to AWS using the same VCF software and tools you already use in your on-premises environment.

With Amazon EVS, companies can take advantage of the scale, resilience, and performance of AWS together with familiar VCF software and tools. Customers have the choice to self-manage their EVS deployments or leverage AWS Partners to manage and operate them. Either way, they keep complete control over the VMware architecture and can optimize deployments to meet the unique demands of their applications. Amazon EVS provides the fastest path to migrate and operate VMware workloads on AWS.

Amazon EVS is currently available in preview for pre-selected customers and partners.

Resources:
Amazon EVS product page or contact us.

 

AWS Clean Rooms supports multiple clouds and data sources
AWS Clean Rooms announces support for collaborations using datasets from multiple clouds and data sources. This launch allows companies and their partners to easily collaborate on data stored in Snowflake and Amazon Athena, without having to move or share their underlying data among collaborators.

With AWS Clean Rooms' expanded support for data sources and clouds, organizations can collaborate with any company leveraging datasets across AWS and Snowflake, without any party having to move, reveal, or copy its underlying datasets. This launch enables companies to collaborate on the most up-to-date data with zero extract, transform, and load (zero-ETL), eliminating the cost and complexity of migrating datasets out of existing environments. For example, a media publisher with data stored in Amazon S3 and an advertiser with data stored in Snowflake can analyze their collective datasets to evaluate the advertiser's spend without having to build ETL data pipelines or share underlying data with one another. AWS says it is just getting started and will continue to expand the ways in which customers can securely collaborate in AWS Clean Rooms while maintaining control of their records and information.

With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake, to generate unique insights about advertising campaigns, investment decisions, and research and development.

For information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table.

Resource:
AWS Clean Rooms

 

AWS Data Transfer Terminal for high-speed data uploads
AWS announces the launch of AWS Data Transfer Terminal, a secure physical location where you can bring your storage devices, connect directly to the AWS network, and upload data to AWS, including Amazon S3, Amazon EFS, and other services, over a high throughput connection. Currently, Data Transfer Terminals are located in Los Angeles and New York. Users can reserve a time slot to visit their nearest Data Transfer Terminal facility and upload data.

AWS Data Transfer Terminals are for customer scenarios that create or collect large amounts of data that need to be transferred to the AWS cloud quickly and securely on an as-needed basis. These use cases span various industries and applications, including video production data for processing in the media and entertainment industry, training data for Advanced Driver Assistance Systems (ADAS) in the automotive industry, legacy data migration in the financial services industry, and equipment sensor data uploads in the industrial and agricultural sectors. By using Data Transfer Terminal, customers can significantly reduce the time it takes to upload large amounts of data, enabling them to process ingested data within minutes as opposed to days or weeks. Once data is uploaded to AWS, customers can efficiently analyze large datasets with Amazon Athena, train and run ML models on ingested data using Amazon SageMaker, or build scalable applications using Amazon Elastic Compute Cloud (Amazon EC2).
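
As a hypothetical example of that post-upload workflow, once data lands in an S3 bucket a customer could query it in place with Athena via the SDK; the table, database, and bucket names below are placeholders, not part of the announcement.

    // Sketch: run an Athena query over freshly uploaded data (AWS SDK for JavaScript v3).
    import { AthenaClient, StartQueryExecutionCommand } from "@aws-sdk/client-athena";

    const athena = new AthenaClient({});

    const query = await athena.send(
      new StartQueryExecutionCommand({
        QueryString: "SELECT sensor_id, AVG(reading) FROM sensor_data GROUP BY sensor_id", // placeholder table
        QueryExecutionContext: { Database: "ingested_data" }, // placeholder database
        ResultConfiguration: { OutputLocation: "s3://amzn-s3-demo-bucket/athena-results/" },
      })
    );
    console.log(query.QueryExecutionId);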

Resources:
Data Transfer Terminal product page and documentation.
To get started, make a reservation at a nearby Data Transfer Terminal in the AWS Management Console.
