
Availability of Second-Gen FPGA-Powered Amazon EC2 Instances (F2)

F2 instances are available in 2 sizes

By Jeff Barr, chief evangelist, AWS

Equipped with up to 8 AMD FPGAs, AMD EPYC (Milan) processors with up to 192 cores, High Bandwidth Memory (HBM), up to 8 TB of SSD-based instance storage, and up to 2 TB of memory, the F2 instances are available in two sizes, and are ready to accelerate your genomics, multimedia processing, big data, satellite communication, networking, silicon simulation, and live video workloads.

Quick FPGA Recap
Here’s how I explained the FPGA model when we previewed the first generation of FPGA-powered Amazon Elastic Compute Cloud (Amazon EC2) instances:

One of the more interesting routes to a custom, hardware-based solution is known as a Field Programmable Gate Array, or FPGA. In contrast to a purpose-built chip which is designed with a single function in mind and then hard-wired to implement it, an FPGA is more flexible. It can be programmed in the field, after it has been plugged in to a socket on a PC board. Each FPGA includes a fixed, finite number of simple logic gates. Programming an FPGA is ‘simply’ a matter of connecting them up to create the desired logical functions (AND, OR, XOR, and so forth) or storage elements (flip-flops and shift registers). Unlike a CPU which is essentially serial (with a few parallel elements) and has fixed-size instructions and data paths (typically 32 or 64 bit), the FPGA can be programmed to perform many operations in parallel, and the operations themselves can be of almost any width, large or small.
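To make the gate-level model above concrete, here is a minimal Python sketch (an illustration, not FPGA code) that wires AND/OR/XOR primitives into a one-bit full adder and then chains adders to an arbitrary width, the kind of width-flexible logic an FPGA fabric implements directly in hardware:

```python
# Illustrative only: modeling gate composition in software.
# On a real FPGA these gates become configured lookup tables,
# all evaluated in parallel rather than sequentially.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """One-bit full adder built purely from gate primitives."""
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

def ripple_add(x_bits, y_bits):
    """N-bit ripple-carry adder: the width is whatever you wire up,
    not a fixed 32 or 64 bits as on a CPU. Bits are LSB-first."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 5 + 3 = 8: [1,0,1] (=5, LSB-first) + [1,1,0] (=3) -> [0,0,0] with carry 1
print(ripple_add([1, 0, 1], [1, 1, 0]))
```

Adding one more bit of width is just one more adder in the chain, which is the flexibility the quoted passage describes.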

Since that launch, AWS customers have used F1 instances to host many different types of applications and services. With a newer FPGA, more processing power, and more memory bandwidth, the new F2 instances are an even better host for highly parallelizable, compute-intensive workloads.

Each of the AMD Virtex UltraScale+ HBM VU47P FPGAs has 2.85 million system logic cells and 9,024 DSP slices (up to 28 TOPS of DSP compute performance when processing INT8 values). The FPGA accelerator card associated with each F2 instance provides 16 GB of High Bandwidth Memory and 64 GB of DDR4 memory per FPGA.

Inside F2
F2 instances are powered by 3rd-gen AMD EPYC (Milan) processors. In comparison to F1 instances, they offer up to 3x as many processor cores, up to twice as much system memory and NVMe storage, and up to 4x the network bandwidth. Each FPGA comes with 16 GB of HBM with up to 460 GB/s of bandwidth. Here are the instance sizes and specs:

Instance Name   vCPUs   FPGAs   FPGA Memory (HBM / DDR4)   Instance Memory   NVMe Storage            EBS Bandwidth   Network Bandwidth
f2.12xlarge     48      2       32 GB / 128 GB             512 GB            1,900 GB (2 x 950 GB)   15 Gbps         25 Gbps
f2.48xlarge     192     8       128 GB / 512 GB            2,048 GB          7,600 GB (8 x 950 GB)   60 Gbps         100 Gbps
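As a quick sanity check on the specs, the FPGA memory figures are just the per-FPGA amounts (16 GB of HBM and 64 GB of DDR4 per accelerator card) multiplied by the FPGA count:

```python
# Per-FPGA memory from the accelerator card spec (GB).
HBM_PER_FPGA = 16
DDR4_PER_FPGA = 64

def fpga_memory(fpga_count):
    """Aggregate (HBM, DDR4) memory in GB for an instance size."""
    return fpga_count * HBM_PER_FPGA, fpga_count * DDR4_PER_FPGA

print(fpga_memory(2))  # f2.12xlarge -> (32, 128)
print(fpga_memory(8))  # f2.48xlarge -> (128, 512)
```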

The high-end f2.48xlarge instance supports the AWS Cloud Digital Interface (CDI) to reliably transport uncompressed live video between applications, with instance-to-instance latency as low as 8 milliseconds.
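To see why uncompressed transport is demanding, here is a back-of-the-envelope calculation (the 1080p60, 10-bit 4:2:2 format is an assumed example, not from the announcement; 4:2:2 at 10 bits works out to 20 bits per pixel):

```python
# Assumed example stream: 1080p60 with 10-bit 4:2:2 sampling.
width, height, fps = 1920, 1080, 60
bits_per_pixel = 20  # 10-bit luma + 10 bits of shared chroma per pixel

bits_per_second = width * height * fps * bits_per_pixel
print(f"{bits_per_second / 1e9:.2f} Gbps")  # ~2.49 Gbps for one stream
```

A handful of such streams quickly consumes serious bandwidth, which is where the 100 Gbps networking of the f2.48xlarge comes in.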

Building FPGA Applications
The AWS EC2 FPGA Development Kit contains the tools that you will use to develop, simulate, debug, compile, and run your hardware-accelerated FPGA applications. You can launch the kit’s FPGA Developer AMI on a memory-optimized or compute-optimized instance for development and simulation, then use an F2 instance for final debugging and testing.

The tools included in the developer kit support a variety of development paradigms, tools, accelerator languages, and debugging options. Regardless of your choice, you will ultimately create an Amazon FPGA Image (AFI) which contains your custom acceleration logic and the AWS Shell, which implements access to the FPGA memory, PCIe bus, interrupts, and external peripherals. You can deploy AFIs to as many F2 instances as desired, share them with other AWS accounts, or publish them on AWS Marketplace.
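AFIs are managed through the EC2 API. As a hedged sketch, registering a built design and checking its status looks roughly like this with the AWS CLI (the bucket, key, and name values below are hypothetical placeholders, not from the article):

```shell
# Register an AFI from a design checkpoint previously uploaded to S3.
# Bucket, key, and name values are hypothetical placeholders.
aws ec2 create-fpga-image \
    --name my-accelerator \
    --description "Custom acceleration logic" \
    --input-storage-location Bucket=my-fpga-bucket,Key=dcp/my-design.tar

# List the AFIs owned by your account, including build status.
aws ec2 describe-fpga-images --owners self
```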

If you have already created an application that runs on F1 instances, you will need to update your development environment to use the latest AMD tools, then rebuild and validate before upgrading to F2 instances.

FPGA Instances in Action
Here are some cool examples of how F1 and F2 instances can support unique and highly demanding workloads:

  • Genomics – Multinational pharmaceutical and biotechnology company AstraZeneca used thousands of F1 instances to build one of the world’s fastest genomics pipelines, able to process over 400,000 whole genome samples in under two months. They will adopt Illumina DRAGEN for F2 to realize better performance at a lower cost, while accelerating disease discovery, diagnosis, and treatment.
  • Satellite Communication – Satellite operators are moving from inflexible and expensive physical infrastructure (modulators, demodulators, combiners, splitters, and so forth) toward agile, software-defined, FPGA-powered solutions. Using the digital signal processor (DSP) elements on the FPGA, these solutions can be reconfigured in the field to support new waveforms and to meet changing requirements. Key F2 features such as support for up to 8 FPGAs per instance, generous amounts of network bandwidth, and support for the Data Plane Development Kit (DPDK) using Virtual Ethernet can be used to support processing of multiple, complex waveforms in parallel.
  • Analytics – NeuroBlade‘s SQL Processing Unit (SPU) integrates with Presto, Apache Spark, and other open source query engines, delivering faster query processing and market-leading query throughput efficiency when run on F2 instances.
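The genomics figure above implies a striking sustained rate. A quick back-of-the-envelope calculation (assuming a full 60 days for "under two months", so the real rate was at least this high):

```python
genomes = 400_000
days = 60  # assumed upper bound for "under two months"

per_day = genomes / days
print(f"{per_day:,.0f} genomes/day")  # ~6,667 whole genomes per day
```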

Things to Know
Here are a few final things that you should know about the F2 instances:

  • Regions – F2 instances are available in the US East (N. Virginia) and Europe (London) AWS Regions, with plans to extend availability to additional regions over time.
  • OSs – F2 instances are Linux-only.
  • Purchasing Options – F2 instances are available in On-Demand, Spot, Savings Plan, Dedicated Instance, and Dedicated Host form.