In this paper, we explore FPGA minifloat implementations (floating-point representations with non-standard exponent and mantissa sizes), and show the use of a block-floating point implementation that shares the exponent across many numbers, reducing the logic required to perform floating-point operations.
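The exponent-sharing idea described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: a block of floats is reduced to one shared exponent (taken from the largest magnitude in the block) plus a small signed integer mantissa per value, which is the storage and logic saving block floating point provides.

```python
import math

def bfp_quantize(block, mantissa_bits=8):
    """Quantize a block of floats to shared-exponent mantissas.

    Each value is approximated as mantissa * 2**(shared_exp - (mantissa_bits - 1)),
    so only one exponent is stored for the whole block.
    """
    max_mag = max(abs(v) for v in block)
    if max_mag == 0.0:
        return [0] * len(block), 0
    # Shared exponent: exponent of the largest magnitude in the block.
    shared_exp = math.frexp(max_mag)[1]  # max_mag = f * 2**shared_exp, 0.5 <= f < 1
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    limit = 2 ** (mantissa_bits - 1) - 1
    mantissas = [max(-limit, min(limit, round(v / scale))) for v in block]
    return mantissas, shared_exp

def bfp_dequantize(mantissas, shared_exp, mantissa_bits=8):
    """Recover approximate float values from shared-exponent mantissas."""
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    return [m * scale for m in mantissas]
```

Because every element in the block uses the same exponent, arithmetic on the block reduces to integer operations on the mantissas, which is far cheaper in FPGA logic than full per-element floating-point alignment.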

In this paper, we introduce a domain-specific approach to overlays that leverages both software and hardware optimizations to achieve state-of-the-art performance on the FPGA for neural network acceleration.

This paper examines flexibility and its impact on FPGA design methodology, physical design tools, and computer-aided design (CAD). We describe the degrees of flexibility required to create efficient deep learning accelerators.

This white paper examines the future of deep neural networks, including sparse networks, low precision, and ultra-low precision, and compares the performance of Intel® Arria® 10 and Intel Stratix® 10 FPGAs against NVIDIA graphics processing units (GPUs).

This white paper describes how Intel FPGAs leverage the OpenCL™ platform to meet the image processing and classification needs of today's image-centric world.

This white paper provides a detailed look at the architecture and performance of our Deep Learning Accelerator intellectual property (IP) core. 

Build high-performance computer vision applications with integrated deep learning inference


Programmer's Introduction to the Intel® FPGA Deep Learning Acceleration Suite

AI with Intel FPGAs

Emerging AI Technologies on Intel Client Platforms

Deploying Intel® FPGAs for Deep Learning Inferencing with OpenVINO™ Toolkit

Introduction to Machine Learning

Democratizing AI with Intel® FPGAs

Architecting HPC with Intel FPGAs

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.