Design and Implementation of Deep Learning 2D Convolutions on modern CPUs

V Kelefouras, G Keramidas

Research output: Contribution to journal › Article › peer-review


Abstract

In this article, a new method is presented for accelerating the execution of convolution layers in Deep Neural Networks. This research work provides the theoretical background to efficiently design and implement convolution layers on x86/x64 CPUs, based on the target layer parameters, quantization level and hardware architecture. The proposed work is general and can also be applied to other processor families, e.g., Arm. The proposed work achieves high speedup values over the state of the art, the Intel oneDNN library, by applying compiler optimizations such as vectorization, register blocking and loop tiling in a more efficient way. This is achieved by developing an analytical modelling approach for finding the optimization parameters. A thorough experimental evaluation has been carried out on two Intel CPU platforms, for DenseNet-121, ResNet-50 and SqueezeNet (comprising 112 different convolution layers), and for both FP32 and int8 input/output tensors (quantization). The experimental results show that the convolution layers of the aforementioned models are executed from 1.1 to 7.2 times faster.
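
To make the optimizations named in the abstract concrete, the sketch below shows a direct 2D convolution in C with loop tiling over the output width, register blocking of partial sums, and an innermost loop written to be amenable to compiler auto-vectorization. It is an illustrative reconstruction, not the authors' implementation or the oneDNN code: the NCHW layout, FP32 data type, unit stride, absence of padding, and all sizes and tile factors (C_IN, K_OUT, H, W, R, S, W_TILE) are assumptions made for the example.

```c
/* Minimal sketch (not the article's implementation) of a direct 2D
 * convolution with loop tiling and register blocking.  All sizes,
 * the tile factor and the NCHW/FP32/stride-1/no-padding setup are
 * illustrative assumptions, not values taken from the article. */
#include <stdio.h>

#define C_IN   16   /* input channels  (assumed) */
#define K_OUT  16   /* output channels (assumed) */
#define H      32   /* input height    (assumed) */
#define W      32   /* input width     (assumed) */
#define R      3    /* kernel height   (assumed) */
#define S      3    /* kernel width    (assumed) */
#define H_OUT  (H - R + 1)
#define W_OUT  (W - S + 1)
#define W_TILE 6    /* register-blocking factor along the output width (assumed) */

static float in [C_IN][H][W];
static float wts[K_OUT][C_IN][R][S];
static float out[K_OUT][H_OUT][W_OUT];

static void conv2d_tiled(void)
{
    for (int k = 0; k < K_OUT; k++)
      for (int oh = 0; oh < H_OUT; oh++)
        for (int ow0 = 0; ow0 < W_OUT; ow0 += W_TILE) {
            /* Register block: W_TILE partial sums kept in registers. */
            float acc[W_TILE] = {0.0f};
            int tile = (ow0 + W_TILE <= W_OUT) ? W_TILE : (W_OUT - ow0);

            for (int c = 0; c < C_IN; c++)
              for (int r = 0; r < R; r++)
                for (int s = 0; s < S; s++) {
                    float wv = wts[k][c][r][s];
                    /* Innermost loop over contiguous output pixels:
                       a candidate for compiler auto-vectorization. */
                    for (int t = 0; t < tile; t++)
                        acc[t] += wv * in[c][oh + r][ow0 + s + t];
                }

            for (int t = 0; t < tile; t++)
                out[k][oh][ow0 + t] = acc[t];
        }
}

int main(void)
{
    /* Fill input and weights with simple deterministic values. */
    for (int c = 0; c < C_IN; c++)
        for (int h = 0; h < H; h++)
            for (int w = 0; w < W; w++)
                in[c][h][w] = (float)((c + h + w) % 7);

    for (int k = 0; k < K_OUT; k++)
        for (int c = 0; c < C_IN; c++)
            for (int r = 0; r < R; r++)
                for (int s = 0; s < S; s++)
                    wts[k][c][r][s] = (float)((k + c + r + s) % 5) * 0.1f;

    conv2d_tiled();
    printf("out[0][0][0] = %f\n", out[0][0][0]);
    return 0;
}
```

Choosing the tiling and blocking factors (here the fixed W_TILE) is exactly the part the article addresses with its analytical model; in practice such parameters depend on the layer shape, the quantization level and the register/cache capacities of the target CPU.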
Original language: English
Journal: IEEE Transactions on Parallel and Distributed Systems
Publication status: Published - 4 Oct 2023

