Chapter 3: DPU Configuration
Introduction
The DPU IP provides user-configurable parameters to optimize resource usage or to enable support for different features. You can select a configuration that trades off DSP slice, LUT, block RAM, and UltraRAM utilization against the programmable logic resources available on your device. There is also an option to set the number of DPU cores to be instantiated.
The deep neural network features and the associated parameters supported by the DPU are shown in the following table.
Table 7: Deep Neural Network Features and Parameters Supported by DPU

Features        Parameter        Description
Convolution     Kernel Sizes     W: 1-16, H: 1-16
                Strides          W: 1-4, H: 1-4
                Padding_w        1 - (kernel_w-1)
                Padding_h        1 - (kernel_h-1)
                Input Size       Arbitrary
                Input Channel    1 - 256 * channel_parallel
                Output Channel   1 - 256 * channel_parallel
                Activation       ReLU & LeakyReLU
                Dilation         dilation * input_channel <= 256 * channel_parallel
                                 && stride_w == 1 && stride_h == 1
Deconvolution   Kernel Sizes     W: 1-16, H: 1-16
                Stride_w         stride_w * output_channel <= 256 * channel_parallel
                Stride_h         Arbitrary
                Padding_w        1 - (kernel_w-1)
                Padding_h        1 - (kernel_h-1)
                Input Size       Arbitrary
                Input Channel    1 - 256 * channel_parallel
                Output Channel   1 - 256 * channel_parallel
                Activation       ReLU & LeakyReLU
Max Pooling     Kernel Sizes     W: 1-16, H: 1-16
                Strides          W: 1-4, H: 1-4
                Padding          W: 1-4, H: 1-4
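
As an illustration of how these limits apply, the following minimal Python sketch checks whether a convolution layer stays within the Convolution ranges listed in Table 7. The conv_supported helper and its channel_parallel default are assumptions introduced here for illustration only; channel_parallel is set by the DPU architecture you select (for example, 16 for a B4096 configuration), so adjust it to match your own configuration.

    # Hypothetical helper (not part of the DPU tools): validates a convolution
    # layer against the Convolution limits in Table 7.

    def conv_supported(kernel_w, kernel_h, stride_w, stride_h,
                       in_channels, out_channels, dilation=1,
                       channel_parallel=16):
        """Return True if the layer fits the DPU Convolution limits in Table 7."""
        limit = 256 * channel_parallel
        checks = [
            1 <= kernel_w <= 16 and 1 <= kernel_h <= 16,   # kernel sizes
            1 <= stride_w <= 4 and 1 <= stride_h <= 4,     # strides
            1 <= in_channels <= limit,                     # input channel range
            1 <= out_channels <= limit,                    # output channel range
        ]
        if dilation > 1:
            # Dilated convolution additionally requires stride 1 in both
            # dimensions and a tighter bound on dilation * input_channel.
            checks.append(stride_w == 1 and stride_h == 1)
            checks.append(dilation * in_channels <= limit)
        return all(checks)

    if __name__ == "__main__":
        # A 3x3, stride-1 convolution with 256 input and 512 output channels.
        print(conv_supported(3, 3, 1, 1, 256, 512))  # True for channel_parallel=16

A layer that violates any of these ranges (for example, a 5x5 kernel with stride 8) would return False and would need to be decomposed or adjusted before it can be mapped onto the DPU.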