DPU IP Product Guide
PG338 (v1.2) March 26, 2019
IP Facts
Introduction
The Xilinx® Deep Learning Processor Unit (DPU) is
a configurable engine dedicated to convolutional
neural networks. The degree of computational
parallelism can be configured to match the selected
device and application. The DPU includes a set of
highly optimized instructions and supports most
convolutional neural networks, such as VGG, ResNet,
GoogLeNet, YOLO, SSD, MobileNet, and FPN.
Features
• One slave AXI interface for accessing configuration and status registers.
• One master interface for accessing instructions.
• Supports configurable AXI master interface with 64 or 128 bits for accessing data.
• Supports individual configuration of each channel.
• Supports optional interrupt request generation.
• Some highlights of DPU functionality include:
  o Configurable hardware architectures: B512, B800, B1024, B1152, B1600, B2304, B3136, and B4096
  o Configurable core number up to three
  o Convolution and deconvolution
  o Max pooling
  o ReLU and Leaky ReLU
  o Concat
  o Elementwise
  o Dilation
  o Reorg
  o Fully connected layer
  o Batch normalization
  o Split
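In the DPU naming convention, the number following "B" in each hardware architecture name (B512 through B4096) corresponds to the architecture's peak operations per clock cycle, so theoretical peak throughput scales with both the selected architecture and the clock rate. A minimal sketch of that calculation (the helper name and the 300 MHz example clock are illustrative assumptions, not values from this guide):

```python
# Sketch: theoretical peak throughput of a DPU architecture.
# Assumption: the "B" number (e.g. B4096) denotes peak operations per clock cycle.

def peak_gops(ops_per_cycle: int, clock_hz: float) -> float:
    """Theoretical peak throughput in giga-operations per second (GOPS)."""
    return ops_per_cycle * clock_hz / 1e9

# Example: a B4096 core at an assumed 300 MHz clock.
print(peak_gops(4096, 300e6))  # → 1228.8 GOPS
```

Achieved throughput on a real network is lower than this theoretical peak, since it depends on layer shapes, memory bandwidth, and scheduling efficiency.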
DPU IP Facts Table

Core Specifics
Supported Device Family: Zynq®-7000 SoC and Zynq UltraScale+™ MPSoC Family
Supported User Interfaces: Memory-mapped AXI interfaces
Resources: See

Provided with Core
Design Files: Encrypted RTL
Example Design: Verilog
Constraint File: Xilinx Design Constraints (XDC)
Supported S/W Driver: Included in PetaLinux

Tested Design Flows
Design Entry: Vivado® Design Suite
Simulation: N/A
Synthesis: Vivado Synthesis

Support
Notes:
1. Linux OS and driver support information are available from the DPU TRD or DNNDK.
2. If the requirement is on Zynq-7000 SoC, contact your local FAE.
3. For the supported versions of the tools, see the Vivado Design Suite User Guide: Release Notes, Installation, and Licensing.