Mustang-V100-MX8
1.1 Introduction
Figure 1-1: Mustang-V100-MX8
The Mustang-V100-MX8 is a deep learning convolutional neural network acceleration
card that speeds up AI inference in a flexible and scalable way. Equipped with eight Intel®
Movidius™ Myriad™ X Vision Processing Units (VPUs), the Mustang-V100-MX8 PCIe card
can be added to an existing system, enabling high-performance computing at a modest
cost. VPUs run AI inference efficiently and are well suited to low-power applications such
as surveillance, retail, and transportation. Combining power efficiency with high
performance on dedicated DNN topologies, the card is ideal for AI edge computing
devices, reducing total power usage and extending the duty time of rechargeable edge
computing equipment.
The Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit is built
around convolutional neural networks (CNNs); it extends workloads across Intel®
hardware and maximizes performance. The toolkit optimizes pre-trained deep learning
models from frameworks such as Caffe, MXNet, and TensorFlow into an intermediate
representation (IR), which the inference engine then executes heterogeneously across
Intel® hardware such as CPUs, GPUs, the Intel® Movidius™ Neural Compute Stick,
and FPGAs.
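As a rough sketch of that workflow, the two steps below show how a pre-trained Caffe model might be converted to IR with the Model Optimizer and then run on the card through the inference engine. The install path, model name, image file, and sample binary are illustrative assumptions based on a typical OpenVINO Linux installation of that era; the exact paths and device name should be confirmed against the installation chapters later in this manual.

```shell
# ASSUMPTION: default OpenVINO install path on Linux; adjust to your setup.
# Step 1: convert a pre-trained Caffe model (hypothetical "squeezenet1.1")
# into the IR format (.xml topology + .bin weights) with the Model Optimizer.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model squeezenet1.1.caffemodel \
    --output_dir ./ir_model

# Step 2: run inference on the converted IR with an inference-engine sample,
# targeting the Mustang-V100-MX8's VPUs via the HDDL device plugin.
./classification_sample -m ./ir_model/squeezenet1.1.xml \
    -i car.png -d HDDL
```

Because the IR is hardware-agnostic, the same `.xml`/`.bin` pair can be retargeted to another device (for example a CPU) simply by changing the `-d` argument.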
Summary of Contents for Mustang-V100-MX8

Revision history: Version 1.00, January 17, 2019 (initial release)

Chapter 1: Introduction
Chapter 2: Unpacking
Chapter 3: Hardware Installation
Chapter 4: OpenVINO Toolkit Installation (Linux)
Chapter 5: OpenVINO Toolkit Installation (Windows 10)
Appendix A: Regulatory Compliance
Appendix B: Product Disposal
Appendix C: Hazardous Materials Disclosure