Xilinx Vitis AI Release 1.0 | 902.8 MB
Xilinx, Inc., the leader in adaptive and intelligent computing, is pleased to announce the availability of Vitis AI Release 1.0. Vitis AI is Xilinx's development platform for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards.

Release 1.0 - New Features

Model Zoo
- Release custom Caffe framework distribution caffe_xilinx
- Add accuracy test code and retrain code for all Caffe models
- Increase TensorFlow models to 19, with float/fixed model versions and accuracy test code, including popular models such as SSD, YOLOv3, MLPerf:ssd_resnet34, etc.
- Add multi-task Caffe model for ADAS applications
Optimizer (A separate package which requires licensing)
- Caffe Pruning
. Support for depthwise convolution layer
. Remove internal implementation-related parameters in transformed prototxt
- TensorFlow Pruning
. Release pruning tool based on TensorFlow 1.12
. Add more validations to user-specified parameters
. Bug fixes for supporting more networks
- Darknet pruning
. New interface for the pruning tool
. Support for YOLOv3-SPP
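The pruning tools above all reduce model size by removing redundant channels from convolutional layers. As a rough, pure-Python illustration of the general idea (the function names, the L1-magnitude criterion, and the keep ratio here are illustrative assumptions; the Optimizer's actual analysis is more sophisticated):

```python
# Minimal sketch of magnitude-based channel pruning (illustrative only;
# not the Vitis AI Optimizer's actual algorithm or interface).

def channel_l1_norms(weights):
    """weights: list of output channels, each a flat list of floats."""
    return [sum(abs(w) for w in channel) for channel in weights]

def prune_channels(weights, keep_ratio):
    """Keep the keep_ratio fraction of channels with the largest L1 norm."""
    norms = channel_l1_norms(weights)
    n_keep = max(1, int(len(weights) * keep_ratio))
    # Rank channels by norm, then restore original channel order.
    ranked = sorted(range(len(weights)), key=lambda i: norms[i], reverse=True)
    kept = sorted(ranked[:n_keep])
    return [weights[i] for i in kept], kept

conv_weights = [
    [0.01, -0.02, 0.01],   # weak channel
    [0.9, -1.1, 0.7],      # strong channel
    [0.5, 0.4, -0.6],      # medium channel
    [0.001, 0.0, -0.002],  # weak channel
]
pruned, kept_idx = prune_channels(conv_weights, keep_ratio=0.5)
print(kept_idx)  # → [1, 2]: the two strongest channels survive
```

After pruning, the retrain code shipped with the models (see the Model Zoo section) recovers the accuracy lost by removing channels.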
Quantizer
- TensorFlow quantization
. Support DPU simulation and dumping quantize simulation results.
. Improve support for some layers and node patterns, including tf.keras.layers.Conv2DTranspose, tf.keras.Dense, tf.keras.layers.LeakyReLU, tf.conv2d + tf.mul
. Move temporary quantize info files from /tmp/ to the $output_dir/temp folder, to support multiple users on one machine
. Bugfixes
- Caffe quantization
. Enhanced activation data dump function
. Ubuntu 18 support
. Non-unified bit width quantization support
. Support HDF5 data layer
. Support of scale layers without parameters but with multiple inputs
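Both quantizers convert float models into fixed-point form for DPU deployment. The core arithmetic can be sketched as symmetric power-of-two quantization (a simplified illustration; the function names and bit-width selection below are assumptions for this sketch, not the vai_q tool internals):

```python
# Illustrative sketch of symmetric power-of-two fixed-point quantization,
# the style of 8-bit quantization commonly used for fixed-point inference.
# Simplified: real calibration uses activation statistics over a dataset.
import math

def choose_frac_bits(values, bit_width=8):
    """Pick a fractional bit count so the largest magnitude still fits."""
    max_abs = max(abs(v) for v in values)
    int_bits = max(0, math.ceil(math.log2(max_abs))) if max_abs > 0 else 0
    return bit_width - 1 - int_bits  # one bit reserved for the sign

def quantize(values, frac_bits, bit_width=8):
    scale = 2 ** frac_bits
    qmin, qmax = -(2 ** (bit_width - 1)), 2 ** (bit_width - 1) - 1
    return [max(qmin, min(qmax, round(v * scale))) for v in values]

def dequantize(qvalues, frac_bits):
    return [q / (2 ** frac_bits) for q in qvalues]

acts = [0.5, -1.25, 3.1, -0.03]
fb = choose_frac_bits(acts)   # 3.1 needs 2 integer bits -> 5 fractional bits
q = quantize(acts, fb)
print(fb, q)                  # → 5 [16, -40, 99, -1]
```

Dequantizing `q` with the same fractional bit count shows the rounding error introduced: `dequantize(q, fb)` yields `[0.5, -1.25, 3.09375, -0.03125]`. The "non-unified bit width" support above means different layers may use different bit widths instead of one global setting.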
Compiler
- Support cross compilation for Zynq and ZU+ based platforms
- Enhancements and bug fixes for a broader set of TensorFlow models
- New Split IO memory model enablement for performance optimization
- Improved code generation
- Support Caffe/TensorFlow model compilation over cloud DPU V3E (Early Access)
Runtime
- Enable edge to cloud deployment over XRT 2019.2
- Offer the unified Vitis AI C++/Python programming APIs
- DPU priority-based scheduling and DPU core affinity
- Introduce adaptive operating layer to unify runtime’s underlying interface for Linux, XRT and QNX
- QNX RTOS enablement to support automotive customers.
- Neptune API for X+ML
- Performance improvements
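Priority-based scheduling with core affinity can be pictured as a priority queue of inference requests consumed by free DPU cores. A toy model of that idea (the class and method names are hypothetical; this is not the Vitis AI runtime's real interface):

```python
# Toy model of priority-based scheduling with core affinity
# (illustrative only; not the Vitis AI runtime's actual scheduler).
import heapq
import itertools

class DpuScheduler:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # FIFO tie-break within a priority

    def submit(self, task, priority=0, core_affinity=None):
        # Lower number = higher priority, like a POSIX nice value.
        heapq.heappush(self._queue,
                       (priority, next(self._counter), task, core_affinity))

    def dispatch(self, free_cores):
        """Pop the highest-priority task runnable on one of free_cores."""
        deferred, chosen = [], None
        while self._queue:
            prio, seq, task, affinity = heapq.heappop(self._queue)
            if affinity is None or affinity in free_cores:
                core = affinity if affinity is not None else next(iter(free_cores))
                chosen = (task, core)
                break
            deferred.append((prio, seq, task, affinity))  # pinned core is busy
        for item in deferred:
            heapq.heappush(self._queue, item)
        return chosen

sched = DpuScheduler()
sched.submit("resnet50", priority=1)
sched.submit("yolov3", priority=0, core_affinity=1)
task, core = sched.dispatch(free_cores={0, 1})
print(task, core)  # → yolov3 1: higher priority wins, runs on its pinned core
```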
DPU
- DPUv2 for Zynq and ZU+
. Support Vitis flow with reference design based on ZCU102
. The same DPU also supports Vivado flow
. All features are configurable
. Fixed several bugs
- DPUv3 for U50/U280 (Early access)
Vitis AI Library
- Support of new Vitis AI Runtime - Vitis AI Library is updated to be based on the new Vitis AI Runtime with unified APIs. It also fully supports XRT 2019.2.
- New DPU support - In addition to DPUv2 for Zynq and ZU+, the AI Library now supports the new DPUv3 IPs for Alveo/Cloud using the same code (Early Access).
- New TensorFlow model support - Up to 19 TensorFlow models from the official TensorFlow repository are now supported.
- New libraries and demos - Two new libraries, "libdpmultitask" and "libdptfssd", support multi-task models and TensorFlow SSD models, respectively. An updated classification demo shows how to use the unified APIs in the Vitis AI runtime.
- New Open Source Library - The "libdpbase" library is open-sourced in this release, showing how to use the unified APIs in the Vitis AI runtime to construct high-level libraries.
- New Installation Method - The host-side environment now uses image installation, which simplifies and unifies the installation process.
Others
- Support for TVM, which enables support for PyTorch, ONNX, and SageMaker Neo
- Partitioning of TensorFlow models and support for native xDNNv3 execution in TensorFlow
- Automated TensorFlow model partition, compilation, and deployment over DPUv3 (Early Access)
- Butler API for the following:
. Automatic resource discovery and management
. Multiprocess support - Ability for many containers/processes to access a single FPGA
. FPGA slicing - Ability to use part of an FPGA
. Scale-out support for multiple FPGAs on the same server
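The slicing and scale-out features amount to treating the installed FPGAs as a pool of allocatable slices shared across processes. A hypothetical toy allocator sketching that idea (class, method, and device names are all illustrative; this is not the Butler API's actual interface):

```python
# Toy FPGA resource pool illustrating slicing and scale-out
# (hypothetical model; not the Butler API's real interface).

class FpgaPool:
    def __init__(self, fpgas):
        # fpgas: {device_name: number_of_free_slices}
        self._free = dict(fpgas)

    def acquire(self, slices_needed):
        """Grant slices from the first device with enough free capacity."""
        for name, free in self._free.items():
            if free >= slices_needed:
                self._free[name] = free - slices_needed
                return name
        return None  # no capacity: the caller may queue or scale out

    def release(self, name, slices):
        self._free[name] += slices

pool = FpgaPool({"fpga-0": 4, "fpga-1": 4})
a = pool.acquire(3)  # first process takes most of device 0
b = pool.acquire(2)  # second process no longer fits on device 0
print(a, b)          # → fpga-0 fpga-1
```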
- Support for pix2pix models
Vitis AI is Xilinx's development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards. It consists of optimized IP, tools, libraries, models, and example designs. It is designed with high efficiency and ease of use in mind, unleashing the full potential of AI acceleration on Xilinx FPGAs and ACAPs. Vitis AI is composed of the following key components:
- AI Model Zoo - A comprehensive set of pre-optimized models that are ready to deploy on Xilinx devices.
- AI Optimizer - An optional model optimizer that can prune a model by up to 90%. It is separately available with commercial licenses.
- AI Quantizer - A powerful quantizer that supports model quantization, calibration, and fine-tuning.
- AI Compiler - Compiles the quantized model into a highly efficient instruction set and data flow.
- AI Profiler - Performs an in-depth analysis of the efficiency and utilization of the AI inference implementation.
- AI Library - Offers high-level yet optimized C++ APIs for AI applications from edge to cloud.
- DPU - Efficient and scalable IP cores that can be customized to meet the needs of many different applications.

Vitis AI empowers software developers to keep up with AI innovation and unifies AI application development from edge to cloud.

Xilinx is the inventor of the FPGA, programmable SoCs, and now, the ACAP. Our highly flexible programmable silicon, enabled by a suite of advanced software and tools, drives rapid innovation across a wide span of industries and technologies - from consumer to cars to the cloud. Xilinx delivers the most dynamic processing technology in the industry, enabling rapid innovation with its adaptable, intelligent computing.

Product: Xilinx Vitis AI
Version: Release 1.0
Supported Architectures: x64
Website Home Page: www.xilinx.com
Language: English
System Requirements: Linux
Supported Operating Systems:
Size: 902.8 MB