Run PaddlePaddle models on Nano

Hi,
We plan to run PaddlePaddle models on Nano in the near future, and we have the questions below:

  1. How does Nano support PaddlePaddle models? Could you provide some specific documentation and samples?

  2. Does TensorRT support PaddlePaddle models well, or do we need to use another inference tool such as PaddleMobile?

Hi,

1. TensorRT is a software library. If the model is fully supported by TensorRT, you can run it on Nano.
Usually, the limitation comes from a special layer that has no corresponding implementation in TensorRT.
Here is our support matrix: Support Matrix :: NVIDIA Deep Learning TensorRT Documentation
(A quick way to check a specific model is sketched below.)

2. Suppose your pipeline integrates Paddle with TensorRT.
For the TensorRT part, there is no difference, since the implementation is identical.
For the Paddle part, please check the Paddle documentation for more information.
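
As a rough check (a sketch only, assuming the model has already been exported to an ONNX file named model.onnx, and that trtexec ships with your JetPack install), you can ask TensorRT to build an engine and watch the log for unsupported layers:

/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --fp16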

Thanks.

Hi AastaLLL,

1) So does TensorRT have a direct PaddlePaddle parser to support PaddlePaddle models?
If not, does TensorRT support PaddlePaddle by converting PaddlePaddle models to ONNX models?

2) Can the Nano hardware be used by other inference tools, such as PaddleMobile, without using TensorRT?

Hi,

1. No, we don’t have a parser for PaddlePaddle.
But you can choose to run inference on your model with TensorRT inside the PaddlePaddle framework.
ONNX also works; currently, we have parsers for Caffe/TensorFlow/ONNX (a conversion sketch follows the feature list below).

2. We suppose so. It looks like PaddleMobile officially supports the ARM platform.

Features
    high performance in support of ARM CPU
    support Mali GPU
    support Adreno GPU
    support the realization of GPU Metal on Apple devices
    support implementation on ZU5, ZU9 and other FPGA-based development boards
    support implementation on Raspberry Pi and other arm-linux development boards
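
As a sketch of the ONNX route mentioned in 1. (this assumes the paddle2onnx converter and an exported Paddle inference model; the file names here are examples only):

paddle2onnx --model_dir ./inference_model \
            --model_filename model.pdmodel \
            --params_filename model.pdiparams \
            --save_file model.onnx

The resulting model.onnx can then be fed to TensorRT's ONNX parser on Nano, for example with trtexec:

/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.trt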

Thanks.

Hi AastaLLL,

  1. Got it.
  2. We have internal Paddle documents, but we would like to hear NVIDIA's thoughts for reference.
    For this question, I mean: can the Nano GPU be used by PaddleMobile?

Thanks for your quick reply

Hi,

The Nano GPU is the sm_53 architecture.
Just compile the framework with the corresponding flags and it should work:

nvcc -gencode arch=compute_53,code=sm_53 \
     -gencode arch=compute_53,code=compute_53 \
...
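
If the framework is built with CMake instead of plain nvcc, a rough equivalent (assuming CMake 3.18 or newer, which provides CMAKE_CUDA_ARCHITECTURES) is:

cmake -DCMAKE_CUDA_ARCHITECTURES=53 ..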

Thanks.

Hi,

Thanks