Generating Plan file for Jetson Nano developer kit 4GB

Hi,

So I used this page to generate a plan file for my TensorFlow model. Now I want to use this plan file for inference on a Jetson Nano (it arrives in a few days). However, I read here that when switching to a different GPU you should regenerate the plan file.
I generated my plan file on Google Colab with a Tesla T4 and CUDA 11.0. So my questions are:

  • Do I need to regenerate my plan file?
  • Since the Python API for TRT is not supported on Jetson, do I have to use the C++ API? What are the steps for that?

Hi,

  • YES. The plan file is built for a specific GPU architecture, so you need to regenerate it on the Jetson Nano.

  • The TensorRT Python API is supported on Jetson now, so you can rebuild the plan there directly (see the sketch below).
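
For reference, below is a minimal sketch of rebuilding the plan on the Nano from your ONNX model with the TensorRT Python API. The file paths and the FP16 flag are placeholders, not a fixed recipe; trtexec under /usr/src/tensorrt/bin is an equivalent command-line route.

    # Minimal sketch (TensorRT Python API); run this ON the Jetson Nano so the plan
    # is built for its GPU. "model.onnx" and "model.plan" are placeholder paths.
    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise SystemExit("ONNX parsing failed")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28        # 256 MB; keep modest on the Nano
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # optional, usually helps on Jetson

    engine = builder.build_engine(network, config)
    with open("model.plan", "wb") as f:
        f.write(engine.serialize())

    # Command-line alternative:
    #   /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.plan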

Thanks.

Thanks for your answer!

I have a few more questions:

  • When creating the plan file I was expecting some errors, because my model contains some custom layers for which I have to create plugins. Is the plan file generated even if the model contains unsupported layers?
  • What’s the difference between a plan file and an engine file?
  • Is there an updated resource for creating plugins using the Python API?

Hi,

  • Please share your error with us first.
  • They are the same thing.
  • You can find an example below:
    /usr/src/tensorrt/samples/python/uff_custom_plugin
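
As a rough sketch of how those samples load a compiled plugin from Python before building the network (libmyplugin.so is a placeholder for the library your own plugin build produces):

    import ctypes
    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    # Load the shared library that registers your custom IPluginV2 implementation.
    # "libmyplugin.so" is a placeholder path.
    ctypes.CDLL("./libmyplugin.so")

    # Also make sure TensorRT's built-in plugins are registered.
    trt.init_libnvinfer_plugins(TRT_LOGGER, "")

    # Optionally list the registered plugin creators to confirm yours is visible.
    for creator in trt.get_plugin_registry().plugin_creator_list:
        print(creator.name, creator.plugin_version)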

Thanks.

You misread my first question.

My model contains unsupported layers, but the generation of the plan file did NOT throw any errors. Why is that? Isn’t it supposed to error out if the model has unsupported layers?

Also, isn’t TF → ONNX → TensorRT the recommended workflow (instead of UFF)? Where can I find an ONNX custom plugin sample?

Hi,

Sorry for missing that.
A possible reason is that the parser itself (e.g. tf2onnx or onnx2trt) already implements those layers with operations it supports, so no plugin is needed.
We can comment more if you share detailed information about the custom layers.
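
One quick way to check is to list the op types that actually ended up in your exported ONNX file, for example (a small sketch; model.onnx is a placeholder path):

    import collections
    import onnx

    model = onnx.load("model.onnx")

    # Count how often each op type appears in the exported graph; custom layers
    # usually show up here as whatever the exporter lowered them to.
    op_counts = collections.Counter(node.op_type for node in model.graph.node)
    for op, count in sorted(op_counts.items()):
        print(op, count)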

A plugin is independent of the model format you use.
You can still follow the example above to build the plugin.

More custom plugins can also be found in our OSS GitHub repository:

Thanks.

Hi,

The custom ops in question here are:

  • LeakyRelu
  • Batchnorm
  • transpose
  • relu
  • maxpool
  • Add
  • Exp
  • Unsqueeze
  • Reshape
  • div
  • mul

I used tf2onnx. Does tf2onnx automatically implement the above operations with custom TRT plugins? Is that why I didn’t get any errors?

Hi,

Yes. All of the operations listed above are supported by onnx2trt.
You can find the details in our documentation too:

ONNX
Since the ONNX parser is an open source project, the most up-to-date information regarding the supported operations can be found here.

These are the operations that are supported in the ONNX framework:

  • Abs
  • Acos
  • Acosh

Thanks.