I need a Python code example for an inference classifier. I have to load a plan file, but I don't know how to generate one, or whether it's even possible on this board, since other posts mention memory problems.
A TensorRT plan file is not portable across devices, so you will need to generate it directly on the target device (Nano 2GB).
You can serialize the engine with the following command:
$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model] --saveEngine=output.plan
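Once the plan file exists, a minimal sketch for deserializing it with the TensorRT Python API might look like the following (the file name `output.plan` matches the trtexec command above; this must run on the same device that generated the plan):

```python
import tensorrt as trt

# A logger is required by the TensorRT runtime; WARNING keeps output quiet.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(plan_path):
    """Deserialize a TensorRT engine from a .plan file built on this device."""
    with open(plan_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

# Load the engine and create an execution context for inference.
engine = load_engine("output.plan")
context = engine.create_execution_context()
```

From there, you allocate input/output buffers and run inference through the execution context; the exact buffer setup depends on your model's bindings.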
It is possible to hit an OOM issue since the Nano 2GB's resources are quite limited, but this depends on how complicated your model is. A lightweight model should be fine.
You can find some working cases on our benchmark page below: