Optimised Model Deployment

Hi, I’m pretty new to the Jetson Nano and deep learning. I have JetPack 4.6 installed with Ubuntu 18.04.
I’m trying to build an LSTM deep learning model for real-time video recognition on the Jetson Nano and have a few questions:

  1. First of all, I’m trying to use the DeepStream SDK, which is built on top of GStreamer, but I don’t know where to start or what exactly it will help me with. I know OpenCV is not optimised for the Jetson Nano, hence the issue.

  2. I want to know whether I should use a TensorFlow model or a PyTorch model. Also, for inference I know that I should convert the model to ONNX and then use TensorRT, but I have no clue how to do that either.

  3. I also want to know how to use the GPU cores to run my model in real time and get the maximum FPS.
    (I guess TensorRT uses the GPU.)

  4. Finally, I’d love any resources related to the above topics for a newbie like me, and if anyone can help me personally, that would be greatly appreciated as well!

All help is appreciated!
Thanks in advance.

Hi,

1. Please check whether the sample below meets your requirements:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_3D_Action.html

2. First, please convert your model into ONNX format.
Then the model can be deployed directly with the TensorRT binary (trtexec):

$ /usr/src/tensorrt/bin/trtexec --onnx=<file>
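
For example, if you go with PyTorch, the export step is a single call to torch.onnx.export. Below is a minimal sketch, assuming a model saved with torch.save; the file names and input shape are placeholders you would replace with your own:

import torch

# Placeholder: load your trained model (file name is illustrative).
model = torch.load("lstm_model.pt")
model.eval()

# Dummy input matching the model's expected shape,
# e.g. (batch, sequence_length, features) for an LSTM.
dummy_input = torch.randn(1, 16, 512)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)

The resulting model.onnx is what the trtexec command above consumes.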

3. TensorRT uses the GPU for inference.
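
If you later want to call the engine from Python yourself (outside DeepStream), a minimal sketch looks roughly like the one below. It assumes a static-shape engine saved with trtexec --saveEngine, that binding 0 is the input and binding 1 the output, and that pycuda is installed; file names are placeholders:

import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# Load an engine built earlier, e.g. with:
#   /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.plan
with open("model.plan", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate a pinned host buffer and a device buffer per binding.
host_bufs, dev_bufs = [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host_bufs.append(cuda.pagelocked_empty(trt.volume(shape), dtype))
    dev_bufs.append(cuda.mem_alloc(host_bufs[i].nbytes))

# Copy the input to the GPU, run inference, copy the output back.
host_bufs[0][:] = np.random.rand(host_bufs[0].size)  # placeholder input
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2([int(d) for d in dev_bufs])
cuda.memcpy_dtoh(host_bufs[1], dev_bufs[1])
print(host_bufs[1][:10])

For maximum FPS, trtexec also accepts --fp16 to build a half-precision engine, and running sudo nvpmodel -m 0 followed by sudo jetson_clocks puts the Nano into its highest-clock mode.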

4. You can also check jetson-inference for some samples as well.
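
The jetson-inference Python examples follow a pattern like the sketch below; the network name and camera URI are illustrative and depend on your setup:

import jetson.inference
import jetson.utils

# "googlenet" and "csi://0" are illustrative; see the jetson-inference
# docs for the networks and input/output URIs available to you.
net = jetson.inference.imageNet("googlenet")
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    class_id, confidence = net.Classify(img)
    display.Render(img)
    display.SetStatus("{:s} ({:.1f}%)".format(net.GetClassDesc(class_id), confidence * 100))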

Thanks.

Thanks for the reply. I still have a few questions.

  1. What other dependencies would I need to install (as of right now I have just JetPack)? If I remember correctly, I tried the ONNX/TensorRT conversion once and got an error, probably due to a mismatch between the TensorFlow and TensorRT versions.

  2. The DeepStream action-recognition application you linked uses an image input, or am I wrong? I’m trying to build a pipeline for a video input.

  3. Does the ONNX conversion command in your reply depend on the type of model I’m using, or on other version dependencies? E.g., if I’m using a TensorFlow model (.pb) with TensorFlow 2.4.1?

Thanks in advance.

Hi,

1. If you have the ONNX model, you can convert it directly with the TensorRT binary (trtexec); no extra dependencies beyond JetPack are needed for that step.

2. No, the input is a file stream or a live stream.

3. Direct .pb model support in TensorRT is deprecated. Please convert the model into ONNX format and run it with trtexec.
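
For a TensorFlow 2.x model the usual route is the tf2onnx package (installed separately, e.g. with pip install tf2onnx). A minimal sketch, assuming you still have the Keras model; the file names, shapes, and opset are illustrative:

import tensorflow as tf
import tf2onnx

# Placeholder: load your trained Keras model (file name is illustrative).
model = tf.keras.models.load_model("lstm_model.h5")

# The input signature must match the model's expected input,
# e.g. (batch, sequence_length, features) for an LSTM.
spec = (tf.TensorSpec((None, 16, 512), tf.float32, name="input"),)

tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=11, output_path="model.onnx"
)

If you only have a frozen .pb, tf2onnx.convert.from_graph_def (or the tf2onnx command-line converter) covers that case. The resulting model.onnx can then be run with trtexec as above.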

Thanks.

Hi @AastaLLL,
Please continue on this thread.
