Tool chain for new SSD

• Hardware Platform (Jetson / GPU): AGX Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.0
• NVIDIA GPU Driver Version (valid for GPU only): 450.80.02

I am planning to build a brand-new model using one of {ResNet | MobileNet | SqueezeNet} as the feature extractor and SSD for object detection, plus a few enhancements of my own while assembling these building blocks. After training this new network, I would like to deploy it to a DeepStream pipeline using nvinfer with a configuration file, running on the AGX Xavier. Questions:

  1. What toolchain does NVIDIA currently provide to achieve the above goal with minimal effort?
  2. Is there any resource or reference tutorial showing how to construct the above network and its training process?
  3. After the new network is trained, how do I convert the model to a TensorRT engine file for DeepStream integration?

Thank you very much for your help.

Hi,

1. You can use our TLT (Transfer Learning Toolkit) for transfer learning.
You can also use other third-party frameworks (e.g. PyTorch, TensorFlow, etc.).
Just make sure you can export the model in the ONNX format.

2. YES. Here are some tutorials for your reference:

3. You can feed the .onnx or .tlt file into DeepStream directly; nvinfer builds the TensorRT engine from it automatically on the first run.

Ex.

[property]
gpu-id=0
...
onnx-file=my_model.onnx

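As a sketch (file names below are placeholders), you can also set model-engine-file so the engine nvinfer serializes on its first run is reused afterwards instead of being rebuilt:

```
[property]
gpu-id=0
onnx-file=my_model.onnx
# Written by nvinfer after the first run; loaded directly on later runs
model-engine-file=my_model.onnx_b1_gpu0_fp16.engine
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```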
Thanks.

Thanks a lot. This is very helpful.