How to deploy a custom model (TensorFlow Model Zoo) on DeepStream?

• Hardware Platform (Jetson / GPU)
Jetson NX

• DeepStream Version
5.1

• JetPack Version (valid for Jetson only)
4.5-b129

• TensorRT Version
7.1.3

• Issue Type( questions, new requirements, bugs)
I would like to know more about how to deploy a custom model on DeepStream. After reading the example of deploying the faster_rcnn_inception_v2 model on DeepStream (Deploying Models from TensorFlow Model Zoo Using NVIDIA DeepStream and NVIDIA Triton Inference Server), I can only use that one model successfully. It is still really hard to use any other model from the TensorFlow Model Zoo, since it is not clear how to set up all the required pieces, such as the Triton config file, the DeepStream config file, and the custom parser.
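For context, the DeepStream side of that blog example is a Gst-nvinferserver configuration in prototxt form. The fragment below is only an illustrative sketch of its general shape (model name, paths, class count, and parser function name are all placeholders that must be adapted per model, not values from the original post):

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    trt_is {
      model_name: "faster_rcnn_inception_v2"   # must match the Triton model directory name
      version: -1
      model_repo {
        root: "../../triton_model_repo"        # placeholder path to the Triton model repository
      }
    }
  }
  postprocess {
    labelfile_path: "labels.txt"               # placeholder
    detection {
      num_detected_classes: 91                 # depends on the model's label set
      custom_parse_bbox_func: "NvDsInferParseCustomTfSSD"  # placeholder parser symbol
    }
  }
  custom_lib {
    path: "libnvds_infercustomparser.so"       # placeholder custom-parser library
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
```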

Is there any more explanation of how to write the Triton config file, the DeepStream config file, and the custom parser for an arbitrary model from the TensorFlow Model Zoo? For example, if I want to use "EfficientDet D7 1536x1536", where can I check its input and output settings in order to create the proper Triton configuration, the DeepStream configuration, and the custom parser?
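As background on the input/output question: for a TensorFlow SavedModel, the tensor names, dtypes, and shapes can be inspected with TensorFlow's `saved_model_cli show --dir <model_dir> --all` tool, and those values then go into Triton's `config.pbtxt`. The sketch below assumes the standard TF2 Object Detection API output heads; the input tensor name, dtype, and detection count are assumptions to verify against the actual model, not confirmed values for EfficientDet D7:

```
name: "efficientdet_d7"                # must match the model directory name in the repo
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "input_tensor"               # hypothetical; confirm with saved_model_cli
    data_type: TYPE_UINT8
    dims: [ 1536, 1536, 3 ]
  }
]
output [
  {
    name: "detection_boxes"            # TF OD API models typically emit these three
    data_type: TYPE_FP32
    dims: [ 100, 4 ]
  },
  {
    name: "detection_scores"
    data_type: TYPE_FP32
    dims: [ 100 ]
  },
  {
    name: "detection_classes"
    data_type: TYPE_FP32
    dims: [ 100 ]
  }
]
```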

Please refer to the Gst-nvinferserver — DeepStream 5.1 Release documentation.
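On the custom-parser part: in DeepStream the parser is a C++ library, but its core job is just post-processing the model's output tensors into bounding boxes. The Python sketch below shows that logic under the assumption (common to TF Object Detection API models) that the model emits normalized `[ymin, xmin, ymax, xmax]` boxes plus per-detection scores and class IDs; function and parameter names are illustrative, not part of any DeepStream API:

```python
def parse_detections(boxes, scores, classes, score_threshold=0.5,
                     image_width=1536, image_height=1536):
    """Convert normalized [ymin, xmin, ymax, xmax] boxes into pixel-space
    detections, keeping only those above the score threshold."""
    detections = []
    for box, score, cls in zip(boxes, scores, classes):
        if score < score_threshold:
            continue  # drop low-confidence detections
        ymin, xmin, ymax, xmax = box
        detections.append({
            "class_id": int(cls),
            "score": float(score),
            # scale normalized coordinates to pixel coordinates
            "left": xmin * image_width,
            "top": ymin * image_height,
            "width": (xmax - xmin) * image_width,
            "height": (ymax - ymin) * image_height,
        })
    return detections


# Example: two detections, only the first passes the 0.5 threshold.
dets = parse_detections(
    boxes=[[0.1, 0.1, 0.5, 0.5], [0.0, 0.0, 1.0, 1.0]],
    scores=[0.9, 0.2],
    classes=[1, 2],
)
```

A real nvinferserver parser implements the same steps in C++ against the layer buffers it receives, filling DeepStream's object list instead of returning a Python list.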

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.