Implementing a Custom Retrained ResNet152 in DeepStream 5.0


I am on a Jetson TX1 with the DeepStream SDK installed, and I am attempting to build my own video inference application. However, I have been unable to find a thorough, complete guide to implementing a custom model. I have consulted the documentation, but it is unclear to me. Can someone point me in the right direction on how to use a retrained ResNet152 in .onnx format?

My goal is to use a TX1 with a retrained ResNet152 (.onnx) for bird species classification on a live stream.


So far I have been able to load my ONNX model file, TensorRT engine file, and a video into the DeepStream configuration, and DeepStream runs from beginning to end.
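For reference, a minimal [property] group for the Gst-nvinfer config might look like the sketch below. The file names and class count here are placeholders for illustration, not values from the original post; the keys (onnx-file, model-engine-file, labelfile-path, etc.) are standard nvinfer config items.

```ini
[property]
gpu-id=0
# Placeholder file names - substitute your own model artifacts
onnx-file=resnet152_birds.onnx
model-engine-file=resnet152_birds.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=4
gie-unique-id=1
```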

I am wondering: how do I get the bounding boxes drawn? I have enabled bounding box drawing, but I suspect I need a custom function. How do I write a custom bounding box function for DeepStream?
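As an aside, on-screen bounding box drawing is handled by the OSD element, which in a deepstream-app style config is enabled in its own group. A minimal sketch (values here are illustrative defaults, not from the original post):

```ini
[osd]
enable=1
gpu-id=0
# Width in pixels of the box outline drawn around detected objects
border-width=2
display-text=1
text-size=15
```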


Am I able to use the default bounding box parsers if I retrain a network such as YOLO, DetectNet, or SSD?

It depends on how the output layers of your model need to be parsed. If you do not specify a customized parser via the parse-bbox-func-name config item, the built-in post-processing parser function is used, i.e. nvdsinfer_context_impl_output_parsing.cpp -> DetectPostprocessor::parseBoundingBox.
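Concretely, pointing nvinfer at a custom parser is done with two config items in the [property] group. The function and library names below are hypothetical examples; the keys themselves (parse-bbox-func-name, custom-lib-path) are the standard nvinfer mechanism:

```ini
[property]
# Name of the exported parser function (example name, yours may differ)
parse-bbox-func-name=NvDsInferParseCustomResNet
# Shared library containing that function (example path)
custom-lib-path=/opt/nvidia/deepstream/lib/libcustom_parser.so
```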

So I can modify the configuration files to adapt the existing parsers for a network such as ResNet? In other words, I do not need to create a custom parser to get the inference model working?

We use nvdsinfer_context_impl_output_parsing.cpp -> DetectPostprocessor::parseBoundingBox by default. I think the built-in parser should be able to parse your ResNet model; you can try it and see whether it works.
If not, you should specify your own parser function, as described in my previous comment.