I’m in a position, as an embedded systems student, where I want to deploy a model to a little object detection service. So far, I’ve just proceeded by wrapping the prebuilt Python examples, hoping that I could drop in an appropriate model from someone else when it comes time to do real detection. So say I want to deploy This Model. If I just make a new conf.txt with onnx-file='the_onnx_file_location', I get an error saying the model was not built with an explicit batch dimension. Several other forum posts have targeted this error in the abstract, but I’m not sure how to apply those answers to my situation: is there a way I can recompile the ONNX model with an explicit batch dimension? Or do I need to change my application in some way to consume the model definition as-is?
Could you share with us the details of how you deploy SSD-MobilenetV1 with DeepStream?
Well, that’s the problem: I can’t get it deployed. I have moved past what was halting me earlier, though, so I’ll document my process here.
This whole thing is happening in Python. I’m using the nvinfer GStreamer pipeline element for inference, feeding it a configuration file.
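For reference, the relevant part of an nvinfer configuration looks roughly like this. A sketch only: the file name, mode, and class count are placeholders for whatever the actual model needs:

```ini
[property]
gpu-id=0
# Path to the ONNX model; TensorRT builds an engine from it on first run.
onnx-file=model.onnx
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
num-detected-classes=91
gie-unique-id=1
```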
I’ve made sure that “force-fixed-batch-size” isn’t present in the configuration file. That fixed the first problem, but created a new one.
Next, the network failed to load because it had some values floating around as Int8 layers, so I used the ONNX GraphSurgeon tool from TensorRT to go in and manually convert the whole thing to float32.
The next error related to some input layer not being an initializer? Frankly, the record of the error itself is lost, but I fixed it using the info from this document.
Now I have a model that can be consumed by the nvinfer node without throwing errors, but…
despite throwing no errors, the DeepStream app doesn’t run?
So now I’m at: how do I debug DeepStream/TensorRT when it isn’t throwing errors at me?
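One generic handle on a silently stalled GStreamer pipeline is its built-in debug logging, set through the GST_DEBUG environment variable. `python3 app.py` below is a placeholder for however the app is actually launched, and the `nvinfer` category name is an assumption about how the plugin registers itself:

```
# Raise log verbosity for all elements (1=ERROR ... 5=DEBUG).
GST_DEBUG=3 python3 app.py

# Or single out the inference element at full debug verbosity.
GST_DEBUG=nvinfer:5 python3 app.py
```

That at least tells you which element the pipeline is stuck in, even when nothing reaches the point of raising an error.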
Is it possible that DeepStream is actually running, but without a DISPLAY output enabled?
Could you first share with us which command you launch?