How to use my custom .pb model in detectNet

I have a custom-trained TensorFlow model in .pb format, and I want to convert it to TensorRT so I can use it with the jetson-inference detectNet method.

How can I convert the model and use it locally on a Jetson Nano?

Hi @fabian.angeloni, the TensorFlow detection models used in jetson-inference detectNet were converted using this tool from @AastaLLL:
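For reference, the core of that conversion is the `uff` Python package that ships with TensorRT. A minimal sketch of the .pb → UFF step is below; the file names and output node name are placeholders for your own graph, and SSD-style detection models from the TF Object Detection API usually also need a graphsurgeon preprocessor, which the tool above takes care of.

```python
# Minimal sketch of the .pb -> UFF conversion (file/node names are placeholders)
import uff

uff.from_tensorflow_frozen_model(
    "frozen_inference_graph.pb",         # your frozen TensorFlow graph
    output_nodes=["NMS"],                # model-specific output node(s); check your graph
    output_filename="custom_model.uff")  # UFF file to load with TensorRT
```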

Hi Dusty! Thanks for the quick response.

That solution uses TensorRT directly, not jetson-inference, is that right?

I used that tool to convert the detection models to UFF, and then I load the UFF models in jetson-inference. However, loading a custom UFF model isn't supported from the command line; the UFF models are hard-coded into jetson-inference detectNet, because UFF requires some additional parameters (see the sketch below).
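To give an idea of what those additional parameters are, here is a rough sketch of building a TensorRT engine from a UFF detection model with the TensorRT Python API. The input blob name, input dimensions, and output blob name below match the SSD-Mobilenet UFF models bundled with jetson-inference, so treat them as assumptions and substitute your own model's values.

```python
# Sketch: building a TensorRT engine from a UFF model (UFF-parser era API)
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:

    # The "additional parameters" that detectNet hard-codes for its UFF models:
    parser.register_input("Input", (3, 300, 300))  # input blob name + CHW dims (assumed)
    parser.register_output("NMS")                  # detection output blob (assumed)

    parser.parse("custom_model.uff", network)

    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 28           # 256 MB build workspace

    engine = builder.build_cuda_engine(network)

    # Save the engine so it doesn't need rebuilding on every run
    with open("custom_model.engine", "wb") as f:
        f.write(engine.serialize())
```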

@AastaLLL's sample uses the TensorRT Python API, so you could just use that sample directly if it's easier to run your model that way. jetson-inference uses TensorRT underneath as well.
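In case it's useful, a rough sketch of what running the serialized engine with the TensorRT Python API looks like (pre/post-processing is model-specific and omitted; the engine file name carries over from the build sketch above):

```python
# Sketch: deserializing the engine and running inference with TensorRT + PyCUDA
import pycuda.autoinit          # creates the CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("custom_model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

with engine.create_execution_context() as context:
    # Allocate a host/device buffer pair for every binding (input + outputs)
    host_bufs, dev_bufs = [], []
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host = cuda.pagelocked_empty(size, dtype)
        host_bufs.append(host)
        dev_bufs.append(cuda.mem_alloc(host.nbytes))

    # Fill host_bufs[0] with your preprocessed image, then run inference:
    stream = cuda.Stream()
    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async(bindings=[int(d) for d in dev_bufs],
                          stream_handle=stream.handle)
    for host, dev in zip(host_bufs[1:], dev_bufs[1:]):
        cuda.memcpy_dtoh_async(host, dev, stream)
    stream.synchronize()
    # host_bufs[1:] now hold the raw detection outputs to post-process
```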