How to use a TensorRT engine in dusty-nv posenet?

trt.engine (29.2 MB)

I have generated trt.engine from the TAO Toolkit BodyPoseNet (bpnet). May I know how to use it in jetson-inference, i.e. how to add it to the network list so that I can call it with
posenet --network=<option>?


You can try adding the model configuration below:
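The original configuration attachment did not survive here. For reference, other jetson-inference networks load custom models through command-line flags, so a custom pose model would presumably be loaded along these lines. This is a hedged sketch only: the flag names, file names, and input/output blob names below are assumptions, not values verified against a bpnet engine; check the poseNet source for the exact spelling.

```shell
# Hypothetical sketch: pointing posenet at a custom engine instead of a
# built-in --network entry. All flag and blob names are assumptions.
posenet \
    --model=trt.engine \
    --topology=human_pose.json \
    --input-blob=input \
    --output-cmap=cmap \
    --output-paf=paf \
    /dev/video0
```

Note that loading the engine is only half the problem: the decoded results will only make sense if the model's output tensors match the layout that poseNet's post-processing expects.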



I was told that a trt.engine file cannot be converted back to ONNX…

So my plan of using the TAO Toolkit bpnet to re-train a pose model for dusty-nv inference is not possible. :<
Please correct me if I am wrong…
The problem right now is that the dusty-nv posenet is not very accurate for my application, and I need to find a way to increase the accuracy…

Hi @AK51, I’ve not tried the TAO Toolkit Pose Detection models through jetson-inference before. It very likely requires different pre/post-processing than what is in jetson-inference poseNet code today (which is made to support the models from trt_pose). I will take a look into this, but currently I would recommend running the TAO Pose models through DeepStream.
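For context on why the post-processing matters: trt_pose-style models output per-keypoint confidence maps (CMAP) plus part-affinity fields (PAF), and decoding starts by finding local peaks in each confidence map. A TAO BodyPoseNet engine would need an equivalent decoding step matched to its own output layout. Below is a minimal, hedged sketch of the peak-finding idea in pure NumPy; the shapes and threshold are illustrative assumptions, not the actual jetson-inference or trt_pose implementation.

```python
import numpy as np

def find_peaks(cmap, threshold=0.1):
    """Return (row, col) coordinates of local maxima above `threshold`
    in a single-keypoint confidence map of shape (H, W)."""
    # Pad with -inf so border pixels compare correctly against "outside".
    p = np.pad(cmap, 1, mode="constant", constant_values=-np.inf)
    center = p[1:-1, 1:-1]
    # A pixel is a peak if it is >= all 8 neighbours and above threshold.
    is_peak = (
        (center >= p[:-2, 1:-1]) & (center >= p[2:, 1:-1]) &
        (center >= p[1:-1, :-2]) & (center >= p[1:-1, 2:]) &
        (center >= p[:-2, :-2]) & (center >= p[:-2, 2:]) &
        (center >= p[2:, :-2]) & (center >= p[2:, 2:]) &
        (center > threshold)
    )
    return [(int(r), int(c)) for r, c in zip(*np.nonzero(is_peak))]

# Toy confidence map with one clear peak at (2, 3).
cmap = np.zeros((5, 6), dtype=np.float32)
cmap[2, 3] = 0.9
cmap[2, 2] = 0.4  # shoulder of the peak, suppressed by the 0.9 neighbour
print(find_peaks(cmap))  # -> [(2, 3)]
```

A real decoder then associates peaks across keypoints using the PAF scores, which is exactly the stage that differs between model families.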

Dear Dusty,

Thanks for your reply.
I prefer to use yours because DeepStream is not easy to modify, and from what I have heard it does not support RTSP.
Most importantly, I can't find any good DeepStream video tutorial. >_<

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.