OpenPose TRT engine conversion failed

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.1
• NVIDIA GPU Driver Version (valid for GPU only) 470

I tried to convert the OpenPose Caffe model in DeepStream. I set output-blob-names=‘concat_stage7’, but when building the TRT engine I get an error saying there is no such layer.
I checked the OpenPose repo, and they use this same layer as the output layer.

ERROR:
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:163 Could not find output layer '‘concat_stage7’
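
One detail worth double-checking before anything else: the error message prints a quote character as part of the layer name (`'‘concat_stage7'`), which suggests the quotes in the config value are being read literally. In an nvinfer config file the property value is normally written bare, with no quotes. The fragment below is a guess based on the error text, not a confirmed fix:

```
[property]
output-blob-names=concat_stage7
```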

Could you share a repo or more info? Or can you search for an OpenPose DeepStream implementation on the Internet?

Most of the implementations are in C/C++, but I want to run it using Python 3. Also, I did not find good documentation on how to write and compile a custom parser function.

Confusing…
Do you want us to help with the model parser error you originally asked about, or have you fixed it and now have another question?

I need help writing the parser function.

Could you refer to the post_processor under GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream ?

We don’t know the meaning of your network’s output, so you could refer to the above sample to add a parser callback that interprets the network output.
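
On the Python side, parsing typically means reading the raw output tensor yourself and post-processing it. As an illustration of the kind of post-processing an OpenPose-style parser does, here is a minimal, dependency-free sketch of peak finding on a single heatmap channel. The threshold value, and the assumption that one channel has already been copied out of the tensor into a 2D list of floats, are mine, not from this thread:

```python
# Hedged sketch: local-maximum (peak) detection on one pose heatmap
# channel, in plain Python. `heatmap` is a height x width 2D list of
# confidence values; a peak is a cell above `threshold` that is strictly
# greater than all of its 8 neighbours.

def find_peaks(heatmap, threshold=0.1):
    """Return (row, col, score) for every local maximum above threshold."""
    h, w = len(heatmap), len(heatmap[0])
    peaks = []
    for y in range(h):
        for x in range(w):
            v = heatmap[y][x]
            if v < threshold:
                continue
            # Gather the values of all in-bounds neighbours.
            neighbours = [
                heatmap[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
                if (ny, nx) != (y, x)
            ]
            if all(v > n for n in neighbours):
                peaks.append((y, x, v))
    return peaks
```

A full parser would run this per keypoint channel and then use the part-affinity fields to group keypoints into skeletons, as in the OpenPose repo.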

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.