How to apply a Lite-HRNet engine in the DeepStream 6.1 SDK

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
NVIDIA RTX 3090 Ti

• DeepStream Version
6.1-triton (Docker)

• Issue Type (questions, new requirements, bugs)
Question

I’m using a custom YOLOv5 engine as the primary GIE in deepstream-test5-app.

This is my custom YOLOv5 engine config file:

[screenshot of the config file not included]
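(Since the screenshot is missing, here is a minimal sketch of what such a custom YOLOv5 nvinfer config typically looks like, modelled on the NVIDIA-AI-IOT yolo_deepstream sample. The engine name, label file, parser function, and custom library path are assumptions and must match your own build.)

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
# pre-built TensorRT engine; file name is an assumption
model-engine-file=yolov5.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=80
gie-unique-id=1
# primary detector
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
# bounding-box parser exported by your custom YOLO plugin build
parse-bbox-func-name=NvDsInferParseCustomYoloV7
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so
```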

I want to apply a Lite-HRNet engine as well.
Is Lite-HRNet possible in the same way I applied YOLOv5?

Do you mean generating a Lite-HRNet engine the same way as the YOLOv5 engine?

I mean, I’m asking whether I can apply Lite-HRNet the same way I applied the YOLOv5 engine.

Currently, my Lite-HRNet model is a .pth file. If I convert it to ONNX and apply it to DeepStream, what else is needed, and how do I apply it?

nvinfer does not support .pth files directly; please refer to inputs-and-outputs. Yes, you need to convert it to an ONNX model first.
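(For reference, a minimal PyTorch-to-ONNX export sketch. The `build_lite_hrnet` constructor, the checkpoint path, and the 1x3x256x192 input size are placeholders; use your own model definition and input resolution. If the model comes from MMPose, its own ONNX export tooling may also be usable.)

```python
# Minimal sketch: export a .pth checkpoint to ONNX for use with nvinfer.
# build_lite_hrnet(), the checkpoint path, and the 1x3x256x192 input
# size are placeholders; substitute your own model and input shape.
import torch

model = build_lite_hrnet()                        # hypothetical constructor
state = torch.load("lite_hrnet.pth", map_location="cpu")
model.load_state_dict(state)
model.eval()

dummy = torch.randn(1, 3, 256, 192)               # NCHW dummy input
torch.onnx.export(
    model,
    dummy,
    "lite_hrnet.onnx",
    input_names=["input"],
    output_names=["heatmaps"],
    opset_version=11,
    dynamic_axes={"input": {0: "batch"}, "heatmaps": {0: "batch"}},
)
```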

I already know that I need to convert the .pth file to ONNX and apply it.

What I want to know is whether there is anything else to modify in the config file or the deepstream-test5-app code besides supplying the ONNX file.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

No; please refer to this DeepStream YOLO sample: yolo_deepstream/config_infer_primary_yoloV7.txt at main · NVIDIA-AI-IOT/yolo_deepstream · GitHub
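(For reference, a rough sketch of how a nvinfer config might be adapted for an ONNX pose model such as Lite-HRNet. The file names, normalization values, and input dimensions are assumptions. Since a pose network has no bounding-box output, it is shown here as a secondary GIE with raw tensor output enabled, which the application would then parse itself.)

```
[property]
gpu-id=0
# ImageNet-style normalization; adjust to your training pipeline
net-scale-factor=0.0174292
offsets=123.675;116.28;103.53
model-color-format=0
# nvinfer builds a TensorRT engine from the ONNX file on first run
onnx-file=lite_hrnet.onnx
model-engine-file=lite_hrnet.onnx_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
# 100 = "other": no built-in post-processing, raw tensors attached as meta
network-type=100
output-tensor-meta=1
gie-unique-id=2
# run as a secondary GIE on objects from the YOLOv5 primary GIE
process-mode=2
operate-on-gie-id=1
infer-dims=3;256;192
```

deepstream-test5-app only handles detector/classifier metadata out of the box, so decoding the pose heatmaps would likely need a custom tensor-meta probe or parser added in the application.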

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.