Integrating a pose estimation model in the DeepStream SDK

Hardware Platform: Jetson NX
DeepStream Version: 6.0.1
JetPack Version: 4.6.1
TensorRT Version: 8.2.1
CUDA Version: 10.2

Hi, I am trying to integrate my ONNX pose estimation model into DeepStream. This is part of my ONNX model:


I converted my ONNX model to an engine via TensorRT and set up the configuration file, but when I launch the DeepStream application I get this error:

[error screenshot]

My final goal is to extract the keypoints and bounding boxes as a result of the inference.

Sorry for the late reply.
1. How did you convert it to an engine? Can the TensorRT tool load that engine?
2. What is the whole media pipeline? Can you share the whole logs?
3. Please refer to the DeepStream body-pose sample deepstream-bodypose-3d; in particular, you need to implement NvDsInferInitializeInputLayers if the model has multiple inputs (see also the tao-sample).
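For reference, the multi-input hook mentioned in point 3 is declared in nvdsinfer_custom_impl.h. Below is a minimal sketch of my own (not from the sample); it is only needed when the network has more than one input layer, and it assumes the extra inputs are FP32 buffers that can simply be zero-initialized:

// Sketch of the optional multi-input initialization hook from
// nvdsinfer_custom_impl.h. Only required when the network has more than one
// input layer; the zero-fill below is just a placeholder for real values.
#include <cstring>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferInitializeInputLayers(
    std::vector<NvDsInferLayerInfo> const &inputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    unsigned int maxBatchSize)
{
    for (const auto &layer : inputLayersInfo) {
        if (layer.buffer) {
            // Assumption: FP32 inputs, buffer sized for maxBatchSize batches.
            std::memset(layer.buffer, 0,
                        layer.inferDims.numElements * maxBatchSize * sizeof(float));
        }
    }
    return true;
}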

Hello, I converted the model with:
trtexec --onnx=yolov7-w6-pose-sim-yolo.onnx --fp16 --saveEngine=yolov7-w6-pose-sim-yolo-fp16.engine --plugins=./YoloLayer_TRT_v7.0/build/libyolo.so
(ref: GitHub - nanmi/yolov7-pose: pose detection base on yolov7)

I need a model that detects people and their keypoints. For this I want to use the YOLOv7 pose-estimation model, which, unlike body pose (deepstream-bodypose-3d), uses a top-down approach.

The pipeline is: source, h264parser, decoder, streammux, pgie, queue, tiler, queue3, nvvidconv, queue4, nvosd, transform, sink. (I started from the deepstream_infer_tensor_meta_test example and removed the secondary engines.)

Thanks for sharing. The DeepStream nvinfer plugin can accept an ONNX model directly; please refer to this yolov7 sample. You need to implement a parse-bbox-func if your model's output differs from that sample's.
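As a rough illustration (the function name and body are placeholders of mine, not the sample's code), a custom box parser for nvinfer follows the NvDsInferParseCustomFunc prototype from nvdsinfer_custom_impl.h; the library it is built into is then referenced from the config via parse-bbox-func-name and custom-lib-path:

// Sketch of a custom bounding-box parsing function for gst-nvinfer.
// The real decode logic depends on the model's actual output layout.
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomYoloPose(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    // TODO: walk outputLayersInfo, decode boxes and scores, and append one
    // NvDsInferObjectDetectionInfo (classId, detectionConfidence, left, top,
    // width, height) per kept detection to objectList.
    return true;
}

// Lets the compiler check the prototype against NvDsInferParseCustomFunc.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloPose);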

OK, thanks @fanzh, but when I used my ONNX model in the config file I got this error:

This is my config file:

[property]
gpu-id=0
net-scale-factor=0.003921

onnx-file=../yolov7-w6-pose-sim-yolo.onnx

batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=1
interval=2
gie-unique-id=1


network-type=100
workspace-size=3000
#engine-create-func-name=NvDsInferYoloCudaEngineGet


[class-attrs-all]
pre-cluster-threshold=0.2
topk=20
nms-iou-threshold=0.5

I will debug.

From your logs, it failed to generate the TensorRT engine.
I modified deepstream-test1 to test that yolov7-pose model with four output layers, and I can generate the TensorRT engine. Here are the log and configuration file:
log.txt (2.6 KB)
dstest1_pgie_config.txt (3.4 KB)

OK, but how do I extract the bounding boxes and keypoints from these layers? I originally had an engine built in the same way as the one you shared in the log, but I couldn't get the coordinates of the bounding boxes and keypoints. I also looked at the deepstream-bodypose-3d, deepstream-pose-estimation, etc. examples.
Still starting from the deepstream_infer_tensor_meta_test application, I arrived at this point in my application:

NvDsInferParseCustomYolo(outputLayersInfo, networkInfo, detectionParams, objectList);

From here the information is different from what I expected. Can you help me?

As the log in the last comment shows, there are four output layers. You need to know the meaning of each layer first, then implement a parsing function as deepstream-bodypose-3d does.
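To "know the meaning of each layer first", one simple step (a sketch of mine, not part of any sample) is to dump the name and dimensions of every output layer from inside the parsing function and match them against the ONNX graph:

// Sketch: print the name and shape of each output layer so the tensors can be
// matched against the ONNX graph before writing the real decode code.
#include <cstdio>
#include <vector>
#include "nvdsinfer_custom_impl.h"

static void dumpOutputLayers(std::vector<NvDsInferLayerInfo> const &outputLayersInfo)
{
    for (const auto &layer : outputLayersInfo) {
        std::printf("layer %s:", layer.layerName);
        for (unsigned int d = 0; d < layer.inferDims.numDims; ++d)
            std::printf(" %u", layer.inferDims.d[d]);
        std::printf(" (numElements=%u)\n", layer.inferDims.numElements);
    }
}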

Nice! I have difficulty interpreting the layers. I initially used a Python script with the PyTorch library. For example, when I ran this script I got this output:

  # 'model' is the loaded YOLOv7-pose checkpoint, 'image' the preprocessed input
  with torch.no_grad():
      output, _ = model(image)
  print(type(output))
  print(len(output))        # batch size
  print(len(output[0]))     # number of candidate detections
  print(len(output[0][0]))  # values per detection
  print(output)
<class 'torch.Tensor'>
1
130050
57
tensor([[[5.36894e+00, 8.71477e+00, 1.32241e+01,  ..., 1.36432e+00, 1.92210e+01, 2.08646e-01],
         [1.09083e+01, 9.69437e+00, 2.14988e+01,  ..., 5.92880e+00, 1.64280e+01, 2.65798e-01],
         [1.91330e+01, 9.81835e+00, 3.46214e+01,  ..., 8.89651e+00, 1.37115e+01, 2.66302e-01],
         ...,
         [1.76024e+03, 1.01203e+03, 3.55187e+02,  ..., 1.77500e+03, 1.08716e+03, 3.54368e-01],
         [1.80393e+03, 1.00688e+03, 2.95170e+02,  ..., 1.83554e+03, 1.08505e+03, 3.38366e-01],
         [1.86751e+03, 1.01366e+03, 3.67770e+02,  ..., 1.84740e+03, 1.09743e+03, 3.18240e-01]]], device='cuda:0')

Can the layers be interpreted from this output? How?
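As an aside on the layout: if this tensor follows the usual YOLOv7-pose convention (an assumption worth verifying against the repo), each of the 130050 rows holds 57 floats: 4 box values (cx, cy, w, h), 1 objectness score, 1 person-class score, and 17 keypoints × (x, y, confidence). A hedged sketch of decoding one row:

// Sketch: decode one candidate row of a [N, 57] YOLOv7-pose output, assuming
// the ordering cx, cy, w, h, obj, cls followed by 17 * (kx, ky, kconf).
// Verify this ordering against the exporting repo before relying on it.
struct Keypoint { float x, y, conf; };

struct PoseCandidate {
    float left, top, width, height;
    float score;
    Keypoint kpts[17];
};

static PoseCandidate decodeRow(const float *row)
{
    PoseCandidate c{};
    const float cx = row[0], cy = row[1], w = row[2], h = row[3];
    c.left   = cx - w / 2.0f;
    c.top    = cy - h / 2.0f;
    c.width  = w;
    c.height = h;
    c.score  = row[4] * row[5];   // objectness * person-class score
    for (int k = 0; k < 17; ++k) {
        c.kpts[k].x    = row[6 + k * 3];
        c.kpts[k].y    = row[6 + k * 3 + 1];
        c.kpts[k].conf = row[6 + k * 3 + 2];
    }
    return c;
}

Low-score candidates would then be dropped and the rest run through NMS; the boxes go into objectList, while the keypoints have to be carried separately (for example as user meta), since NvDsInferObjectDetectionInfo only describes the box.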

There are four output layers in the yolov7-pose model, but the yolov7-pose repo converts the four output layers into a single output layer by adding a new op, and the working sample is based on that new model, yolov7-w6-pose-sim-yolo.onnx.
Currently I fail to generate the new model; here is the error:
root@p4station:/home/code/yolov7/yolov7-pose/YoloLayer_TRT_v7.0/script# python add_custom_yolo_op.py
Traceback (most recent call last):
  File "add_custom_yolo_op.py", line 25, in <module>
    inputs = [tensors["745"].to_variable(dtype=np.float32),
KeyError: '745'
Did you meet this problem?

The output layers from my onnx are:
750, 799, 848, 897

I had the same problem as you; I solved it by changing the names of the layers in the add_custom_yolo_op.py file:

This engine was successfully generated, but if DeepStream uses this engine directly, loading the engine gives an error:

ERROR: [TRT]: ModelImporter.cpp:726: While parsing node number 535 [YoloLayer_TRT -> "output0"]:
ERROR: [TRT]: ModelImporter.cpp:727: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:728: input: "750"
input: "799"
input: "848"
input: "897"
output: "output0"
name: "YoloLayer_TRT_0"
op_type: "YoloLayer_TRT"

ERROR: [TRT]: ModelImporter.cpp:729: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:732: ERROR: builtin_op_importers.cpp:5428 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Here is the whole log: log-0404.txt (5.4 KB)

OK. Do you have any solutions?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Please refer to the yolov5_gpu_optimization sample or DeepStream-Yolo. You might implement your own engine-creation function by adding a custom nvinfer1::IPluginCreator; you need to set engine-create-func-name in the configuration file.
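For reference, the engine-create-func-name hook follows the NvDsInferEngineCreateCustomFunc prototype from nvdsinfer_custom_impl.h. The sketch below (function name and body are placeholders of mine) assumes the YoloLayer_TRT plugin creator is registered in the same custom library, e.g. via REGISTER_TENSORRT_PLUGIN, so TensorRT can find it while building and later deserializing the engine:

// Sketch of a custom engine-creation hook for gst-nvinfer. The function name
// is what engine-create-func-name in the config would be set to; the body is
// only a placeholder for building the network with the YoloLayer_TRT plugin
// available in this process.
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferYoloPoseCudaEngineGet(
    nvinfer1::IBuilder *const builder,
    nvinfer1::IBuilderConfig *const builderConfig,
    const NvDsInferContextInitParams *const initParams,
    nvinfer1::DataType dataType,
    nvinfer1::ICudaEngine *&cudaEngine)
{
    // Parse the ONNX model referenced by initParams here (for example with
    // nvonnxparser), then build and assign the engine, e.g.:
    //   cudaEngine = builder->buildEngineWithConfig(*network, *builderConfig);
    cudaEngine = nullptr;
    return false;  // return true once cudaEngine has been created
}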

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.