• Hardware Platform (Jetson / GPU): Xavier
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: TRT 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only): 10.2
The post-processing code on GitHub assumes the model output is in CHW dimension format, but the model you provided outputs HWC. You need to change the post-processing accordingly.
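To illustrate the difference, here is a minimal NumPy sketch of converting an HWC output buffer to CHW, and the equivalent flat-buffer index arithmetic (the tensor sizes below are placeholders, not the actual model dimensions):

```python
import numpy as np

# Placeholder output dimensions (illustrative only).
H, W, C = 56, 56, 18

# An HWC-layout output buffer: element order is out[h, w, c].
hwc = np.arange(H * W * C, dtype=np.float32).reshape(H, W, C)

# CHW-based post-processing expects out[c, h, w], so transpose first.
chw = np.transpose(hwc, (2, 0, 1))

# Equivalently, when indexing the raw flat buffer directly:
#   CHW offset: c * H * W + h * W + w
#   HWC offset: h * W * C + w * C + c
flat = hwc.ravel()
c, h, w = 3, 10, 20
assert flat[h * W * C + w * C + c] == chw[c, h, w]
```

In a DeepStream probe you would apply the same offset change wherever the post-processing reads individual heatmap values from the output layer buffer.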
I see some person detections with the ResNet model, but they are not accurate. DenseNet produces no detections at all. I'll upload the detection results tomorrow.
The current application works only for H.264 video, not for other formats.
So I updated the application to work with any video format, but I still need to test whether it really works: since I only have an H.264 video, I can't yet tell whether other formats work or not.
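One way to sanity-check format-agnostic input without the full application is a gst-launch pipeline built on uridecodebin, which auto-selects the demuxer and decoder for whatever container/codec the file uses (the file path and the fakesink are placeholders, not from the application):

```shell
# Sketch: uridecodebin picks the right demuxer/decoder automatically,
# so the same pipeline should handle MP4, MKV, H.265, etc.
gst-launch-1.0 uridecodebin uri=file:///path/to/any_video.mp4 ! \
    nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! fakesink
```

If this pipeline plays for a given file, the same uridecodebin-based source bin inside the DeepStream app should accept that file as well.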
So we need to do some revision to resnet18_baseline_att_224x224_A_epoch_249.onnx with the attached script.
How to use this script (you could run it in a PyTorch docker, e.g. nvcr.io/nvidia/pytorch:22.03-py3):

$ python3 -m pip install onnx_graphsurgeon --index-url https://pypi.ngc.nvidia.com
# only keep the outputs "onnx::MaxPool_266" and "onnx::Transpose_268"
$ python3 revise_model_for_deepstream_pose_github.py dynamic_resnet18_baseline_att_224x224_A_epoch_249.onnx "onnx::MaxPool_266" "onnx::Transpose_268"

Attachment: revise_model_for_deepstream_pose_github.py (1.6 KB)