Deepstream_pose_estimation doesn't show pose estimation skeleton

I am testing deepstream_pose_estimation.
With the provided ONNX file, I see some detections.

But when I use the following two models, the output shows only small white circles and no green skeleton lines:

resnet18_baseline_att_224x224_A
densenet121_baseline_att_256x256_B

What should I change?
I managed to get it running with an mp4 file; the original program uses a raw H.264 stream.
Can detection accuracy drop when using the mp4 format?
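
For reference, here is a minimal sketch (in Python with GStreamer, using placeholder file names; my real app is the cpp file I mentioned) of the pipeline difference between the original raw H.264 input and the mp4 input I am using. Only the demuxer in front of h264parse changes, so the decoder should see the same frames either way:

```python
# Minimal sketch, assuming a Jetson with GStreamer, the DeepStream plugins and
# the PyGObject bindings installed. "sample.h264" and "sample.mp4" are
# placeholder file names. In the real app the decoder feeds
# nvstreammux/nvinfer; fakesink is used here only to show the decode path.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Original sample: raw H.264 elementary stream.
raw_h264 = Gst.parse_launch(
    "filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! fakesink"
)

# mp4 input: qtdemux only unwraps the container around the same H.264 stream.
mp4 = Gst.parse_launch(
    "filesrc location=sample.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! fakesink"
)

print("Both pipelines parsed:", raw_h264 is not None and mp4 is not None)
```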

Hey, could you share your setup with us?

Which setup file? Do you mean the config file?

I have sent all the files by private message (the cpp file that can run an mp4 file, the models, the config file, and a test video).
Only pose_estimation.onnx works (and still not very well). The other models don't work.

Did you receive my files? Thanks

I have sent my files in a message. Have you received them?

I meant the device and the DeepStream version.

I am using a Xavier, with JetPack 4.4, TensorRT 7, and DeepStream 5.0.

Thanks

I think, first, you need to make sure the model itself can give a good output on its own.
Second, if the model's output is fine, then you need to check whether the input with and without the DeepStream pipeline is the same; you can dump the model's input as described in DeepStream SDK FAQ - #9 by mchi.
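
If it helps, here is a minimal sketch of the first check: running the model standalone with onnxruntime to see whether its raw outputs look reasonable before debugging the DeepStream side. It assumes resnet18_baseline_att_224x224_A has been exported to ONNX, expects a 1x3x224x224 float input with ImageNet-style normalization, and outputs part-confidence heatmaps and part-affinity fields; the file paths are placeholders.

```python
# Minimal sketch: run the exported pose model on one frame with onnxruntime
# and inspect the raw outputs. Paths, input size and normalization are
# assumptions and may need to match however the model was actually trained.
import numpy as np
import cv2
import onnxruntime as ort

MODEL = "resnet18_baseline_att_224x224_A.onnx"  # placeholder path
IMAGE = "test_frame.jpg"                        # placeholder path

# Load one frame and preprocess it: resize, BGR->RGB, scale to [0, 1],
# ImageNet mean/std normalization, then NCHW layout.
img = cv2.imread(IMAGE)
img = cv2.resize(img, (224, 224))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
img = (img - mean) / std
inp = img.transpose(2, 0, 1)[None, ...]  # shape (1, 3, 224, 224)

sess = ort.InferenceSession(MODEL)
input_name = sess.get_inputs()[0].name
outputs = sess.run(None, {input_name: inp})

# Print each output's name, shape and peak value; the heatmap peaks should be
# clearly above the background response when a person is visible.
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape, "max:", float(out.max()))
```

If the peaks look fine here but the DeepStream app still only draws white circles, the mismatch is more likely in the preprocessing (net-scale-factor, offsets, RGB/BGR order) or in how the output layers are parsed, rather than in the model itself.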