Hi,
Sorry for the late update.
This error comes from an incompatible serialized TensorRT engine file.
Please note that this docker image was built on JetPack 4.4 DP (TRT 7.1.0).
To run it on JetPack 4.4 GA (TRT 7.1.3), please recompile the TensorRT engine first:
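For background, a serialized TensorRT engine is only guaranteed to load on the exact TensorRT version that built it, which is why a DP-built engine fails on GA. A minimal sketch of that compatibility rule (the helper name is illustrative; the version strings come from the JetPack releases mentioned above):

```python
def engines_compatible(build_ver: str, runtime_ver: str) -> bool:
    """Serialized TensorRT engines are not portable across TensorRT versions;
    the build-time and runtime versions must match exactly."""
    return build_ver == runtime_ver

# JetPack 4.4 DP ships TRT 7.1.0; JetPack 4.4 GA ships TRT 7.1.3
print(engines_compatible("7.1.0", "7.1.3"))  # False -> must rebuild the engine
```

Because the versions differ, the engine has to be regenerated on the GA image, as shown in the steps below.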
$ sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/jets
$ python3
>>> import torch
>>> import torch2trt
>>> import trt_pose.models
>>> import tensorrt as trt
>>> MODEL = trt_pose.models.densenet121_baseline_att
>>> model = MODEL(18, 42).cuda().eval()
>>> model.load_state_dict(torch.load('/pose/generated/densenet121_baseline_att.pth'))
>>> data = torch.randn((1, 3, 224, 224)).cuda().float()
>>> model_trt = torch2trt.torch2trt(model, [data], fp16_mode=True, max_workspace_size=1 << 25, log_level=trt.Logger.VERBOSE)
>>> torch.save(model_trt.state_dict(), '/pose/generated/densenet121_baseline_att_trt.pth')
>>> exit()
$ python3 run_pose_pipeline.py /videos/pose_video.mp4
After regenerating the TensorRT engine, we can run the sample without issue.
Thanks.