The sequence above lets us regenerate the pose engine and process a video with it, but how do we get it running with the CSI Raspberry Pi v2 camera?
Also, how can the other containers from the cloud-native set of four be patched, besides the DeepStream and pose containers?
While the latter command launches the container with access to the cameras, it is unclear which argument makes the given script read its input from a camera.
Is that supported? Something like python3 run_pose_pipeline.py /dev/video0?
BERT is for audio, not for the camera.
Gaze follows the same workflow to read the camera data, so you can modify /utils/video.py inside the nvcr.io/nvidia/jetson-gaze:r32.4.2 container.
The Dockerfile for these containers is not public.
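If the container reads frames through OpenCV, a common way to reach the CSI camera on Jetson is a GStreamer pipeline built around nvarguscamerasrc. The sketch below is an assumption about what a camera-input patch to /utils/video.py could look like, not the container's actual code:

```python
def csi_pipeline(width=1280, height=720, fps=30, flip_method=0):
    """Build a GStreamer pipeline string for the Raspberry Pi v2 CSI camera
    on Jetson (nvarguscamerasrc), suitable for cv2.VideoCapture."""
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip_method} ! "
        f"video/x-raw, format=BGRx ! videoconvert ! "
        f"video/x-raw, format=BGR ! appsink"
    )

# Usage inside the container (requires OpenCV built with GStreamer support):
#   import cv2
#   cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
```

Note that nvarguscamerasrc only works if the container is started with access to the Argus daemon and the camera devices, which is what the camera-enabled launch command above is for.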
@AastaLLL yes,
but the complication with BERT/gaze is that they won't run as-is from the repository GitHub - NVIDIA-AI-IOT/jetson-cloudnative-demo: Multi-container demo for Jetson Xavier NX and Jetson AGX Xavier on JetPack 4.4.1, because the containers were built for JetPack 4.4 DP while the Jetson is running JetPack 4.4.1. As far as I can tell, the resulting TensorRT version mismatch prevents the serialized .engine file from loading in at least one of them.
For the pose container we used the series of steps above to regenerate the .engine file.
A similar adjustment will probably be required for gaze as well.
Steps to reproduce the issue: run
./run_demo.sh
from the cloud-native repository.
That fires up four containers, but only the DeepStream container works as-is.
Running the gaze container from NGC produces:
sys: x11
coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
INVALID_STATE: std::exception
INVALID_CONFIG: Deserialize the cuda engine failed.
Assertion fail in file 'trtNet.cpp' line 146: _engine is null
terminate called after throwing an instance of 'std::exception'
what(): std::exception
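The "Version tag does not match" error is the classic symptom of a serialized TensorRT engine being deserialized by a different TensorRT build than the one that produced it: a .engine file is only guaranteed to load with the exact TensorRT version that built it. A minimal sketch of that rule, with illustrative version strings (the exact TensorRT versions shipped with JetPack 4.4 DP vs 4.4.1 are an assumption here):

```python
def engine_compatible(build_version: str, runtime_version: str) -> bool:
    """Serialized TensorRT engines are only guaranteed to deserialize with
    the exact TensorRT version that built them, so every component of the
    version string must match."""
    return build_version == runtime_version

# Illustrative only: engine built inside the JetPack 4.4 DP container,
# deserialized against the JetPack 4.4.1 TensorRT runtime.
print(engine_compatible("7.1.0", "7.1.3"))  # False -> engine must be rebuilt
```

This is why the fix is to regenerate the engine on the target system rather than to patch the error itself.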
@AastaLLL, for the pose container we used the steps below to regenerate the engine file, right? Is there possibly a similar solution for the gaze container?
>>> import torch
>>> import torch2trt
>>> import trt_pose.models
>>> import tensorrt as trt
>>> MODEL = trt_pose.models.densenet121_baseline_att
>>> model = MODEL(18, 42).cuda().eval()
>>> model.load_state_dict(torch.load('/pose/generated/densenet121_baseline_att.pth'))
>>> data = torch.randn((1, 3, 224, 224)).cuda().float()
>>> model_trt = torch2trt.torch2trt(model, [data], fp16_mode=True, max_workspace_size=1 << 25, log_level=trt.Logger.VERBOSE)
>>> torch.save(model_trt.state_dict(), '/pose/generated/densenet121_baseline_att_trt.pth')
>>> exit()