When I tried to run the demos from GitHub - NVIDIA-AI-IOT/jetson-cloudnative-demo: Multi-container demo for Jetson Xavier NX and Jetson AGX Xavier, only one of them works for me (the DeepStream container with people detection); the other three fail. The error output is shown below. I found a thread that looks related, Jetson Xavier NX L4T 32.4.3 Cloud-Native Demo Issues, but I'm not sure it addresses my problem. It suggests recompiling the TensorRT engine first, but I don't know how to do that (my rough guess is sketched after the traceback below). Can you tell me how? Thank you!
(python3:1): GStreamer-CRITICAL **: 19:07:47.726: gst_caps_is_empty: assertion ‘GST_IS_CAPS (caps)’ failed
(python3:1): GStreamer-CRITICAL **: 19:07:47.726: gst_caps_truncate: assertion ‘GST_IS_CAPS (caps)’ failed
(python3:1): GStreamer-CRITICAL **: 19:07:47.726: gst_caps_fixate: assertion ‘GST_IS_CAPS (caps)’ failed
(python3:1): GStreamer-CRITICAL **: 19:07:47.726: gst_caps_get_structure: assertion ‘GST_IS_CAPS (caps)’ failed
(python3:1): GStreamer-CRITICAL **: 19:07:47.726: gst_structure_get_string: assertion ‘structure != NULL’ failed
(python3:1): GStreamer-CRITICAL **: 19:07:47.727: gst_mini_object_unref: assertion ‘mini_object != NULL’ failed
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Allocating new output: 960x544 (x 11), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3605: Send OMX_EventPortSettingsChanged: nFrameWidth = 960, nFrameHeight = 540
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (896) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=3, duration=-1
Using winsys: x11
[TensorRT] ERROR: coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "run_pose_pipeline.py", line 37, in <module>
    engine = PoseEngine(ENGINE_PATH)
  File "/pose/pose.py", line 86, in __init__
    self.module.load_state_dict(torch.load(path))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 830, in load_state_dict
    load(self)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 825, in load
    state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
  File "/usr/local/lib/python3.6/dist-packages/torch2trt/torch2trt.py", line 309, in _load_from_state_dict
    self.context = self.engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
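From reading the traceback, the pose demo seems to load a torch2trt TRTModule checkpoint, so my rough guess is that "recompiling the TensorRT engine" means re-running the torch2trt conversion on my own board and overwriting that checkpoint. The sketch below is only what I have in mind; get_pose_model(), the input shape, and the output path are placeholders I made up, not the demo's real names.

import torch
from torch2trt import torch2trt

# Placeholder for whatever PyTorch model the demo actually converts (I don't know the real one).
model = get_pose_model().cuda().eval()

# Dummy input used for the conversion; the shape here is a guess.
data = torch.zeros((1, 3, 224, 224)).cuda()

# torch2trt builds a TensorRT engine on this device and wraps it in a TRTModule.
model_trt = torch2trt(model, [data])

# Save the converted module where the demo expects it (placeholder path;
# the demo loads it via ENGINE_PATH in run_pose_pipeline.py).
torch.save(model_trt.state_dict(), "/pose/engine.pth")

Is that roughly what recompiling means here, or is there a ready-made script in the container for it?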
For the Voice Demo, I got the following errors:
$ sudo docker run --runtime nvidia -it --rm --network host nvcr.io/nvidia/jetson-voice:r32.4.2 trtserver --model-control-mode=none --model-repository=models/repository/jasper-asr-streaming-vad/
I0929 19:16:40.715338 1 server.cc:115] Initializing TensorRT Inference Server
I0929 19:16:40.880480 1 server_status.cc:55] New status tracking for model ‘ctc-decoder-cpu-trt-vad-streaming’
I0929 19:16:40.880605 1 server_status.cc:55] New status tracking for model ‘feature-extractor-trt-vad-streaming’
I0929 19:16:40.880688 1 server_status.cc:55] New status tracking for model ‘jasper-asr-trt-ensemble-vad-streaming’
I0929 19:16:40.880843 1 server_status.cc:55] New status tracking for model ‘jasper-trt-decoder-streaming’
I0929 19:16:40.881277 1 server_status.cc:55] New status tracking for model ‘jasper-trt-encoder-streaming’
I0929 19:16:40.881331 1 server_status.cc:55] New status tracking for model ‘voice-activity-detector-trt-ctc-streaming’
I0929 19:16:40.881751 1 model_repository_manager.cc:675] loading: ctc-decoder-cpu-trt-vad-streaming:1
I0929 19:16:40.882264 1 model_repository_manager.cc:675] loading: feature-extractor-trt-vad-streaming:1
I0929 19:16:40.882641 1 model_repository_manager.cc:675] loading: jasper-trt-decoder-streaming:1
I0929 19:16:40.882993 1 custom_backend.cc:202] Creating instance ctc-decoder-cpu-trt-vad-streaming_0_0_cpu on CPU using libctcdecoder-cpu.so
I0929 19:16:40.883064 1 model_repository_manager.cc:675] loading: jasper-trt-encoder-streaming:1
I0929 19:16:40.883368 1 custom_backend.cc:205] Creating instance feature-extractor-trt-vad-streaming_0_gpu0 on GPU 0 (7.2) using libfeature-extractor.so
I0929 19:16:40.883559 1 model_repository_manager.cc:675] loading: voice-activity-detector-trt-ctc-streaming:1
I0929 19:16:40.884932 1 custom_backend.cc:205] Creating instance voice-activity-detector-trt-ctc-streaming_0_gpu0 on GPU 0 (7.2) using libvoice-activity-detector.so
I0929 19:16:40.930904 1 model_repository_manager.cc:829] successfully loaded ‘voice-activity-detector-trt-ctc-streaming’ version 1
E0929 19:16:43.708240 1 logging.cc:43] coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
E0929 19:16:43.709205 1 logging.cc:43] INVALID_STATE: std::exception
E0929 19:16:43.709377 1 logging.cc:43] INVALID_CONFIG: Deserialize the cuda engine failed.
E0929 19:16:43.709662 1 model_repository_manager.cc:832] failed to load ‘jasper-trt-decoder-streaming’ version 1: Internal: unable to create TensorRT engine
E0929 19:16:43.710278 1 logging.cc:43] coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
E0929 19:16:43.710481 1 logging.cc:43] INVALID_STATE: std::exception
E0929 19:16:43.710676 1 logging.cc:43] INVALID_CONFIG: Deserialize the cuda engine failed.
E0929 19:16:43.712234 1 model_repository_manager.cc:832] failed to load ‘jasper-trt-encoder-streaming’ version 1: Internal: unable to create TensorRT engine
I0929 19:16:47.113643 1 model_repository_manager.cc:829] successfully loaded ‘feature-extractor-trt-vad-streaming’ version 1
I0929 19:16:53.933496 1 model_repository_manager.cc:829] successfully loaded ‘ctc-decoder-cpu-trt-vad-streaming’ version 1
E0929 19:16:53.934104 1 model_repository_manager.cc:1087] Invalid argument: ensemble ‘jasper-asr-trt-ensemble-vad-streaming’ depends on ‘jasper-trt-decoder-streaming’ which has no loaded version
I0929 19:16:53.936571 1 model_repository_manager.cc:808] successfully unloaded ‘voice-activity-detector-trt-ctc-streaming’ version 1
I0929 19:16:54.137918 1 model_repository_manager.cc:808] successfully unloaded ‘feature-extractor-trt-vad-streaming’ version 1
I0929 19:16:54.249969 1 model_repository_manager.cc:808] successfully unloaded ‘ctc-decoder-cpu-trt-vad-streaming’ version 1
error: creating server: INTERNAL - failed to load all models
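For the jasper encoder/decoder models that fail the same way, my broader question is whether recompiling just means rebuilding the serialized engines from the original network definitions with the TensorRT version on my device. The sketch below is only my understanding from the TensorRT Python docs; I don't know whether these containers actually ship ONNX files, and "model.onnx" / "model.plan" are placeholder paths, not the demo's real ones.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine_from_onnx(onnx_path, plan_path):
    # Explicit-batch networks are required by the ONNX parser in TensorRT 7.x.
    flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(flag) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser, \
         builder.create_builder_config() as config:
        config.max_workspace_size = 1 << 28  # 256 MiB working space for the builder
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                raise RuntimeError("failed to parse " + onnx_path)
        engine = builder.build_engine(network, config)
        # Serialize the engine so it can be loaded as a .plan file.
        with open(plan_path, "wb") as f:
            f.write(engine.serialize())

build_engine_from_onnx("model.onnx", "model.plan")  # placeholder paths

If that is the right idea, where would I find the source models for these demos so I can regenerate the engines for my JetPack/TensorRT version?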