Got errors when running the jetson-cloudnative-demo

When I tried to run the demo from https://github.com/NVIDIA-AI-IOT/jetson-cloudnative-demo, only one of the four demos (the DeepStream container with people detection) runs; the other three fail, and the output below shows the error info. I found a possible solution in the topic "Jetson Xavier NX L4T 32.4.3 Cloud-Native Demo Issues", but I'm not sure it applies to my problem. It suggests recompiling the TensorRT engine first, but I don't know how to do that. Can you tell me? Thank you!

(python3:1): GStreamer-CRITICAL **: 19:07:47.726: gst_caps_is_empty: assertion 'GST_IS_CAPS (caps)' failed
(python3:1): GStreamer-CRITICAL **: 19:07:47.726: gst_caps_truncate: assertion 'GST_IS_CAPS (caps)' failed
(python3:1): GStreamer-CRITICAL **: 19:07:47.726: gst_caps_fixate: assertion 'GST_IS_CAPS (caps)' failed
(python3:1): GStreamer-CRITICAL **: 19:07:47.726: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed
(python3:1): GStreamer-CRITICAL **: 19:07:47.726: gst_structure_get_string: assertion 'structure != NULL' failed
(python3:1): GStreamer-CRITICAL **: 19:07:47.727: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Allocating new output: 960x544 (x 11), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3605: Send OMX_EventPortSettingsChanged: nFrameWidth = 960, nFrameHeight = 540
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (896) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=3, duration=-1

Using winsys: x11
[TensorRT] ERROR: coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "run_pose_pipeline.py", line 37, in <module>
    engine = PoseEngine(ENGINE_PATH)
  File "/pose/pose.py", line 86, in __init__
    self.module.load_state_dict(torch.load(path))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 830, in load_state_dict
    load(self)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 825, in load
    state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
  File "/usr/local/lib/python3.6/dist-packages/torch2trt/torch2trt.py", line 309, in _load_from_state_dict
    self.context = self.engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'

For the Voice Demo, I got the following errors:
$ sudo docker run --runtime nvidia -it --rm --network host nvcr.io/nvidia/jetson-voice:r32.4.2 trtserver --model-control-mode=none --model-repository=models/repository/jasper-asr-streaming-vad/
I0929 19:16:40.715338 1 server.cc:115] Initializing TensorRT Inference Server
I0929 19:16:40.880480 1 server_status.cc:55] New status tracking for model 'ctc-decoder-cpu-trt-vad-streaming'
I0929 19:16:40.880605 1 server_status.cc:55] New status tracking for model 'feature-extractor-trt-vad-streaming'
I0929 19:16:40.880688 1 server_status.cc:55] New status tracking for model 'jasper-asr-trt-ensemble-vad-streaming'
I0929 19:16:40.880843 1 server_status.cc:55] New status tracking for model 'jasper-trt-decoder-streaming'
I0929 19:16:40.881277 1 server_status.cc:55] New status tracking for model 'jasper-trt-encoder-streaming'
I0929 19:16:40.881331 1 server_status.cc:55] New status tracking for model 'voice-activity-detector-trt-ctc-streaming'
I0929 19:16:40.881751 1 model_repository_manager.cc:675] loading: ctc-decoder-cpu-trt-vad-streaming:1
I0929 19:16:40.882264 1 model_repository_manager.cc:675] loading: feature-extractor-trt-vad-streaming:1
I0929 19:16:40.882641 1 model_repository_manager.cc:675] loading: jasper-trt-decoder-streaming:1
I0929 19:16:40.882993 1 custom_backend.cc:202] Creating instance ctc-decoder-cpu-trt-vad-streaming_0_0_cpu on CPU using libctcdecoder-cpu.so
I0929 19:16:40.883064 1 model_repository_manager.cc:675] loading: jasper-trt-encoder-streaming:1
I0929 19:16:40.883368 1 custom_backend.cc:205] Creating instance feature-extractor-trt-vad-streaming_0_gpu0 on GPU 0 (7.2) using libfeature-extractor.so
I0929 19:16:40.883559 1 model_repository_manager.cc:675] loading: voice-activity-detector-trt-ctc-streaming:1
I0929 19:16:40.884932 1 custom_backend.cc:205] Creating instance voice-activity-detector-trt-ctc-streaming_0_gpu0 on GPU 0 (7.2) using libvoice-activity-detector.so
I0929 19:16:40.930904 1 model_repository_manager.cc:829] successfully loaded 'voice-activity-detector-trt-ctc-streaming' version 1
E0929 19:16:43.708240 1 logging.cc:43] coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
E0929 19:16:43.709205 1 logging.cc:43] INVALID_STATE: std::exception
E0929 19:16:43.709377 1 logging.cc:43] INVALID_CONFIG: Deserialize the cuda engine failed.
E0929 19:16:43.709662 1 model_repository_manager.cc:832] failed to load 'jasper-trt-decoder-streaming' version 1: Internal: unable to create TensorRT engine
E0929 19:16:43.710278 1 logging.cc:43] coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
E0929 19:16:43.710481 1 logging.cc:43] INVALID_STATE: std::exception
E0929 19:16:43.710676 1 logging.cc:43] INVALID_CONFIG: Deserialize the cuda engine failed.
E0929 19:16:43.712234 1 model_repository_manager.cc:832] failed to load 'jasper-trt-encoder-streaming' version 1: Internal: unable to create TensorRT engine
I0929 19:16:47.113643 1 model_repository_manager.cc:829] successfully loaded 'feature-extractor-trt-vad-streaming' version 1
I0929 19:16:53.933496 1 model_repository_manager.cc:829] successfully loaded 'ctc-decoder-cpu-trt-vad-streaming' version 1
E0929 19:16:53.934104 1 model_repository_manager.cc:1087] Invalid argument: ensemble 'jasper-asr-trt-ensemble-vad-streaming' depends on 'jasper-trt-decoder-streaming' which has no loaded version
I0929 19:16:53.936571 1 model_repository_manager.cc:808] successfully unloaded 'voice-activity-detector-trt-ctc-streaming' version 1
I0929 19:16:54.137918 1 model_repository_manager.cc:808] successfully unloaded 'feature-extractor-trt-vad-streaming' version 1
I0929 19:16:54.249969 1 model_repository_manager.cc:808] successfully unloaded 'ctc-decoder-cpu-trt-vad-streaming' version 1
error: creating server: INTERNAL - failed to load all models

Hi @zhidong.su, those demo containers are for L4T R32.4.2 (JetPack 4.4 Developer Preview), so please use that version of JetPack to run the demo.

Hi @dusty_nv, thank you for your reply. I have already installed it following the instructions here: https://developer.nvidia.com/embedded/jetpack. I downloaded the Jetson Xavier NX Developer Kit SD card image and flashed it to the microSD card, then moved the system to an NVMe drive. But I still get the same errors.

Can you download and flash the L4T R32.4.2 image? Here is the link - https://developer.nvidia.com/jetson-nx-developer-kit-sd-card-image-44-dp

I am running the cloud-native demo on the Xavier NX with the following version info:

NVIDIA Jetson Xavier NX (Developer Kit Version)
L4T 32.4.3 [ JetPack 4.4 ]
Ubuntu 18.04.5 LTS
Kernel Version: 4.9.140-tegra
CUDA 10.2.89
CUDA Architecture: 7.2
OpenCV version: 4.1.1
OpenCV Cuda: NO
CUDNN: 8.0.0.180
TensorRT: 7.1.3.0
Vision Works: 1.6.0.501
VPI: 0.3.7
But even so, the demo is not working. Only one container (DeepStream) launches properly; the Voice container launches, but I am unable to interact with it.
I am getting the following errors:

./run_demo.sh
This demo will launch 4 containerized models. It will take arround 2.5 minutes for the demo to fully launch.
Please close all applications like web browser,etc, before launching this demo
Press [Enter] after making sure that the USB Headset with mic is connected to Jetson Xavier NX Developer Kit and the mic input is enabled in Ubuntu sound settings
access control disabled, clients can connect from any host
[sudo] password for driverless:
Launching DeepStream Container
Launching TRTIS server
Launching Voice Container
Launching Pose Container
Launching Gaze Container
Firing up inference engines
Arranging windows …
X Error of failed request: BadWindow (invalid Window parameter)
  Major opcode of failed request: 18 (X_ChangeProperty)
  Resource id in failed request: 0x3e00001
  Serial number of failed request: 13
  Current serial number in output stream: 15
X Error of failed request: BadWindow (invalid Window parameter)
  Major opcode of failed request: 18 (X_ChangeProperty)
  Resource id in failed request: 0x4200001
  Serial number of failed request: 13
  Current serial number in output stream: 15
X Error of failed request: BadWindow (invalid Window parameter)
  Major opcode of failed request: 3 (X_GetWindowAttributes)
  Resource id in failed request: 0x3e00001
  Serial number of failed request: 23
  Current serial number in output stream: 24
X Error of failed request: BadWindow (invalid Window parameter)
  Major opcode of failed request: 12 (X_ConfigureWindow)
  Resource id in failed request: 0x3e00001
  Serial number of failed request: 18
  Current serial number in output stream: 20
X Error of failed request: BadWindow (invalid Window parameter)
  Major opcode of failed request: 3 (X_GetWindowAttributes)
  Resource id in failed request: 0x4200001
  Serial number of failed request: 23
  Current serial number in output stream: 24
X Error of failed request: BadWindow (invalid Window parameter)
  Major opcode of failed request: 12 (X_ConfigureWindow)
  Resource id in failed request: 0x4200001
  Serial number of failed request: 18
  Current serial number in output stream: 20
Press [Enter] key to exit and kill the demo

Hi babsikh,

Please open a new topic for your issue. Thanks.
