Cannot run example in DeepStream Docker container

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Xavier NX (developer kit version)
• DeepStream Version: DeepStream 6.0.1
• JetPack Version (valid for Jetson only): JetPack 4.6.2 (L4T 32.7.2)
• TensorRT Version: 8.2.1.8
• CUDA Version: 10.2.300
• cuDNN Version: 8.2.1.32
• OpenCV Version: 4.1.1
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): I am running a DeepStream Docker container, but deepstream-app does not work when I run it inside the container (a sketch of a typical container launch command follows the log below). The following is the message:
/deepstream-app# deepstream-app -c …/…/…/…/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
(Argus) Error FileOperationFailed: Connecting to nvargus-daemon failed: No such file or directory (in src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 205)
(Argus) Error FileOperationFailed: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 106)

(gst-plugin-scanner:4811): GStreamer-WARNING **: 07:59:01.512: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:4811): GStreamer-WARNING **: 07:59:01.617: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory

Using winsys: x11
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/…/…/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:04.950211504 4810 0x362d88f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 6]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/…/…/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:04.978698820 4810 0x362d88f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 6]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/…/…/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:04.978839783 4810 0x362d88f0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 6]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
ERROR: [TRT]: 2: [utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 16}, 12653},)
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:08.685489881 4810 0x362d88f0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 6]: build engine file failed
0:00:08.714203663 4810 0x362d88f0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 6]: build backend context failed
0:00:08.714319665 4810 0x362d88f0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 6]: generate backend failed, check config file settings
0:00:08.714559607 4810 0x362d88f0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<secondary_gie_2> error: Failed to create NvDsInferContext instance
0:00:08.714669114 4810 0x362d88f0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<secondary_gie_2> error: Config file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_secondary_carmake.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:707: Failed to set pipeline to PAUSED
Quitting
ERROR from secondary_gie_2: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_2:
Config file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_secondary_carmake.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

I have migrated the system to an additional disk; could this be the problem?
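For context, I started the container roughly the way the NGC deepstream-l4t page describes; the sketch below shows a typical launch command, not necessarily my exact one (the 6.0.1-samples image tag and the mounts are assumptions):

# Typical deepstream-l4t container launch on a Jetson (sketch only; adjust image tag and mounts)
xhost +
sudo docker run -it --rm --net=host --runtime nvidia \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix/:/tmp/.X11-unix \
    -w /opt/nvidia/deepstream/deepstream-6.0 \
    nvcr.io/nvidia/deepstream-l4t:6.0.1-samples

The Argus "Connecting to nvargus-daemon failed" lines at the top of the log are, I believe, only because the camera daemon socket is not mounted into the container; they should not matter for this file-source sample config.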
ERROR: [TRT]: 2: [utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 16}, 12653},)

It's a known issue with JetPack 4.6.1 on the Xavier NX 16GB. You can upgrade to the latest JetPack version, or you can check this topic, where there is a temporary workaround:
Runtime error with Deepstream 6.0.1 while executing examples - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
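If it helps while deciding between the two options, here is a quick sketch (not part of the workaround itself) to confirm which L4T and TensorRT builds are installed; run these on the Jetson host:

# Print the L4T release string (R32.7.2 corresponds to JetPack 4.6.2)
cat /etc/nv_tegra_release

# List the installed TensorRT / nvinfer packages and their versions
dpkg -l | grep -E 'nvinfer|tensorrt'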

Thanks so much! I will try it next week.

There's nothing at the 百度网盘 (Baidu Netdisk) link; it says the link does not exist.
Can you give me another link?

Link: 百度网盘 (Baidu Netdisk; the page asks you to enter the extraction code)
Extraction code: erfp

Thanks!
