Error when running deepstream_app_source1_mrcnn.txt

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson Nano
• DeepStream Version
DeepStreamSDK 5.1.0
• JetPack Version (valid for Jetson only)
nvidia-l4t-core 32.5.1-20210219084526
• TensorRT Version
TensorRT 7.1.3 (from dpkg -l: libnvinfer-doc 7.1.3-1+cuda10.2, TensorRT development libraries, headers and documentation)
• NVIDIA GPU Driver Version (valid for GPU only)
/etc/tegra_release
R32 (release), REVISION: 5.1, GCID: 26202423, BOARD: t210ref, EABI: aarch64, DATE: Fri Feb 19 16:45:52 UTC 2021
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

Hello,

I’m running the following command via ssh from macOS 11.2.3 to my Jetson Nano:

sudo deepstream-app -c ./configs/tlt_pretrained_models/deepstream_app_source1_mrcnn.txt

and I obtain the following error:

Using winsys: x11
0:00:07.887629785 10714 0x37374e00 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/../../models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT Input 3x832x1344
1 OUTPUT kFLOAT generate_detections 100x6
2 OUTPUT kFLOAT mask_head/mask_fcn_logits/BiasAdd 100x2x28x28

0:00:07.887837342 10714 0x37374e00 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/../../models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_fp16.engine
0:00:08.104361586 10714 0x37374e00 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/config_infer_primary_mrcnn.txt sucessfully
** ERROR: main:675: Could not open X Display
Quitting
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
App run failed

I have downloaded the .etlt model from

and installed the TRT OSS plugin 7.1.

Do you have any idea how to solve this problem?

Are you running the application over a remote connection? In that case there is no X display available to the app.
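
A quick way to confirm this from the ssh session (a minimal sketch, assuming a standard L4T/Ubuntu setup; xdpyinfo is just one example tool and comes from the x11-utils package):

# prints nothing if the ssh session has no X display assigned
echo $DISPLAY

# exits non-zero and prints an error if no X server is reachable
xdpyinfo >/dev/null 2>&1 && echo "X display reachable" || echo "no X display"

If both report no display, the pipeline cannot render to screen from that session.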

Please disable the display output by setting type=1 in the [sink0] group of the deepstream_app_source1_mrcnn.txt file and run again.
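
For reference, the [sink0] group would then look roughly like the following. This is only a sketch: type=1 selects FakeSink (no rendering), and the remaining keys should stay as they already are in your config.

[sink0]
enable=1
# 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=1
sync=0
source-id=0

If you later attach a monitor to the Nano, you can switch back to an on-screen sink (for example type=2 for EglSink).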

Thank you very much, that solved the problem!
