Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): NVIDIA AGX Xavier
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1
• TensorRT Version: 8.5.2.2
When I run the sample from the local console or from the graphical shell, everything works fine:

$ cd /opt/nvidia/deepstream/deepstream/sources/apps/deepstream_python_apps/apps/deepstream-test1-rtsp-out
$ python3 deepstream_test1_rtsp_out.py -i /opt/nvidia/deepstream/deepstream-6.2/samples/streams/sample_720p.mp4
WARNING: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine opened error
0:00:53.771687865 332764 0x1177aa70 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1950> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:53.882259100 332764 0x1177aa70 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
Frame Number=0 Number of Objects=10 Vehicle_count=6 Person_count=4
H264: Profile = 66, Level = 0
Frame Number=1 Number of Objects=9 Vehicle_count=5 Person_count=4
Frame Number=2 Number of Objects=8 Vehicle_count=4 Person_count=4
NVMEDIA: Need to set EMC bandwidth : 846000
Frame Number=3 Number of Objects=11 Vehicle_count=5 Person_count=6
NVMEDIA: Need to set EMC bandwidth : 846000
NVMEDIA_ENC: bBlitMode is set to TRUE
Frame Number=4 Number of Objects=7 Vehicle_count=4 Person_count=3
Frame Number=5 Number of Objects=8 Vehicle_count=5 Person_count=3
Frame Number=6 Number of Objects=10 Vehicle_count=6 Person_count=4
Frame Number=7 Number of Objects=10 Vehicle_count=6 Person_count=4
However, when I run the same command from an SSH client session, an error occurs:
923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
WARNING: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine opened error
0:00:53.623642790 445036 0xff13470 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1950> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:53.889648070 445036 0xff13470 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Error: gst-stream-error-quark: Internal data stream error. (1): gstbaseparse.c(3666): gst_base_parse_loop (): /GstPipeline:pipeline0/GstH264Parse:h264-parser:
streaming stopped, reason not-negotiated (-4)
nvstreammux: Successfully handled EOS for source_id=0
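For comparing the local run against the SSH run, the per-frame counters that the sample prints can be aggregated with a short script. This is just a sketch for log analysis, not part of the DeepStream sample; the regex assumes the exact "Frame Number=... Number of Objects=..." format shown in the logs above.

```python
import re

# Pattern for the per-frame counter lines printed by deepstream_test1_rtsp_out.py
# (format taken from the log excerpt above).
FRAME_RE = re.compile(
    r"Frame Number=(\d+) Number of Objects=(\d+) "
    r"Vehicle_count=(\d+) Person_count=(\d+)"
)

def summarize(log_text: str) -> dict:
    """Aggregate frame count and object totals from a captured console log."""
    frames = [tuple(map(int, m.groups())) for m in FRAME_RE.finditer(log_text)]
    return {
        "frames": len(frames),
        "objects": sum(f[1] for f in frames),
        "vehicles": sum(f[2] for f in frames),
        "persons": sum(f[3] for f in frames),
    }

if __name__ == "__main__":
    import sys
    # Usage: python3 summarize_log.py < captured_console_output.txt
    print(summarize(sys.stdin.read()))
```

In the working run above, summarizing the eight frame lines would show output being produced; in the SSH run, zero matched frames confirms the pipeline stopped before any buffer reached the on-screen-display stage.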