Internal data stream error while running the deepstream-testsr-app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson
• DeepStream Version
6.0
• JetPack Version (valid for Jetson only)
4.6.2-b5
• TensorRT Version
8.2.1.9-1+cuda10.2

Hi, I am trying to run the sample app deepstream-testsr on my Jetson Nano device over an RTSP stream. In the /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-testsr directory, I ran the following command:

$ sudo ./deepstream-testsr-app <rtsp uri>

The output looks like this:

Using winsys: x11 
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:02.780518796 29143   0x55d0d75b50 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:02.781702403 29143   0x55d0d75b50 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:02.781753758 29143   0x55d0d75b50 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
0:01:12.884807634 29143   0x55d0d75b50 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:01:12.943465781 29143   0x55d0d75b50 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstestsr_pgie_config.txt sucessfully
Running...
Recording started..
In cb_newpad
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
H264: Profile = 66, Level = 0 
NVMEDIA_ENC: bBlitMode is set to TRUE 
0:01:13.582023866 29143   0x55d0d3ac00 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:01:13.582104961 29143   0x55d0d3ac00 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason error (-5)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2288): gst_nvinfer_output_loop (): /GstPipeline:dstest-sr-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason error (-5)
** ERROR: <RunUserCallback:207>: No video stream found
Returned, stopping playback
Deleting pipeline

Any idea how to solve this? Thanks.

The error shows that no video stream can be found. Can you share the command you used to start this program? Please also check the video stream status with ffmpeg -i or gst-discoverer-1.0.

Hi, this is the command I used to start the program (I have edited it to hide the RTSP URI):

secquraise@ubuntu:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-testsr$ sudo ./deepstream-testsr-app rtsp://<username@ip>

and this is the output of gst-discoverer-1.0:

secquraise@ubuntu:~$ gst-discoverer-1.0 rtsp://<username@ip>
Analyzing rtsp://<username@ip>
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Done discovering rtsp://<username@ip>
Analyzing URI timed out

Topology:
  container: application/rtsp
    unknown: application/x-rtp
      video: H.264 (High Profile)

Properties:
  Duration: 99:99:99.999999999
  Seekable: no
  Live: yes
  Tags: 
      video codec: H.264 (High Profile)

  1. Did you modify the code and configuration?
  2. Need more logs. Please execute “export GST_DEBUG=6”, then run again; you might redirect the log to a file, for example as shown below.
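
A minimal sketch (passing GST_DEBUG through sudo so the app actually sees it, since sudo does not keep exported variables by default, and sending both stdout and stderr to the same log file; the app name and URI placeholder are the ones from the first post):

sudo GST_DEBUG=6 ./deepstream-testsr-app <rtsp uri> > 1.log 2>&1
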
  1. I have modified the dstestsr_pgie_config.txt file to add the file paths like this:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0
cluster-mode=2

[class-attrs-all]
pre-cluster-threshold=0.2
topk=20
nms-iou-threshold=0.5

The rest is unmodified.

  2. After executing “export GST_DEBUG=6” and running again, the output looks like this:
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:02.922491758 18962   0x5571b3ef50 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:02.923799314 18962   0x5571b3ef50 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:02.923857596 18962   0x5571b3ef50 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
0:01:20.980713277 18962   0x5571b3ef50 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine successfully
0:01:21.330217405 18962   0x5571b3ef50 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstestsr_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
H264: Profile = 66, Level = 0 
0:01:22.540183004 18962   0x5571b03c00 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:01:22.540240348 18962   0x5571b03c00 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason error (-5)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2288): gst_nvinfer_output_loop (): /GstPipeline:dstest-sr-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason error (-5)
NVMEDIA_ENC: bBlitMode is set to TRUE 

And here are the contents of the log file

secquraise@ubuntu:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-testsr$ cat 1.log
Now playing: rtsp://<username>@<ip>

Using winsys: x11 
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

Running...
Recording started..
In cb_newpad
** ERROR: <RunUserCallback:207>: No video stream found
Returned, stopping playback
Deleting pipeline

There is an error “No video stream found”; please refer to /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-testsr/README.
As point 4 says:
4. Smart record needs I-frames to record videos. So if in case
“No video stream found” error is encountered, it is quite possible that
from the given rtsp source, I-frames are not received by the application
for the given recording interval. Try changing the rtsp source or update the
above mentioned parameters accordingly.

I don’t think there is any problem with the RTSP source. Just now I tried running the deepstream-test5 app with smart video recording enabled for the same RTSP source, with the following configuration:

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=<username>@<ip>
num-sources=1
gpu-id=0
nvbuf-memory-type=0
# smart record specific fields, valid only for source type=4
# 0 = disable, 1 = through cloud events, 2 = through cloud + local events
smart-record=2
# 0 = mp4, 1 = mkv
smart-rec-container=0
smart-rec-file-prefix=smr
smart-rec-dir-path = /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test5
# smart record cache size in seconds
smart-rec-cache=15
# default duration of recording in seconds.
smart-rec-default-duration=10
# duration of recording in seconds.
# this will override default value.
smart-rec-duration=7
# seconds before the current time to start recording.
smart-rec-start-time=2
# value in seconds to dump video stream.
smart-rec-interval=7

The pipeline is working and generates an output video with all the tracked objects, but the smart-recording output is just an empty MP4 file.

Does this happen every time? Can you see the video with this command?
gst-launch-1.0 uridecodebin uri=xxx ! nvvideoconvert ! autovideosink
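
If the board is headless, a sketch of a variant that verifies decode without a display (fpsdisplaysink is assumed to be available from gst-plugins-bad; fakesink replaces the display sink, and with -v the measured framerate is printed instead of rendering):

gst-launch-1.0 -v uridecodebin uri=xxx ! nvvideoconvert ! fpsdisplaysink text-overlay=false video-sink=fakesink sync=false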

I can’t reproduce this issue with the native test5 app and your config. To narrow down the issue, can you check whether I-frames are actually received? Or can you change the RTSP source? You might use a virtual RTSP server if no physical camera is available.
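
One way to check for I-frames, assuming ffprobe (from the ffmpeg package mentioned earlier) is installed; each received I-frame shows up as a “frame,I” line:

ffprobe -v error -select_streams v:0 -show_frames -show_entries frame=pict_type -of csv rtsp://<username@ip> | head -n 100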

Hi

  1. I tried running sudo gst-launch-1.0 uridecodebin uri=rtsp://<uname>@<ip> ! nvvideoconvert ! autovideosink
    Here is the output:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://<uname>@<ip>
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 1
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 

There is no display device attached to the Nano, so I can’t tell whether the video actually played or not.

  2. Regarding the test5 app: in the configuration settings posted earlier, I changed smart-rec-container to mkv, and smart recording is working fine now and generates meaningful videos. I need to configure smart recording to start based on a local event such as an object detection; that’s why I’m trying to run the testsr app.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Sorry for the late reply. Here is a workaround: you might modify SMART_REC_CONTAINER to 1 in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-testsr/deepstream_test_sr_app.c; it will then use MKV instead of MP4.
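
A sketch of rebuilding after editing the source (paths are the ones from this thread; the CUDA_VER value matches the CUDA 10.2 listed in the setup above, and the make step follows the standard DeepStream sample workflow, so treat it as an assumption):

cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-testsr
# edit deepstream_test_sr_app.c: set SMART_REC_CONTAINER to 1 (MKV)
sudo make CUDA_VER=10.2
sudo ./deepstream-testsr-app rtsp://<username@ip>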
