Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6.2-b5
• TensorRT Version: 8.2.1.9-1+cuda10.2
Hi, I am trying to run the sample app deepstream-testsr on my Jetson Nano over an RTSP stream. In the /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-testsr directory, I ran the following command:
$ sudo ./deepstream-testsr-app <rtsp uri>
The output looks like this:
Using winsys: x11
Opening in BLOCKING MODE
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:02.780518796 29143 0x55d0d75b50 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:02.781702403 29143 0x55d0d75b50 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:02.781753758 29143 0x55d0d75b50 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
0:01:12.884807634 29143 0x55d0d75b50 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:01:12.943465781 29143 0x55d0d75b50 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstestsr_pgie_config.txt sucessfully
Running...
Recording started..
In cb_newpad
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 66, Level = 0
NVMEDIA_ENC: bBlitMode is set to TRUE
0:01:13.582023866 29143 0x55d0d3ac00 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:01:13.582104961 29143 0x55d0d3ac00 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason error (-5)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2288): gst_nvinfer_output_loop (): /GstPipeline:dstest-sr-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason error (-5)
** ERROR: <RunUserCallback:207>: No video stream found
Returned, stopping playback
Deleting pipeline
The error shows that the video stream cannot be found. Can you share the command you used to start this program? Please also check the video stream status with ffmpeg -i <rtsp uri> or gst-discoverer-1.0 <rtsp uri>.
There is an error “No video stream found”; please refer to /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-testsr/README, which says in point 4:
4. Smart record needs I-frames to record videos. So if the “No video stream found” error is encountered, it is quite possible that, from a given rtsp source, I-frames are not received by the application for a given recording interval. Try changing the rtsp source or update the above mentioned parameters accordingly.
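One way to check this is with a small GStreamer program that probes the depayloaded stream and prints every key frame it sees (a buffer without the DELTA_UNIT flag is an I-frame). This is my own sketch, not part of the README, and it assumes the RTSP source carries H.264, hence rtph264depay/h264parse:

/* iframe_check.c - print the I-frames arriving from an RTSP source.
 * Build: gcc iframe_check.c -o iframe_check $(pkg-config --cflags --libs gstreamer-1.0) */
#include <gst/gst.h>

static GstPadProbeReturn
buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  /* Buffers flagged DELTA_UNIT are P/B frames; everything else is a key frame. */
  if (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT))
    g_print ("I-frame at %" GST_TIME_FORMAT "\n",
        GST_TIME_ARGS (GST_BUFFER_PTS (buf)));
  return GST_PAD_PROBE_OK;
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);
  if (argc < 2) {
    g_printerr ("Usage: %s <rtsp uri>\n", argv[0]);
    return -1;
  }
  gchar *desc = g_strdup_printf (
      "rtspsrc location=%s ! rtph264depay ! h264parse name=p ! fakesink",
      argv[1]);
  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch (desc, &err);
  g_free (desc);
  if (!pipeline) {
    g_printerr ("Failed to build pipeline: %s\n", err ? err->message : "unknown");
    return -1;
  }
  GstElement *parse = gst_bin_get_by_name (GST_BIN (pipeline), "p");
  GstPad *srcpad = gst_element_get_static_pad (parse, "src");
  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER, buffer_probe, NULL, NULL);
  gst_object_unref (srcpad);
  gst_object_unref (parse);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* Watch for ~30 s: a healthy source should deliver an I-frame once per
   * GOP, typically every second or two. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, 30 * GST_SECOND,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}

If no I-frame lines appear within the recording window, the README's point 4 is the likely explanation.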
I don’t think there is any problem with the RTSP source. Just now I tried running the deepstream-test5 app with smart video recording enabled for the same RTSP source, using the following configuration:
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=<username>@<ip>
num-sources=1
gpu-id=0
nvbuf-memory-type=0
# smart record specific fields, valid only for source type=4
# 0 = disable, 1 = through cloud events, 2 = through cloud + local events
smart-record=2
# 0 = mp4, 1 = mkv
smart-rec-container=0
smart-rec-file-prefix=smr
smart-rec-dir-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test5
# smart record cache size in seconds
smart-rec-cache=15
# default duration of recording in seconds.
smart-rec-default-duration=10
# duration of recording in seconds.
# this will override default value.
smart-rec-duration=7
# seconds before the current time to start recording.
smart-rec-start-time=2
# value in seconds to dump video stream.
smart-rec-interval=7
The pipeline is working and it is generating an output video with all the tracked objects. But the output video of smart recording is just an empty mp4 file.
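(For clarity, this is my reading of these fields rather than anything stated in the thread: with smart-rec-start-time=2 and smart-rec-duration=7, each triggered clip should span roughly from 2 seconds before the trigger to 5 seconds after it, about 7 seconds in total, which the 15-second cache easily covers; smart-rec-interval=7 then fires a new test recording every 7 seconds.)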
Does this happen every time? Can you see the video with this command?
gst-launch-1.0 uridecodebin uri=xxx ! nvvideoconvert ! autovideosink
I can’t reproduce this issue with the native test5 app and your cfg. To narrow it down, can you check whether I-frames are actually received? Or can you change the RTSP source? You might use a virtual RTSP server if you have no physical camera.
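For reference, such a virtual server can be as small as the canonical gst-rtsp-server test-launch example. The sketch below is my own adaptation (it assumes the gstreamer-rtsp-server-1.0 development package is installed); it serves an H.264 test pattern with an I-frame forced every 30 frames, which satisfies smart record's I-frame requirement:

/* rtsp_test_server.c - serve a test pattern at rtsp://<host>:8554/test
 * Build: gcc rtsp_test_server.c -o rtsp_test_server \
 *   $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0) */
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  GstRTSPServer *server = gst_rtsp_server_new ();   /* default port 8554 */
  GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points (server);
  GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new ();
  /* key-int-max=30 forces an I-frame every 30 frames (~1 s at 30 fps). */
  gst_rtsp_media_factory_set_launch (factory,
      "( videotestsrc is-live=true ! x264enc key-int-max=30 tune=zerolatency "
      "! rtph264pay name=pay0 pt=96 )");
  gst_rtsp_media_factory_set_shared (factory, TRUE);
  gst_rtsp_mount_points_add_factory (mounts, "/test", factory);
  g_object_unref (mounts);
  gst_rtsp_server_attach (server, NULL);
  g_print ("Stream ready at rtsp://127.0.0.1:8554/test\n");
  g_main_loop_run (loop);
  return 0;
}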
I tried running sudo gst-launch-1.0 uridecodebin uri=rtsp://<uname>@<ip> ! nvvideoconvert ! autovideosink
Here is the output:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://<uname>@<ip>
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 1
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
There is no display device attached to the Nano, so I can’t tell whether the video is played or not.
Regarding the test5 app: in the configuration posted earlier, I changed smart-rec-container to mkv, and smart recording now works fine and generates meaningful videos. However, I need smart recording to start based on a local event such as an object detection; that’s why I’m trying to run the testsr app.
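As an aside on that last point: the testsr app triggers recording from a periodic timer, and a detection-based trigger could in principle replace it with a pad probe that calls the smart-record API when objects appear. The following is a hedged sketch, not the shipped sample code; it assumes ctx is the NvDsSRContext the sample creates with NvDsSRCreate(), that the probe is attached to a pad downstream of nvinfer, and that the NvDsSRStart signature matches my reading of gst-nvdssr.h:

#include "gstnvdsmeta.h"
#include "gst-nvdssr.h"

static gboolean recording = FALSE;  /* local guard; reset it from the
                                       smart-record done callback */

static GstPadProbeReturn
detection_trigger_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  NvDsSRContext *ctx = (NvDsSRContext *) u_data;
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  NvDsMetaList *l;
  if (!batch_meta)
    return GST_PAD_PROBE_OK;
  for (l = batch_meta->frame_meta_list; l != NULL; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    /* Any detected object starts a clip: 2 s before now, 7 s long
     * (the same values as the test5 config above). */
    if (frame_meta->num_obj_meta > 0 && !recording) {
      NvDsSRSessionId sess_id = 0;
      if (NvDsSRStart (ctx, &sess_id, 2, 7, NULL) == NVDSSR_STATUS_OK)
        recording = TRUE;
    }
  }
  return GST_PAD_PROBE_OK;
}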
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Sorry for the late reply. Here is a workaround: you might modify SMART_REC_CONTAINER to 1 in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-testsr/deepstream_test_sr_app.c; it will then use mkv.
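For clarity, the change should amount to a one-line edit near the top of deepstream_test_sr_app.c (the macro name comes from the reply above; the original value 0 is my assumption, matching the mp4 default), followed by rebuilding with make in the deepstream-testsr directory:

/* 0 = mp4, 1 = mkv -- same mapping as the smart-rec-container field
 * in the test5 config; assumed default was 0 */
#define SMART_REC_CONTAINER 1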