Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version
8.5
• NVIDIA GPU Driver Version (valid for GPU only)
535.54.03
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I installed the DeepStream 6.3 runtime environment on an A40 server and then ran Python code that was developed against DeepStream 6.1.1, but the code fails to run.
root@SSTY-001:/home# GST_DEBUG=3 python3 start_program.py — the exception is as follows:
log.txt (15.1 KB)
root@SSTY-001:/home/cv# deepstream-app --version-all
deepstream-app version 6.3.0
DeepStreamSDK 6.3.0
CUDA Driver Version: 12.2
CUDA Runtime Version: 12.1
TensorRT Version: 8.5
cuDNN Version: 8.7
libNVWarp360 Version: 2.0.1d3
I ran the C version of deepstream_test1 and the Python version of deepstream_test1 on the A40 server, and both executed normally. The logs are as follows:
root@SSTY-001:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-test1# ./deepstream-test1-app /opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264
c.txt (112.8 KB)
root@SSTY-001:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1# python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264
python.txt (102.0 KB)
The Python code developed against 6.1.1 runs normally on the 3090 server. Now I am confused: is this a bug in DeepStream 6.3? Please help me.
The Python bindings are tested on DS-6.3.
If deepstream_test_1.py works fine, there should be no problem with the bindings.
From the logs you provided, it seems that there are some errors in the RTSP stream.
Does it work if you change the source to a sample stream?
0:00:01.143165847 583715 0x7f3f7c007240 WARN basesrc gstbasesrc.c:3072:gst_base_src_loop:<udpsrc4> error: Internal data stream error.
0:00:01.143197598 583715 0x7f3f7c007240 WARN basesrc gstbasesrc.c:3072:gst_base_src_loop:<udpsrc4> error: streaming stopped, reason not-linked (-1)
0:00:01.172914268 583715 0x7f3f80061e40 WARN v4l2videodec gstv4l2videodec.c:2305:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder0> Duration invalid, not setting latency
0:00:01.172956415 583715 0x7f3f80061e40 WARN v4l2bufferpool
Thank you for your reply. How should I change it? The following is my code snippet:
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
if streammux is None:
    logging.info('streammux is None')
    return
pipeline.add(streammux)
new_filter1_src_pad_buffer_probe = functools.partial(
    filter1_src_pad_buffer_probe, camera_list=camera_list,
    is_changed_queue=is_changed_queue, weekday=weekday,
    redis_client=redis_client, domain=domain, name_space=name_space)
for i in range(number_sources):
    # uri_name = camera_list[i]['rtsp']
    uri_name = '/opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264'
    source_bin = create_source_bin(i, uri_name)
    rtsp_list.append(uri_name)
    if not source_bin:
        logging.error("Unable to create source bin")
        continue  # skip this source instead of adding None to the pipeline
    g_source_bin_list.append(source_bin)
    pipeline.add(source_bin)
queue1 = Gst.ElementFactory.make("queue", "queue1")
pipeline.add(queue1)
def create_source_bin(index, filename):
    # Create a source GstBin to abstract this bin's content from the rest
    # of the pipeline.
    bin_name = "source-bin-%02d" % index
    logging.info(f"bin_name:{bin_name}")
    # Source element for reading from the uri.
    # We will use decodebin and let it figure out the container format of
    # the stream and the codec, and plug the appropriate demux and decode
    # plugins.
    bin = Gst.ElementFactory.make("uridecodebin", bin_name)
    if not bin:
        logging.error("Unable to create uri decode bin")
        return None
    # We set the input uri to the source element.
    bin.set_property("uri", filename)
    # Connect to the "pad-added" signal of the decodebin, which generates a
    # callback once a new pad for raw data has been created by the decodebin.
    bin.connect("pad-added", cb_newpad, index)
    bin.connect("child-added", decodebin_child_added, index)
    return bin
This 'Cuda failure: status=801' exception looks very strange. I searched in many places but did not find a similar case.
The uri must include the scheme, just like uri_name = 'file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264'
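As a side note, the two cases (RTSP URI vs. bare local path) can be normalized before they reach uridecodebin. This small helper is an illustration, not part of the original code; `pathlib.Path.as_uri` is the standard-library way to build a file:// URI:

```python
from pathlib import Path

def to_uri(name: str) -> str:
    """Return `name` unchanged if it already carries a URI scheme
    (rtsp://, file://, ...); otherwise convert a local filesystem path
    into the file:// URI that uridecodebin expects."""
    if "://" in name:
        return name
    return Path(name).absolute().as_uri()
```

Used as `bin.set_property("uri", to_uri(filename))`, this avoids the silent failure of passing a bare path where a URI is required.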
Driver version 525.125.06 is recommended for DS-6.3; you can refer to this table.
cudaErrorNotSupported = 801
This error indicates the attempted operation is not supported on the current system or device.
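For quick triage of "Cuda failure: status=N" messages in logs, a small lookup helper can translate the numeric status into the CUDA Runtime API error name. This is a hypothetical convenience (only a handful of well-known cudaError_t values are listed, not the full enum):

```python
# Subset of cudaError_t codes from the CUDA Runtime API, for log triage.
CUDA_ERROR_NAMES = {
    0: "cudaSuccess",
    1: "cudaErrorInvalidValue",
    2: "cudaErrorMemoryAllocation",
    100: "cudaErrorNoDevice",
    700: "cudaErrorIllegalAddress",
    801: "cudaErrorNotSupported",
}

def cuda_error_name(status: int) -> str:
    """Map a numeric CUDA runtime status to its symbolic name, if known."""
    return CUDA_ERROR_NAMES.get(status, f"unrecognized cudaError_t ({status})")
```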
I tried this method and it still fails with 'Cuda failure: status=801'. Yes, my graphics driver is newer than what DS 6.3 recommends, and for certain reasons the driver version on the A40 cannot be downgraded. If the '535.54.03' driver version is too new to run my custom Python code, then why can I run deepstream_test1? Doesn't running deepstream_test1 successfully mean the environment is fine?
On my other server with a 3090, it works normally. Running the stream-pulling Python code produces the following log output:
root@Precision-3660:/home/deepstream_source# GST_DEBUG=3 python3 start_program.py
3090_log.txt (11.7 KB)
Usually that proves the runtime environment is OK.
But CUDA is currently reporting errors, which are usually caused by the driver.
Really sad news; this problem has been bothering me for a long time. Can you help me solve it?
Can the following command line run normally? If it works, maybe there is a problem in your Python code.
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder \
! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path= /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt \
batch-size=1 unique-id=1 ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so \
! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_secondary_carcolor.txt batch-size=16 unique-id=2 infer-on-gie-id=1 infer-on-class-ids=0 \
! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nveglglessink
My file path is /opt/nvidia/deepstream/deepstream-6.3, and the output of executing your command is as follows:
Setting pipeline to PAUSED ...
0:00:02.843771774 1077501 0x5647eb636870 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 12x1x1
0:00:02.967151966 1077501 0x5647eb636870 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:02.971738597 1077501 0x5647eb636870 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<nvinfer1> [UID 2]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_secondary_carcolor.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.3/lib/libnvds_nvmultiobjecttracker.so
[NvTrackerParams::getConfigRoot()] !!![WARNING] Empty config file path is provided. Will go ahead with default values
[NvMultiObjectTracker] Initialized
0:00:05.478486643 1077501 0x5647eb636870 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:05.606546343 1077501 0x5647eb636870 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
0:00:05.611139990 1077501 0x5647eb636870 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_primary.txt sucessfully
Pipeline is PREROLLING ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
0:00:06.401232520 1077501 0x5647ea0c4de0 WARN nvinfer gstnvinfer.cpp:2214:gst_nvinfer_submit_input_buffer:<nvinfer1> error: Internal data stream error.
0:00:06.401258920 1077501 0x5647ea0c4de0 WARN nvinfer gstnvinfer.cpp:2214:gst_nvinfer_submit_input_buffer:<nvinfer1> error: streaming stopped, reason not-negotiated (-4)
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer1: Internal data stream error.
Additional debug info:
gstnvinfer.cpp(2214): gst_nvinfer_submit_input_buffer (): /GstPipeline:pipeline0/GstNvInfer:nvinfer1:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
[NvMultiObjectTracker] De-initialized
Freeing pipeline ...
It does not matter: /opt/nvidia/deepstream/deepstream is a soft link to /opt/nvidia/deepstream/deepstream-6.3.
Do you have a monitor? If not, try changing nveglglessink to fakesink.
If yes, export DISPLAY=:0 must be executed before running this command line.
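The same display-or-headless choice can be made programmatically when building a pipeline in Python. A minimal sketch (the element selection logic is an illustration, not from the original code):

```python
import os

def pick_sink_name() -> str:
    """Choose a sink element name: nveglglessink needs an X display
    (DISPLAY set in the environment); fakesink works headless."""
    return "nveglglessink" if os.environ.get("DISPLAY") else "fakesink"
```

The result would then be passed to the usual factory call, e.g. `sink = Gst.ElementFactory.make(pick_sink_name(), "sink")`.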
Thanks for your guidance. I don't have a monitor, so I replaced it with fakesink; the log output is as follows:
Setting pipeline to PAUSED ...
0:00:02.815060806 1094860 0x559aa46ebe90 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 12x1x1
0:00:02.941032389 1094860 0x559aa46ebe90 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:02.945841516 1094860 0x559aa46ebe90 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<nvinfer1> [UID 2]: Load new model:/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_secondary_carcolor.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvTrackerParams::getConfigRoot()] !!![WARNING] Empty config file path is provided. Will go ahead with default values
[NvMultiObjectTracker] Initialized
0:00:05.453458633 1094860 0x559aa46ebe90 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:05.581592790 1094860 0x559aa46ebe90 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
0:00:05.586195311 1094860 0x559aa46ebe90 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt sucessfully
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
nvstreammux: Successfully handled EOS for source_id=0
Got EOS from element "pipeline0".
Execution ended after 0:00:02.049730718
Setting pipeline to NULL ...
[NvMultiObjectTracker] De-initialized
Freeing pipeline ...
Looks normal.
Can you share your python code and configuration file ?
Thank you so much, you gave me hope
Hi, did you find anything wrong with the code?
I can't run your code because some files are missing, but I checked your code and found the error below.
batched-push-timeout is specified in µs (microseconds):
streammux.set_property('batched-push-timeout', 1 / 25)
Maybe batched-push-timeout=40000 is what you want.
But I don't think it will solve the CUDA error.
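To spell out the unit fix: the property takes microseconds, so a 1/25-second (25 FPS) timeout is 40000 µs, whereas `1 / 25` would be truncated to 0 when coerced to an integer property. A sketch of the conversion:

```python
def batched_push_timeout_us(fps: int) -> int:
    """Convert a per-frame interval at `fps` frames/second into the
    microsecond value expected by nvstreammux's batched-push-timeout."""
    return int(1_000_000 / fps)
```

With this, the call becomes `streammux.set_property('batched-push-timeout', batched_push_timeout_us(25))`.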
If you do not need to access the buffer from the CPU (for example with OpenCV), this is unnecessary:
if not is_aarch64():
    mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED)
    streammux.set_property("nvbuf-memory-type", mem_type)
    nvvidconv1.set_property("nvbuf-memory-type", mem_type)
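That advice can be captured in a small decision helper: only request unified memory on dGPU when buffers must actually be mapped on the CPU, and otherwise leave the element defaults alone. This is a sketch under the assumption that NVBUF_MEM_CUDA_UNIFIED has value 3 (in real code use pyds.NVBUF_MEM_CUDA_UNIFIED rather than the literal):

```python
def select_nvbuf_mem_type(need_cpu_access: bool, is_jetson: bool):
    """Return the nvbuf-memory-type value to set, or None to keep the
    element's default. Unified memory is only needed for CPU access
    (e.g. OpenCV via pyds.get_nvds_buf_surface) on discrete GPUs."""
    if is_jetson or not need_cpu_access:
        return None  # keep the default memory type
    return 3  # assumed value of pyds.NVBUF_MEM_CUDA_UNIFIED
```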
Thank you for your suggestion. Which files are missing that prevent the code from running? Maybe I can make some modifications to make it easier to run, because I am not very familiar with DeepStream and it is a headache to locate the problem by myself.