DeepStream 6.4

• Hardware Platform (Jetson / GPU): NVIDIA A2
• DeepStream Version: 6.4
• TensorRT Version: 12.2
• NVIDIA GPU Driver Version: 535.113.01

I am trying to run deepstream_test_1.py:

python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-6.4/samples/streams/sample_720p.h264

output

WARNING: [TRT]: Missing scale and zero-point for tensor block_4b_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor block_4b_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor block_4b_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor output_bbox/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor output_cov/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor output_cov/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor

0:03:25.288393535 15110 0x55bb9eef3ad0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2138> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 4x34x60

0:03:25.541761595 15110 0x55bb9eef3ad0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Frame Number=0 Number of Objects=13 Vehicle_count=8 Person_count=5
0:03:26.131818742 15110 0x55bb9ee88a40 WARN nvinfer gstnvinfer.cpp:2406:gst_nvinfer_output_loop: error: Internal data stream error.
0:03:26.131845352 15110 0x55bb9ee88a40 WARN nvinfer gstnvinfer.cpp:2406:gst_nvinfer_output_loop: error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2406): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Frame Number=1 Number of Objects=14 Vehicle_count=8 Person_count=6
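Before the error, the engine loads correctly: the implicit layer info above is consistent with a stride-16 DetectNet_v2-style detector. A quick sanity check (the stride of 16 and the 4-class count are my assumptions about TrafficCamNet, not stated in the log):

```python
# Back-of-envelope check (my own arithmetic, not part of the log): the
# TrafficCamNet detector downsamples the input by an assumed factor of 16,
# which matches the 34x60 output grid for a 544x960 input.
STRIDE = 16        # assumed downsampling factor of the ResNet18 backbone
NUM_CLASSES = 4    # assumed TrafficCamNet classes (car, bicycle, person, road sign)

in_h, in_w = 544, 960                        # input_1: 3x544x960
grid_h, grid_w = in_h // STRIDE, in_w // STRIDE
print(grid_h, grid_w)                        # -> 34 60
print(NUM_CLASSES * 4, grid_h, grid_w)       # output_bbox: 16x34x60 (4 box coords per class)
print(NUM_CLASSES, grid_h, grid_w)           # output_cov: 4x34x60 (one coverage map per class)
```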

Why did the video stream stop and produce this error?

I am using a Docker container and followed the steps below:

docker run -it --rm --gpus all --name ds-new-img nvcr.io/nvidia/deepstream:6.4-triton-multiarch bash

/opt/nvidia/deepstream/deepstream/user_additional_install.sh

mkdir -p /opt/nvidia/deepstream/deepstream-6.3/sources/inference/bindings/export_pyds

cd /opt/nvidia/deepstream/deepstream-6.3/sources/inference/bindings/export_pyds/ && wget https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/releases/download/v1.1.10/pyds-1.1.10-py3-none-linux_x86_64.whl

pip3 install ./pyds-1.1.10-py3-none-linux_x86_64.whl

/opt/nvidia/deepstream/deepstream/update_rtpmanager.sh

I noticed you’re using an A2 GPU; this card doesn’t appear to have a display output.

Modify the code as follows

# sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
sink = Gst.ElementFactory.make("fakesink", "nvvideo-renderer")

If you want to see the output, encode it to a file or stream it over RTSP.
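As a hypothetical sketch (not from the sample app itself), a file-encoding branch can replace the render sink. This builds a gst-launch-style description that could be attached after nvdsosd via Gst.parse_launch() or tried with gst-launch-1.0; the element names are stock DeepStream/GStreamer plugins, and the output filename is my placeholder:

```python
# Sketch of an H.264 file-encoding sink branch; adapt the linking to your app.
encode_branch = " ! ".join([
    "nvvideoconvert",             # convert OSD output into an encoder-friendly format
    "nvv4l2h264enc",              # NVIDIA hardware H.264 encoder
    "h264parse",
    "qtmux",                      # MP4 container
    "filesink location=out.mp4",  # write the result to disk
])
print(encode_branch)
```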

I ran into an issue when testing with an RTSP source and deepstream_test_3.py, after applying the change you mentioned.

command

python3 deepstream_test_3.py -i rtsp://10.1.118.105:6008/stream --no-display --pgie nvinferserver -c /opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo/peoplenet/config_triton_infer_primary_peoplenet.txt

Output

{'input': ['rtsp://10.1.118.105:6008/stream'], 'configfile': '/opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo/peoplenet/config_triton_infer_primary_peoplenet.txt', 'pgie': 'nvinferserver', 'no_display': True, 'file_loop': False, 'disable_probe': False, 'silent': False}
number of sources 1
Creating Pipeline

Creating streamux

Creating source_bin 0

Creating source bin
source-bin-00
/opt/nvidia/deepstream/deepstream-6.4/sources/deepstream_python_apps/apps/deepstream-test3/deepstream_test_3.py:238: DeprecationWarning: Gst.Element.get_request_pad is deprecated
sinkpad= streammux.get_request_pad(padname)
Creating Pgie

Creating tiler

Creating nvvidconv

Creating nvosd

Creating Fakesink

Creating Code Parser

Creating Container

Creating Sink

At least one of the sources is live
WARNING: Overriding infer-config batch-size 0 with number of sources 1

Adding elements to Pipeline

Linking elements in the Pipeline

Now playing...
0 : rtsp://10.1.118.105:6008/stream
Starting pipeline

0:00:03.537154012 2070 0x5585dcbebaa0 WARN nvinferserver gstnvinferserver_impl.cpp:360:validatePluginConfig: warning: Configuration file batch-size reset to: 1
WARNING: infer_proto_utils.cpp:144 auto-update preprocess.network_format to IMAGE_FORMAT_RGB
INFO: infer_trtis_backend.cpp:218 TrtISBackend id:1 initialized model: peoplenet
Decodebin child added: source

Warning: gst-library-error-quark: Configuration file batch-size reset to: 1 (5): gstnvinferserver_impl.cpp(360): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference
Aborted (core dumped)
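For context, the batch-size warning above just means nvinferserver overrode the configured batch size with the source count. As a sketch (field names follow the Gst-nvinferserver protobuf config schema; my actual file may differ), the relevant fragment looks like:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1   # set to the number of sources to avoid the override warning
  backend {
    triton {
      model_name: "peoplenet"
    }
  }
}
```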

I have also uploaded the code I am using:
deepstream_test_3.txt (19.4 KB)

Configuration files:
config_triton_infer_primary_peoplenet.txt (1.2 KB)
configpbtxt.txt (930 Bytes)

Try the following command to start Docker:

docker run --gpus all --name ds-new-img --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream nvcr.io/nvidia/deepstream:6.4-triton-multiarch 

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.