DeepStream 6.0 - streaming stopped, reason not-negotiated (-4)

• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 470.82.01

Hello, I have a problem with DeepStream 6 GA. I have a working application in DeepStream 5.1, and I also tested it with DS 6 EA without any problem. Now I want to migrate to DS 6 GA, but I can't run the pipeline. I built all of the custom plugins, and the problem appears when I start processing. I use Python to run DeepStream. My pipeline is: uridecodebin->streammux->pgie->nvvideoconverter->videotemplate->nvosd->nvvideoconverter->capsfilter->encoder->codeparser->container->filesink, and my model is YoloV4. This is the error:

Unknown or legacy key specified 'is-classifier' for group [property]
Library Opened Successfully
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:02.814439448    92      0x341e750 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/yolo_face.trt
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x608x608
1   OUTPUT kFLOAT boxes           22743x1x4
2   OUTPUT kFLOAT confs           22743x1

0:00:02.814514103    92      0x341e750 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/yolo_face.trt
0:00:02.858437317    92      0x341e750 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/config_infer_primary_yoloV4_face.txt sucessfully
0:00:03.189184845    92      0x312e800 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:03.189205464    92      0x312e800 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2288): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)

Can you please help me?

You need to build the YoloV4 TRT engine in the DS 6.0 environment.
DS 6.0 uses a different TensorRT version than DS 5.1 and DS 6.0 EA did, so a TRT engine built for DS 5.1 or DS 6.0 EA can't be used with DS 6.0 GA.
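As an alternative to converting the engine by hand, you can let nvinfer rebuild it: if the config points at the model source and the referenced engine file is missing or fails to deserialize, nvinfer builds and serializes a fresh engine in the current environment. A hedged config sketch; the file names below are placeholders, not the poster's actual files:

```ini
[property]
# Placeholder model source; nvinfer can rebuild the TensorRT engine from it
# when the engine below is missing or was built with a different TRT version.
onnx-file=yolov4_face.onnx
# Regenerated on first run inside the DS 6.0 container.
model-engine-file=yolo_face_ds6.engine
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```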

I had already done that, but to be sure I converted it one more time. This time I saved the engine with a ".engine" extension (before it was ".trt"); the warning is gone, but the error still occurs:

Unknown or legacy key specified 'is-classifier' for group [property]
Library Opened Successfully
0:00:05.526918560    80      0x2d7e150 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/yolo_face.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x608x608
1   OUTPUT kFLOAT boxes           22743x1x4
2   OUTPUT kFLOAT confs           22743x1

0:00:05.526998241    80      0x2d7e150 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/yolo_face.engine
0:00:05.573256866    80      0x2d7e150 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/config_infer_primary_yoloV4_face.txt sucessfully
0:00:06.001721008    80      0x2a95800 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:06.001743611    80      0x2a95800 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2288): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)

From this, it seems there is a caps negotiation issue in your pipeline.

Could you try the pipeline "uridecodebin->streammux->pgie->nvvideoconverter->fakesink" first?
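The reduced pipeline can also be tried with gst-launch-1.0 before changing the Python app. A sketch that assembles the command; the input URI and config path are placeholders to substitute with your own:

```python
# Assemble the reduced debug pipeline as a gst-launch-1.0 command line.
# The URI and config-file-path below are placeholders.
elements = [
    "nvstreammux name=m batch-size=1 width=1920 height=1080",
    "nvinfer config-file-path=config_infer_primary_yoloV4_face.txt",
    "nvvideoconvert",
    "fakesink",
]
# uridecodebin links to the streammux request pad m.sink_0.
source = "uridecodebin uri=file:///path/to/input.mp4 ! m.sink_0"
cmd = "gst-launch-1.0 " + source + " " + " ! ".join(elements)
print(cmd)
```

If this command plays through, the negotiation failure lies in one of the removed downstream elements, which can then be re-added one at a time.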

Yes, with this pipeline it works, but now I have another issue. I want to use the NvDCF tracker, and I used the docker image nvcr.io/nvidia/deepstream:6.0-triton to create the environment, but there is no library for the NvDCF tracker:

root@76610ffe9fdd:/opt/nvidia/deepstream/deepstream/lib# ls
build                        libnvds_audiotransform.so              libnvds_logger.so                libnvdsgst_bufferpool.so
cvcore_libs                  libnvds_azure_edge_proto.so            libnvds_meta.so                  libnvdsgst_helper.so
gst-plugins                  libnvds_azure_proto.so                 libnvds_msgbroker.so             libnvdsgst_inferbase.so
libcuvidv4l2.so              libnvds_batch_jpegenc.so               libnvds_msgconv.so               libnvdsgst_meta.so
libhiredis.a                 libnvds_csvparser.so                   libnvds_msgconv_audio.so         libnvdsgst_smartrecord.so
libhiredis.so                libnvds_custom_sequence_preprocess.so  libnvds_nvmultiobjecttracker.so  libnvdsgst_tensor.so
libhiredis.so.1.0.1-dev      libnvds_dewarper.so                    libnvds_nvtxhelper.so            libnvdsinfer_custom_impl_Yolo.so
libhiredis.so.1.0.1-dev-ssl  libnvds_dsanalytics.so                 libnvds_opticalflow_dgpu.so      libnvdsinfer_custom_impl_fasterRCNN.so
libhiredis_ssl.a             libnvds_infer.so                       libnvds_osd.so                   libnvdsinfer_custom_impl_ssd.so
libhiredis_ssl.so            libnvds_infer_custom_parser_audio.so   libnvds_redis_proto.so           libnvv4l2.so
libiothub_client.so          libnvds_infer_server.so                libnvds_riva_asr_grpc.so         libnvv4lconvert.so
libiothub_client.so.1        libnvds_infercustomparser.so           libnvds_riva_tts.so              libnvvpi.so.1
libnvbuf_fdmap.so            libnvds_inferlogger.so                 libnvds_speech_riva.so           libnvvpi.so.1.1.12
libnvbufsurface.so           libnvds_inferutils.so                  libnvds_utils.so                 libv4l
libnvbufsurftransform.so     libnvds_kafka_proto.so                 libnvdsbufferpool.so             pyds.so
libnvds_amqp_proto.so        libnvds_lljpegdec.so                   libnvdsgst_audio.so              setup.py

So what should I do to be able to use the NvDCF tracker? Based on your example, the library should be in that location: https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-test2

It's in libnvds_nvmultiobjecttracker.so. Please refer to /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo/deepstream_app_config_yoloV3_tiny.txt for how to configure the NvDCF tracker:


[tracker]
enable=1
# For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_IOU.yml
ll-config-file=../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
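In the Python apps (e.g. deepstream-test2), a [tracker] group like the one above is read with configparser and each key is set as a property on the nvtracker element. A minimal sketch of that pattern; the Gst calls are shown only as comments since they need a live pipeline:

```python
import configparser

# Parse the [tracker] group the same way the deepstream_python_apps
# samples do, then hand each key to the nvtracker element as a property.
config = configparser.ConfigParser()
config.read_string("""
[tracker]
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
enable-batch-process=1
""")

tracker_cfg = dict(config["tracker"])

# With a live pipeline (requires gi/Gst, so shown as comments only):
# tracker = Gst.ElementFactory.make("nvtracker", "tracker")
# tracker.set_property("tracker-width", int(tracker_cfg["tracker-width"]))
# tracker.set_property("ll-lib-file", tracker_cfg["ll-lib-file"])
# tracker.set_property("ll-config-file", tracker_cfg["ll-config-file"])

print(tracker_cfg["tracker-width"], tracker_cfg["ll-config-file"])
```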

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.