• Hardware Platform (Jetson / GPU) Tesla T4 • DeepStream Version 6.0 • JetPack Version (valid for Jetson only) • TensorRT Version 8.0.1 • NVIDIA GPU Driver Version (valid for GPU only) 470.82.01
Hello, I have a problem with DeepStream 6 GA. I have a working application in DeepStream 5.1, and I also tested this application with DS 6 EA without any problem. Now I want to migrate to DS 6 GA, but I can’t run the pipeline. I built all of the custom plugins, and the problem occurs when I start processing. I use Python to run DeepStream. My pipeline: uridecodebin->streammux->pgie->nvvideoconverter->videotemplate->nvosd->nvvideoconverter->capsfilter->encoder->codeparser->container->filesink, and my model is YoloV4. This is my error:
Unknown or legacy key specified 'is-classifier' for group [property]
Library Opened Successfully
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:02.814439448 92 0x341e750 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/yolo_face.trt
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x608x608
1 OUTPUT kFLOAT boxes 22743x1x4
2 OUTPUT kFLOAT confs 22743x1
0:00:02.814514103 92 0x341e750 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/yolo_face.trt
0:00:02.858437317 92 0x341e750 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/config_infer_primary_yoloV4_face.txt sucessfully
0:00:03.189184845 92 0x312e800 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:03.189205464 92 0x312e800 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2288): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
You need to build the YoloV4 TRT engine in the DS 6.0 environment.
DS 6.0 GA uses a different TensorRT version than DS 5.1 and DS 6.0 EA did, so a TRT engine built for DS 5.1 or DS 6.0 EA can’t be used with DS 6.0 GA.
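A minimal sketch of acting on this advice: delete the stale engine files so that on the next run nvinfer rebuilds and re-serializes an engine under the TensorRT version shipped with DS 6.0 GA. The paths are taken from the logs above and are an assumption; adjust them to your setup.

```python
# Sketch: remove stale TRT engines so nvinfer regenerates them.
# Paths assumed from the logs above; adjust to your setup.
from pathlib import Path

ENGINE_DIR = Path("/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo")

for name in ("yolo_face.trt", "yolo_face.engine"):
    # missing_ok=True: no error if the file (or directory) is already gone
    (ENGINE_DIR / name).unlink(missing_ok=True)

# On the next run, nvinfer finds no usable engine at model-engine-file,
# rebuilds one from the network files referenced in the nvinfer config,
# and serializes it for reuse.
```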
I already did that, but to be sure I converted it one more time, this time saving the engine with a “.engine” extension (before it was “.trt”). The warning is gone, but the error still exists:
Unknown or legacy key specified 'is-classifier' for group [property]
Library Opened Successfully
0:00:05.526918560 80 0x2d7e150 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/yolo_face.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x608x608
1 OUTPUT kFLOAT boxes 22743x1x4
2 OUTPUT kFLOAT confs 22743x1
0:00:05.526998241 80 0x2d7e150 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/yolo_face.engine
0:00:05.573256866 80 0x2d7e150 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/config_infer_primary_yoloV4_face.txt sucessfully
0:00:06.001721008 80 0x2a95800 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:06.001743611 80 0x2a95800 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2288): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
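Separately from the caps-negotiation error, both logs begin with the “Unknown or legacy key specified 'is-classifier'” warning: nvinfer deprecated is-classifier in favor of network-type (0 = detector, 1 = classifier). Below is a sketch of migrating that key with Python’s stdlib configparser; the idea that your config carries is-classifier is from the logs, but the exact file contents are an assumption.

```python
# Sketch: replace the deprecated 'is-classifier' key in an nvinfer
# config's [property] group with the 'network-type' key.
import configparser

def migrate_infer_config(path: str) -> None:
    cp = configparser.ConfigParser()
    cp.optionxform = str  # keep key names exactly as written
    cp.read(path)
    prop = cp["property"]
    if "is-classifier" in prop:
        was_classifier = prop.pop("is-classifier") == "1"
        # network-type: 0 = detector, 1 = classifier
        prop.setdefault("network-type", "1" if was_classifier else "0")
        with open(path, "w") as f:
            # no spaces around '=' to match the DeepStream config style
            cp.write(f, space_around_delimiters=False)
```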
Yes, with this pipeline it works, but now I have another issue. I want to use the NvDCF tracker, and I used the docker image nvcr.io/nvidia/deepstream:6.0-triton to create the environment, but there is no library for the NvDCF tracker.
It’s in libnvds_nvmultiobjecttracker.so. Please refer to /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo/deepstream_app_config_yoloV3_tiny.txt for how to configure the NvDCF tracker:
[tracker]
enable=1
# For the NvDCF and DeepSORT trackers, tracker-width and tracker-height must each be a multiple of 32
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_IOU.yml
ll-config-file=../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
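In a Python app, a [tracker] group like the one above is typically read with configparser and its values pushed onto the nvtracker element via set_property. A minimal sketch of the parsing half, using only the stdlib so it runs without GStreamer; the apply_to callback stands in for tracker.set_property and is an assumption:

```python
# Sketch: read a deepstream-app style [tracker] group and hand each
# key/value to a setter callback (in a real app, something like
# functools.partial(tracker.set_property)).
import configparser
from typing import Callable

# Keys whose values nvtracker expects as integers
INT_KEYS = {"tracker-width", "tracker-height", "gpu-id",
            "enable-batch-process", "enable-past-frame"}

def load_tracker_config(path: str, apply_to: Callable[[str, object], None]) -> None:
    cp = configparser.ConfigParser()
    cp.optionxform = str  # keep key names exactly as written
    cp.read(path)
    for key, value in cp["tracker"].items():
        if key == "enable":  # deepstream-app flag, not an element property
            continue
        apply_to(key, int(value) if key in INT_KEYS else value)
```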