Issue while running YOLOv7

While running DeepStream with a YOLO model, there is an issue with the scaling factor (screenshot attached).

• Hardware Platform: GPU (RTX 3090)
• DeepStream Version: 7.0
• TensorRT Version: 10.13.2.6
• NVIDIA GPU Driver Version: 550.163.01

Here is the config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov7.onnx
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=coco.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
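For reference, `net-scale-factor` is the factor each input pixel is multiplied by before inference, and the long value in the config is just 1/255, which maps 8-bit pixel values (0–255) into the 0–1 range. A quick sanity check in plain Python (no DeepStream specifics assumed):

```python
# net-scale-factor from the config: it rescales 8-bit pixel values
# (0..255) into the 0..1 range the network expects.
scale = 0.0039215697906911373

# The configured value agrees with 1/255 to well within float32 precision.
assert abs(scale - 1 / 255) < 1e-8

# A fully saturated pixel (255) maps to ~1.0 after scaling.
print(round(255 * scale, 6))
```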

Could you try our sample deepstream_tools?

Thanks,

Now it is running perfectly for one stream.

But when I try to run two streams, it gives me an error.

I have tried exporting the model with both dynamic-batch and fixed batch-size outputs, but neither works.

The error is as follows:

ERROR: [TRT]: 3: [optimizationProfile.cpp::setDimensions::119] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/optimizationProfile.cpp::setDimensions::119, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; }))
ERROR: [TRT]: 3: [optimizationProfile.cpp::setDimensions::119] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/optimizationProfile.cpp::setDimensions::119, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; }))
ERROR: [TRT]: 3: [optimizationProfile.cpp::setDimensions::119] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/optimizationProfile.cpp::setDimensions::119, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; }))
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1439 Explicit config dims is invalid
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1120 Failed to configure builder options
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:821 failed to build trt engine.
0:00:10.539849682 20739 0x608d9c2de5f0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2129> [UID = 1]: build engine file failed
0:00:10.775039587 20739 0x608d9c2de5f0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2215> [UID = 1]: build backend context failed
0:00:10.776133848 20739 0x608d9c2de5f0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1352> [UID = 1]: generate backend failed, check config file settings
0:00:10.778514626 20739 0x608d9c2de5f0 WARN nvinfer gstnvinfer.cpp:912:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:10.778526709 20739 0x608d9c2de5f0 WARN nvinfer gstnvinfer.cpp:912:gst_nvinfer_start: error: Config file path: config_infer_primary_yoloV7.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
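For context, the TensorRT error above is the optimization-profile check rejecting a negative dimension: every value passed to `setDimensions` must be >= 0, so a profile built from a shape that still carries the ONNX dynamic placeholder -1 fails, which then cascades into "Explicit config dims is invalid". A toy Python reproduction of that check (nothing TensorRT-specific is assumed):

```python
# Toy version of the TensorRT check that fails above: setDimensions on an
# optimization profile requires every dimension to be non-negative.
def profile_dims_valid(dims):
    """Mirrors `std::all_of(dims.d, dims.d + dims.nbDims, x >= 0)`."""
    return all(d >= 0 for d in dims)

# A fully resolved shape (batch=2, 3x640x640 input) passes the check.
assert profile_dims_valid((2, 3, 640, 640))

# A shape with the dynamic-batch placeholder -1 still in it is rejected,
# which is what triggers "Explicit config dims is invalid".
assert not profile_dims_valid((-1, 3, 640, 640))
print("ok")
```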

**PERF:  {'stream0': 0.0}
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(912): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference: Config file path: config_infer_primary_yoloV7.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
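For reference, one way to rule out the on-the-fly engine build is to pre-build a dynamic-batch engine with trtexec and point `model-engine-file` at it. This is only a hedged sketch: the input tensor name `images` and the 640×640 shape are assumptions based on a typical YOLOv7 ONNX export, so check your own model's input with a tool such as Netron first.

```shell
# Build an FP32 engine whose optimization profile covers batch sizes 1..2.
# Assumes the ONNX input tensor is named "images" with shape -1x3x640x640.
trtexec --onnx=yolov7.onnx \
        --saveEngine=model_b2_gpu0_fp32.engine \
        --minShapes=images:1x3x640x640 \
        --optShapes=images:2x3x640x640 \
        --maxShapes=images:2x3x640x640
```

If this path is used, `batch-size=2` should also be set in the `[property]` group and `model-engine-file` updated to the new engine name.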

Are you using the sample I attached or your own model? If you are using your own model, there might be a problem with its dimensions. You will need to investigate that yourself.

Thanks.

The issue was resolved; it was traced back to a file-conversion error.

Now, I would like to understand how nvdsanalytics operates in conjunction with the PGIE and SGIE within the DeepStream pipeline.

Glad to hear that. If you have any other new questions, you can file a new topic.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.