Config_infer_secondary_carcolor.txt failing

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - Tesla T4
• DeepStream Version - 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version - 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only) - 525.85.12
• Issue Type( questions, new requirements, bugs) - Query
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi,

I am trying to run an application that makes multiple inferences on a single input stream, using the gstreamer command line.
To do this, I simply added “nvinfer config-file-path=<path-to-config_infer_secondary_carcolor.txt>” after the “nvinfer config-file-path=<path-to-config_infer_primary.txt>” element. This fails, however, with the error -

0:00:16.539753498 122 0x56416f305d30 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 0]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1185> [UID = 0]: Unique ID not set
0:00:16.539787593 122 0x56416f305d30 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:16.539796726 122 0x56416f305d30 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Config file path: <path-to-config_infer_secondary_carmake.txt>, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
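
For context, the full pipeline I am launching has roughly the following shape (the config paths, the sink element and the input file are placeholders here, and my other options are omitted):

gst-launch-1.0 -e nvstreammux name=mux batch-size=1 width=1280 height=720 \
! nvinfer config-file-path=<path-to-config_infer_primary.txt> \
! nvinfer config-file-path=<path-to-config_infer_secondary_carcolor.txt> \
! nvvideoconvert ! nvdsosd ! nveglglessink \
filesrc location=<path-to-stream.h264> ! h264parse ! nvv4l2decoder ! queue ! mux.sink_0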

I have not made any changes to the config file itself. Could you please suggest whether my command is wrong or whether there is an issue with the file?

Thanks

Did you set gie-unique-id in the sgie config file? You can compare your config with the config files in deepstream-test2.
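
For reference, the relevant keys in the two config files typically look something like this (the ID values below are only an example; the sgie’s operate-on-gie-id has to match the pgie’s gie-unique-id):

# config_infer_primary.txt
[property]
gie-unique-id=1

# config_infer_secondary_carcolor.txt
[property]
gie-unique-id=2
operate-on-gie-id=1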

Which app are you using? What is the command line?

As yingliu mentioned, deepstream-test2 is the multiple-inference sample you want.
The command below, which uses the pgie and sgie configs from deepstream-test2, works fine.

gst-launch-1.0 -e nvstreammux name=mux batch-size=1 width=1280 height=720 \
! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_pgie_config.yml \
! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml tracker-width=640 tracker-height=384 \
! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_sgie1_config.yml \
! nvvideoconvert \
! "video/x-raw(memory:NVMM), format=RGBA" \
! nvdsosd ! nvvideoconvert ! nv3dsink filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 \
! h264parse ! nvv4l2decoder ! queue ! mux.sink_0

Thank you for the suggestion. The issue was indeed caused by the missing “gie-unique-id” value in the config file; after adding it, the error no longer appears.

However, when I run the above command adapted to my application, I do see the first inference and the tracker information, but not the second inference.
I did compare my local sgie config file with the deepstream-test2 sample’s and have provided all the required files. Is there something else I might be missing?

Are you using the sample h264 stream? Make sure the sample command works fine first.

It may be related to your tracker configuration. Can you upload some of your streams?

Found the issue. It seems the secondary inferences only appear once the object moves closer to the camera.
However, I observed that the second inference doesn’t appear at all if I remove the tracker. I haven’t set any tracker-specific options, so I’m confused about why I’m seeing this.

Does the bounding box always exist? The bbox depends on your model; the sample model only works well on the sample streams. If there is no bbox, the sgie can’t give a label. The output of the tracker is the input of the sgie.

Anyway, make sure pgie generates the bbox correctly.

Yes, the boxes appear even if I only run the primary model. Adding the tracker attaches an object ID to each object. If I keep both and then add the second model, I see its information as well.
However, if I run the second model without the tracker, the boxes still appear and identify the object, but the second inference is not made. Since I can see bounding boxes even with just the primary model, the issue should not appear after removing the tracker, right?

This description is incorrect.

1. The value of process-mode in dstest2_sgie1_config is 2.
It is a secondary gie; its input is a tensor (the detected object), not the video frame. You can refer to this document:
The tracker converts the pgie output into that input, so the tracker is necessary.

Secondary mode: Operates on objects added in the metadata by upstream components.

When the plugin is operating as a secondary classifier in async mode along with the tracker, it tries to improve performance by avoiding re-inferencing on the same objects in every frame. It does this by caching the classification output in a map with the object’s unique ID as the key. The object is inferred upon only when it is first seen in a frame (based on its object ID) or when the size (bounding box area) of the object increases by 20% or more. This optimization is possible only when the tracker is added as an upstream element.

2. If you modify the value of process-mode to 1, it can’t work properly due to model limitations.

The CarColor model supports tensor (object) input only; it does not work on full video frames. The relevant sgie config keys are sketched below.
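
For reference, the keys in dstest2_sgie1_config that control this behaviour look roughly like the following (yml form; the values are illustrative, check the file shipped with your DeepStream version):

property:
  gie-unique-id: 2          # unique ID of this sgie
  operate-on-gie-id: 1      # classify only objects detected by the pgie with this gie-unique-id
  process-mode: 2           # 2 = secondary mode, operate on objects instead of full frames
  classifier-async-mode: 1  # asynchronous classification, relies on the tracker's object IDs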

If you want to run the sgie without the tracker, modify dstest2_sgie1_config.yml:

classifier-async-mode: 1

It will work, as in the command line below:

gst-launch-1.0 -e nvstreammux name=mux batch-size=1 width=1280 height=720 \
! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_pgie_config.yml \
! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_sgie1_config.yml \
! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" \
! nvdsosd ! nvvideoconvert ! nv3dsink filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 \
! h264parse ! nvv4l2decoder ! queue ! mux.sink_0
