Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) 2080Ti
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7
• NVIDIA GPU Driver Version (valid for GPU only) 470.182.03
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I am trying to use the low-level tracker library based on the documentation (https://docs.nvidia.com/metropolis/deepstream/5.0/dev-guide/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.html#wwpID0E0R40HA) and some of the gst-nvtracker sources from DeepStream SDK 6.3.
I ported some of the functions from the example sources and ran them, but I hit an error in
NvMOT_Process, shown below:
tracker_block.cpp:88: virtual status_t TrackerBlock::updateObjects(Frame&): Assertion `external_containers_src_.size() == 1' failed.
I have never gotten the tracker running, so I think I am missing something, but I am not sure what, and it is hard to debug. Could you please take a look at my experimental code here
example.txt (10.5 KB)
and give some guidance to make this work?
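For reference, the rough call order into a low-level NvMOT tracker library looks like the pseudocode below. The entry-point names (NvMOT_Init, NvMOT_Process, NvMOT_DeInit) come from the gst-nvtracker plugin manual; the struct fields shown are assumptions and must be checked against nvdstracker.h in your DeepStream version.

```
// Pseudocode sketch only -- exact struct fields come from nvdstracker.h.
NvMOTConfig config = {};
config.computeConfig           = ...;   // e.g. CPU or GPU compute target
config.maxStreams              = 1;
config.perTransformBatchConfig = ...;   // color format, width/height, pitch
config.customConfigFilePath    = "tracker_config.yml";  // needed by NvDCF

NvMOTContextHandle ctx;
NvMOTConfigResponse resp;
NvMOT_Init(&config, &ctx, &resp);       // check resp.summaryStatus

for each frame:
    NvMOTProcessParams params = ...;    // frame buffer(s) + detector bboxes
    NvMOTTrackedObjBatch out  = ...;    // caller-allocated output lists
    NvMOT_Process(ctx, &params, &out);

NvMOT_DeInit(ctx);
```

If the assertion fires inside NvMOT_Process, a mismatch between the batch/transform configuration passed to NvMOT_Init and the buffers passed to NvMOT_Process is one plausible cause.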
Just for fun, I also tried this example code: KLT NvMOT usage - #4 by pshin, which was reported to work, and it gives me the same error.
Can you share more details about why you don’t use the DeepStream nvtracker plugin? Why do you need the low-level tracker library? If you refer to the gst-nvtracker sources in DeepStream 6.3, you need to run your test code on DeepStream 6.3 to keep the test code and the low-level library aligned.
Hi @kesong! Thanks for your quick reply.
Yes. I am trying to use this tracker in a ROS node, so the input is an image frame plus bbox information from a ROS topic; that is why I thought I had to use the low-level library. I wanted to use the app as-is, but I couldn’t find a clean way to combine the DeepStream library with a ROS node. Is there any way or example to do this?
I realized the library I was using was wrong. I am on DeepStream 5.1, and I had linked all the tracker libraries into my program (including libnvds_nvdcf.so). It seems each library exports the same function name NvMOT_Process, and they were conflicting somehow. When I linked only libnvds_mot_klt.so, the code here (KLT NvMOT usage - #4 by pshin) ran properly.
So I have two questions:

1. With the KLT tracker running via the example code in the link, I noticed the tracker output does not contain any tracked objects, even though we pass an input object bbox of 40,40,40,40. Do you have a hunch why this happens?
2. The KLT tracker runs in this example, but ultimately I want to use the NvDCF tracker. However, when I linked libnvds_nvdcf.so and ran the same code, it gave the error below:
OpenCV Error: Unknown error code -49 (Input file is empty) in cvOpenFileStorage, file /home/fangyul/Libraries/opencv-3.4.0/modules/core/src/persistence.cpp, line 4484
~~ CLOG[src/modules/NvDCF/NvDCF.cpp, NvDCF() @line 695]: !!![WARNING] Invalid low-level config file is provided. Will go ahead with default values
!![ERROR] No m_colorFormat specified
An exception occurred. No m_colorFormat specified
terminate called after throwing an instance of 'std::exception'
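The OpenCV "Input file is empty" error suggests NvDCF was handed a missing or empty config file path (the path is passed in via the config structure given to NvMOT_Init, e.g. its customConfigFilePath field). A minimal sketch of such a config file is below; apart from colorFormat, which is inferred from the "No m_colorFormat specified" error, the key names and values are assumptions, so the full tracker_config.yml shipped with the DeepStream 5.1 samples should be copied rather than hand-written:

```
%YAML:1.0
# Hypothetical minimal NvDCF low-level config -- use the tracker_config.yml
# from the DeepStream 5.1 samples for the full, authoritative key set.
colorFormat: 0          # inferred from the "No m_colorFormat specified" error
useUniqueID: 1
maxTargetsPerStream: 99
```

NvDCF parses this file with OpenCV's FileStorage (hence the %YAML:1.0 header and the cvOpenFileStorage error when the path is wrong).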
Could you tell me what I should change to use the NvDCF tracker with the example in the link?
Your help would be really appreciated. Thank you!
Thanks for the suggestion! That looks good if I only have camera streams as input, since it is coupled to a GStreamer pipeline like the original DeepStream app, but it wouldn’t work well for our case because our input is bounding boxes.
We are trying to detect traffic lights and track them with nvtracker from the DeepStream library. We have our own detection model for the traffic lights and we get the bbox output from it, which is why we are trying to use the low-level tracker library.
You can also integrate your model into the DeepStream nvinfer plugin. Then you can use the pipeline: … ! nvstreammux ! nvinfer ! nvtracker ! …
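Concretely, that pipeline might look like the gst-launch-1.0 sketch below. This is not runnable as-is: the input file, model config path, and tracker config path are placeholders, not values from this thread.

```
# Sketch only -- file names and config paths are placeholders.
gst-launch-1.0 filesrc location=sample.h264 ! h264parse ! nvv4l2decoder \
  ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 \
  ! nvinfer config-file-path=traffic_light_pgie_config.txt \
  ! nvtracker ll-lib-file=libnvds_nvdcf.so ll-config-file=tracker_config.yml \
  ! nvvideoconvert ! nvdsosd ! nveglglessink
```

With this setup nvinfer runs your custom detector and nvtracker loads the low-level library via its ll-lib-file / ll-config-file properties, so no direct calls into the NvMOT API are needed.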
That is good to know. I will try it. Thanks!