How to add a detection model for future expansion

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version
DeepStream 5.1
• JetPack Version (valid for Jetson only)
JetPack 4.5.1
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
questions
• I have deployed the current target-detection model successfully, and I want to add more categories by adding an additional target-detection model. How can I add these additional models? The additional categories are not secondary classes, just new classes.
I’m not clear whether to use an SGIE or run multiple models in parallel.
Hope for your suggestion.

Do you mean you want to use multiple detection models in one pipeline? If so, you can just use multiple PGIEs and assign different class IDs to the different classes.

sources → streammux → PGIE1 → PGIE2 →
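For illustration, a minimal sketch of that layout in Python might look like the following (element names and config-file paths are placeholders, following the pattern of the DeepStream Python samples):

```python
#!/usr/bin/env python3
# Sketch: two detectors (PGIEs) run back to back on the same batched stream.
# Config file names below are placeholders for your own model configs.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("multi-detector-pipeline")

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
pgie1 = Gst.ElementFactory.make("nvinfer", "primary-inference-1")
pgie2 = Gst.ElementFactory.make("nvinfer", "primary-inference-2")

# Each nvinfer gets its own config and a unique id so the object metadata
# produced by the two models can be told apart downstream.
# Both configs should set process-mode=1 (primary) so PGIE2 also runs on
# the full frame rather than on PGIE1's detected objects.
pgie1.set_property("config-file-path", "pgie1_config.txt")   # model A (placeholder)
pgie2.set_property("config-file-path", "pgie2_config.txt")   # model B (placeholder)
pgie1.set_property("unique-id", 1)
pgie2.set_property("unique-id", 2)

for elem in (streammux, pgie1, pgie2):
    pipeline.add(elem)

# sources -> streammux (source linking omitted) -> pgie1 -> pgie2 -> ...
streammux.link(pgie1)
pgie1.link(pgie2)
```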

Yes. Now I am trying to use the Python code. Is there any sample I can check for the details?
I want to run the stream through several detection models covering different categories, save the original image for later recheck, and combine the detection information and send a message to the Kafka service.

Will you save every frame of the stream?

All the functions you mentioned are covered by different samples; you can read them and try to combine the parts into one new application.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Python_Sample_Apps.html#id1
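For the Kafka part specifically, the relevant pieces are the nvmsgconv and nvmsgbroker elements used in deepstream-test4. A minimal sketch of that branch is below; the connection string, topic, and library path are placeholders you would adapt to your own setup:

```python
# Sketch of the message branch from the deepstream-test4 pattern (Kafka via nvmsgbroker).
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

msgconv = Gst.ElementFactory.make("nvmsgconv", "nvmsg-converter")
msgbroker = Gst.ElementFactory.make("nvmsgbroker", "nvmsg-broker")

# nvmsgconv turns NvDsEventMsgMeta attached in a probe into a JSON payload.
msgconv.set_property("config", "dstest4_msgconv_config.txt")  # schema config from test4
msgconv.set_property("payload-type", 0)                       # 0 = DeepStream schema

# nvmsgbroker publishes the payload using the Kafka protocol adaptor.
# Adjust the library path to your DeepStream install.
msgbroker.set_property("proto-lib",
                       "/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so")
msgbroker.set_property("conn-str", "localhost;9092")     # host;port (placeholder)
msgbroker.set_property("topic", "deepstream-events")     # placeholder topic name
msgbroker.set_property("sync", False)
```

In deepstream-test4 a tee after the OSD feeds this branch through a queue, while a pad probe attaches NvDsEventMsgMeta to the frames you want to report; that probe is a natural place to also save the corresponding image.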

No, I want to save frames at moments of my own choosing, for example saving a frame when sending the message to Kafka. So what I want is:

  1. Add similar models for later expansion. I want to know which would be suitable: PGIE + SGIE, or PGIE + PGIE1?
  2. Send a message to the Kafka service and save the images simultaneously. I have tried to add the saving code from deepstream-imagedata-multistream.py to deepstream-test4.py, but it causes the following error:

And my updated code is also shown below:


It seems you did not put the code in a place where the video format is RGBA. Where did you put the code?
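For reference, deepstream-imagedata-multistream guarantees RGBA at its probe point by inserting a conversion step before the tiler. A rough sketch of that part (element names are illustrative):

```python
# Sketch: force the buffer into RGBA before the element whose pad carries the
# image-saving probe, as deepstream-imagedata-multistream does before its tiler.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor-to-rgba")
capsfilter = Gst.ElementFactory.make("capsfilter", "rgba-caps")
capsfilter.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))

# ... add to the pipeline and link: nvinfer -> nvvidconv -> capsfilter -> tiler ...
# A probe placed downstream of this capsfilter sees RGBA buffers that
# pyds.get_nvds_buf_surface() can expose as a NumPy array.
```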

Thanks for your reply. Here is my code:

Also, here is the official code I copied from:

And I have tried to run the original sample code; errors were found:

Could you tell me the details? I am really confused.

It seems you are referring to the code here, right? deepstream_python_apps/deepstream_imagedata-multistream.py at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com)

If so, please pay attention to the function “tiler_sink_pad_buffer_probe(pad, info, u_data)”. It is the pad probe function on the nvmultistreamtiler sink pad, which sits after an nvvideoconvert that converts the video into RGBA format. So in this function the video has already been converted into RGBA format; that is why we can use “cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)” to convert and copy the video data.

The code you posted is only a fragment; I cannot tell which function it is in or how that function works in the pipeline. Please check your code yourself.
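To make the above concrete, here is a stripped-down sketch of such a probe, modelled on the sample’s tiler_sink_pad_buffer_probe; the save condition and file name are placeholders:

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import cv2
import numpy as np
import pyds

def tiler_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # This only works if the buffer is already RGBA (see the caps filter above).
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_copy = np.array(n_frame, copy=True, order='C')
        frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)

        # Placeholder condition: in a combined app, save only on the frames
        # where you also attach NvDsEventMsgMeta for the Kafka message.
        cv2.imwrite("frame_%d.jpg" % frame_meta.frame_num, frame_copy)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

On Jetson, recent versions of the Python bindings also provide pyds.unmap_nvds_buf_surface() to release the mapped surface once the copy is done.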

Yes, the code I referred to is the one you mentioned. I did add the code in the function tiler_src_pad_buffer_probe.

And please note that I also checked the official code of deepstream-imagedata-multistream.py, and got the error which I have shown before.

Please upgrade to JetPack 5.0.2 and DeepStream 6.1.1.

To make it clear, let me summarize the situation as follows:

  1. I tried to add the image-saving code to deepstream-test4.py and got an error: Segmentation fault (core dumped).
  2. If I comment out the image-saving code, deepstream-test4.py runs successfully.
  3. I also ran the official code of deepstream-imagedata-multistream and still got an error: Internal data stream error.

Please upgrade JetPack and DeepStream to latest versions.

I have tried the official code of deepstream-imagedata-multistream for DeepStream 6.1.1, but still found the same problem.
I pulled the Docker image deepstream-6.1.1-samples and installed the plugins following the documents in deepstream_python_apps/README.md at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com).
I also installed some Python packages as required, such as numpy and opencv-python, then ran the code with the command: python3 deepstream_imagedata-multistream.py rtsp://admin:a1234567@192.168.49.94/h264/ch1/sub/av_stream imgs
The error is shown below:

By the way, I also checked the code of deepstream_test1_rtsp_in_rtsp_out.py, and it works…

There has been no update from you for a while, so we assume this is no longer an issue. Hence we are closing this topic. If you need further support, please open a new one. Thanks.

I’ve tried deepstream-imagedata-multistream with DeepStream 6.1.1 on Jetson. It works well.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.