Multiple video stream input to SSD MobileNet V2 TensorRT engine using DeepStream

Hi everyone,

I have built a TensorRT engine (saved with a .bin extension) from my custom-trained SSD MobileNet V2 object detection model on a Jetson Nano. Now I need to feed two or more camera input streams into it to detect objects. I think DeepStream would be a great option.

How can I do this using Deepstream?
I have searched a lot of articles and documentation online, but they all seem to use either a pretrained model (not a custom-trained one) or a model other than SSD MobileNet V2. Any assistance would be greatly appreciated.

  • Device: Jetson Nano
  • TensorRT version: 7.1
  • DeepStream version: 5.0

Thank You.

Hi,

We have done this with DeepStream 4.0.2 before; the procedure should be similar for DeepStream 5.0.
You can find the detailed steps in this comment:
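
In the meantime, here is a rough sketch of what a two-camera pipeline can look like through the GStreamer/DeepStream Python bindings. The device nodes, resolutions, batch size of 2, and display elements below are assumptions, not a definitive setup:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Two USB cameras are batched by nvstreammux and run through a single
# nvinfer instance that loads the SSD config file from the sample.
pipeline = Gst.parse_launch(
    "nvstreammux name=mux batch-size=2 width=1280 height=720 "
    "live-source=1 batched-push-timeout=40000 ! "
    "nvinfer config-file-path=config_infer_primary_ssd.txt batch-size=2 ! "
    "nvmultistreamtiler rows=1 columns=2 width=1280 height=360 ! "
    "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink "
    "v4l2src device=/dev/video0 ! videoconvert ! nvvideoconvert ! "
    "video/x-raw(memory:NVMM),format=NV12 ! mux.sink_0 "
    "v4l2src device=/dev/video1 ! videoconvert ! nvvideoconvert ! "
    "video/x-raw(memory:NVMM),format=NV12 ! mux.sink_1"
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)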

Thanks.

Thank You @AastaLLL,

I tried out your suggestions, and they worked for the pretrained network downloaded from this link:
http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz

But when I tried to use my custom-trained model, I got: ERROR: [TRT]: UffParser: Validator error: Cast: Unsupported operation _Cast.

I managed to solve this using the following GitHub repo:

I used the UFF generated from the above repo, along with your suggested modifications to the config file “config_infer_primary_ssd.txt”, and it worked.

Now my question is: how can I access the model's predictions in my Python code to do further processing?

Thank you.

Hi,

You can add Cast to the namespace_plugin_map to solve the error:
config.py

namespace_plugin_map = {
    ...
    "Cast": Input,
    ...
}
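
For context, this is roughly where that entry sits in the config.py used for the UFF conversion. The sketch below follows the stock SSD sample; the 300x300 Input shape is an assumption based on the standard SSD MobileNet V2 export:

import graphsurgeon as gs
import tensorflow as tf

# "Input" is the placeholder node that replaces the original image tensor;
# mapping "Cast" onto it drops the unsupported _Cast op from the graph.
Input = gs.create_node("Input",
                       op="Placeholder",
                       dtype=tf.float32,
                       shape=[1, 3, 300, 300])

namespace_plugin_map = {
    "Cast": Input,
    # ... keep the other entries (Preprocessor, Postprocessor, NMS, ...)
    # from the sample's config.py unchanged.
}

def preprocess(dynamic_graph):
    # convert-to-uff calls this hook when you pass -p config.py;
    # collapse the mapped nodes before the UFF conversion runs.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)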

The predictions are stored in the output buffer.
You can access them directly through the Python interface.
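
For example, with the pyds bindings (from deepstream_python_apps) you can attach a buffer probe to a pad downstream of nvinfer and walk the batch metadata. This is a minimal sketch; the element you probe and the nvosd variable name are assumptions from a typical pipeline:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    # Walk the batch metadata that nvinfer attaches to each buffer and
    # print every detected object.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            rect = obj_meta.rect_params
            print("stream", frame_meta.pad_index,
                  "class", obj_meta.class_id,
                  "confidence", obj_meta.confidence,
                  "bbox", rect.left, rect.top, rect.width, rect.height)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach the probe to the sink pad of the nvdsosd element; "nvosd" is
# whatever variable holds that element in your pipeline.
# osd_sink_pad = nvosd.get_static_pad("sink")
# osd_sink_pad.add_probe(Gst.PadProbeType.BUFFER,
#                        osd_sink_pad_buffer_probe, 0)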

Thanks

This worked when I put the number of classes + 1 in config.py during the conversion to UFF and also in the DeepStream config file, but it throws an error when I put the actual number of classes. My model is trained for 3 classes: everything works fine when I put classes=4, but it doesn't work when I put classes=3. Why is this happening?

The error I get during the UFF-to-engine conversion when classes=3 is:
#assertionnmsPlugin.cpp,249
Aborted (core dumped)

I’ll check it out.

Thank you.

Hi,
I am using 2 USB camera streams as input to DeepStream, but one camera stream lags behind the other by some delay. Why is this happening?

Thank you.

Hi,

This is related to the model implementation.
By default, you need to reserve one class for the background type.
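
Concretely, for a 3-class model the class count should be 4 both in config.py and in num-detected-classes in config_infer_primary_ssd.txt. Below is a sketch of the NMS plugin node; the other parameter values are copied from the stock SSD sample and should be treated as assumptions for your model:

import graphsurgeon as gs

NUM_CLASSES = 3 + 1   # 3 trained classes + 1 reserved background class

NMS = gs.create_plugin_node(
    name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,        # class id 0 is the background slot
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=NUM_CLASSES,     # must also match num-detected-classes
                                # in the DeepStream config file
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1)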

For the camera input, would you mind opening a separate issue for the latency problem?
Thanks.

Hi @AastaLLL,

OK, thank you very much for the help.

I’ll do it.