Deepstream Test3

I am using a Jetson Xavier NX with JetPack 4.4.1, DeepStream 5.0.1, and CUDA 10.2.

I am attempting to modify deepstream_test_3.py to add a second model (an image classifier). Can I get a guide on how to do it, please? All the information I am finding covers only object detection, not classification. I managed to make a face recognition app based on the Python deepstream test 2 and it is working fine so far. Now I want to apply that same process to multi-stream input based on deepstream test 3. How can I go from there? This is the code based on deepstream test 2:
deepstream_test_2.py (16.9 KB)

Hey, I think you can refer to deepstream_python_apps/deepstream_test_2.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub; it provides a pipeline like …pgie -> sgie1 -> sgie2 -> sgie3…
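To make that concrete, here is a minimal sketch (not the exact sample code) of how a secondary classifier nvinfer element is typically created and added in the Python apps; the element name and the config file path below are placeholders:

# minimal sketch: create a secondary classifier (sgie) with nvinfer
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")
if not sgie:
    sys.stderr.write("Unable to create sgie\n")
# placeholder path; point this at your own classifier config
sgie.set_property("config-file-path", "classifier_config.txt")
pipeline.add(sgie)
# then link it after pgie (or after the tracker, if you use one):
# pgie.link(sgie)
# sgie.link(...)  # whatever element follows in your pipeline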


Thank you for the reply. But that is deepstream test 2; I want to change deepstream test 3 instead. The problem is that in deepstream test 3 the elements are linked like this:

streammux.link(queue1)
queue1.link(pgie)
pgie.link(queue2)
queue2.link(tiler)
tiler.link(queue3)
queue3.link(nvvidconv)
nvvidconv.link(queue4)
queue4.link(nvosd)
if is_aarch64():
    nvosd.link(queue5)
    queue5.link(transform)
    transform.link(sink)
else:
    nvosd.link(queue5)
    queue5.link(sink)

But in deepstream test 2, the elements are linked like this:

srcpad.link(sinkpad)
streammux.link(pgie)
pgie.link(tracker)
tracker.link(sgie1)
sgie1.link(sgie2)
sgie2.link(sgie3)
sgie3.link(nvvidconv)
nvvidconv.link(nvosd)
if is_aarch64():
    nvosd.link(transform)
    transform.link(sink)
else:
    nvosd.link(sink)

Which is different, and that is where I am stuck. How can I add the classifier to the pipeline according to deepstream test 3?

The queues optimize performance. They allow for each element to push data downstream as soon as it’s done without waiting for the downstream element to be ready.
Test2 is just a demo, so there is no queue plugin; you just need to follow test2's pipeline structure and implement it in test3.
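For reference, each queue in test3 is just a standard GStreamer queue element created like the other elements, for example:

queue1 = Gst.ElementFactory.make("queue", "queue1")
queue2 = Gst.ElementFactory.make("queue", "queue2")
pipeline.add(queue1)
pipeline.add(queue2)
# ...and so on for queue3/queue4/queue5, linked between the processing elements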

Which means I can do it like this?
streammux.link(queue1)
queue1.link(pgie)
pgie.link(sgie)
sgie.link(queue2)
queue2.link(tiler)
tiler.link(queue3)
queue3.link(nvvidconv)
nvvidconv.link(queue4)
queue4.link(nvosd)
if is_aarch64():
    nvosd.link(queue5)
    queue5.link(transform)
    transform.link(sink)
else:
    nvosd.link(queue5)
    queue5.link(sink)

Yeah, you can also add a tracker between pgie and sgie.
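If you do add a tracker, a rough sketch of the linking would be as below; the tracker properties (ll-lib-file, ll-config-file, tracker-width, tracker-height, ...) still need to be set from a tracker config, as deepstream_test_2.py does:

tracker = Gst.ElementFactory.make("nvtracker", "tracker")
pipeline.add(tracker)
# set the tracker properties from your tracker config file, as in test2

streammux.link(queue1)
queue1.link(pgie)
pgie.link(tracker)   # tracker sits between pgie and sgie
tracker.link(sgie)
sgie.link(queue2)
queue2.link(tiler)
# ...the rest of the test3 linking stays the same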

This is the edited deepstream_test_3.py following test_2. This is the file: deepstream_test_3.py (19.1 KB). But the app is only doing detection, not classification. Where did I go wrong?

Could you share your sgie config files?

classifier_config.txt (3.4 KB)

These 2 files below are for a working face recognition app based on deepstream test 2:
deepstream_test_2.py (16.8 KB)
facenet_utils.py (4.4 KB)

I think you should make sure the model file works in deepstream_test_2.py; it is simple to verify. Then you can try that model in your test3 sample.

Following the Gst-nvinfer — DeepStream 6.1.1 Release documentation for the config file, network-type=100 is not correct for a classifier model.
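For illustration, a rough sketch of the kind of [property] entries an sgie classifier config usually contains per that documentation; the model/label file paths and the ID values here are placeholders and must match your own pgie setup:

[property]
gpu-id=0
# 1 = classifier (100 was flagged above as incorrect for a classifier)
network-type=1
# 2 = secondary mode: operate on objects from the primary detector
process-mode=2
# must match the gie-unique-id of your pgie config (placeholder value)
operate-on-gie-id=1
gie-unique-id=2
# model and label file paths are placeholders
onnx-file=facenet.onnx
labelfile-path=labels.txt
classifier-threshold=0.5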

You mean the ONNX model?

You can check the document I shared; it has detailed info on each of the config items. It should not be hard to do.

The model works well with deepstream_test_2.py. Faces are recognized if the face data is available in the dataset; otherwise they are marked as unknown. When trying deepstream_test_3.py, only the face detection part works, even with 8 streams, but there is no classification, which means the detection model (YOLOv3 tiny) works well with the test3 sample. I tried to modify deepstream_test_3.py for the image classification part according to deepstream_test_2.py, but it seems to have no effect (no image classification). This was the result of the edited deepstream_test_3.py: https://forums.developer.nvidia.com/uploads/short-url/kJZWN5WbKIr1KMvUPBEJBDJcsiY.py

Hey, customer

I had pointed out that network-type=100 is not correct; you should set it to 1. If you really check the doc, all of this info is documented.

I would suggest you go through the doc I shared, at least for the configs you are using in your config files. It is helpful to read the doc and get more familiar with nvinfer and DeepStream.


Yes, I changed it to 1 already, but I am still facing the same problem with test 3. On test 2, there is no problem at all.

If you have already confirmed there is no error in your config, then you can add some logs or a breakpoint in gstnvinfer_meta_utils.cpp → attach_metadata_classifier to check whether the function is being called properly.
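Alternatively, from the Python side you can check the same thing with a pad probe on the sgie src pad and see whether any classifier metadata is actually attached to the detected objects. A rough sketch, assuming the standard pyds bindings used by the sample apps:

def sgie_src_pad_buffer_probe(pad, info, u_data):
    # Print a line whenever an object carries classifier metadata from the sgie
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            if obj_meta.classifier_meta_list is not None:
                print("classifier meta attached to object", obj_meta.object_id)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# attach it, for example:
# sgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0)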