I am using a Jetson Xavier NX with JetPack 4.4.1, DeepStream 5.0.1, and CUDA 10.2.
I am attempting to modify deepstream_test_3.py so that I can add a second model (an image classifier). Can I get a guide on how to do it, please? All the information I can find covers only object detection, not classification. I managed to make a face recognition app based on Python deepstream test 2, and it is working fine so far. Now I want to apply that same process to multi-stream input based on deepstream test 3. How can I go from there? This is the code based on deepstream test 2: deepstream_test_2.py (16.9 KB)
Thank you for the reply, but that is for deepstream test 2. I want to change it to follow deepstream test 3. The problem is that in deepstream test 3, the elements are linked like this:

streammux.link(queue1)
queue1.link(pgie)
pgie.link(queue2)
queue2.link(tiler)
tiler.link(queue3)
queue3.link(nvvidconv)
nvvidconv.link(queue4)
queue4.link(nvosd)
if is_aarch64():
    nvosd.link(queue5)
    queue5.link(transform)
    transform.link(sink)
else:
    nvosd.link(queue5)
    queue5.link(sink)
But in deepstream test 2, the elements are linked like this:

srcpad.link(sinkpad)
streammux.link(pgie)
pgie.link(tracker)
tracker.link(sgie1)
sgie1.link(sgie2)
sgie2.link(sgie3)
sgie3.link(nvvidconv)
nvvidconv.link(nvosd)
if is_aarch64():
    nvosd.link(transform)
    transform.link(sink)
else:
    nvosd.link(sink)
The two are different, and that is where I am stuck. How can I add the classifier to the pipeline following deepstream test 3?
The queues optimize performance. They allow each element to push data downstream as soon as it is done, without waiting for the downstream element to be ready.
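As a plain-Python analogy (this is not the DeepStream API, just an illustration of the decoupling idea), a queue lets the producing side keep pushing buffers while a slower consuming side drains them at its own pace:

```python
# Plain-Python analogy of what a GStreamer queue element does:
# the producer pushes without waiting for the slower consumer.
import queue
import threading
import time

buf = queue.Queue()          # plays the role of the queue element
consumed = []

def producer():
    for i in range(5):
        buf.put(i)           # returns immediately; producer is not blocked

def consumer():
    for _ in range(5):
        item = buf.get()     # downstream pulls a buffer when it is ready
        time.sleep(0.01)     # simulate a slower downstream element
        consumed.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)              # buffers arrive downstream in order
```

The producer finishes almost instantly even though the consumer is slow, which is the same decoupling the queue elements give the pipeline.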
Test2 is just a demo, so there is no queue plugin in it. You just need to follow test2's pipeline and implement it in test3.
Which means I can do it like this?
streammux.link(queue1)
queue1.link(pgie)
pgie.link(sgie)
sgie.link(queue2)
queue2.link(tiler)
tiler.link(queue3)
queue3.link(nvvidconv)
nvvidconv.link(queue4)
queue4.link(nvosd)
if is_aarch64():
    nvosd.link(queue5)
    queue5.link(transform)
    transform.link(sink)
else:
    nvosd.link(queue5)
    queue5.link(sink)
This is deepstream_test_3.py edited to follow test_2. Here is the file: deepstream_test_3.py (19.1 KB). But the app is only doing detection, no classification. Where did I go wrong?
I think you should first make sure the model file works in deepstream_test2.py; it is simple to verify. Then you can try that model in your test3 sample.
The model works well with deepstream_test2.py. Faces are recognized if their data is available in the dataset; otherwise they are marked as unknown. When trying deepstream_test3.py, only the face detection part works, even with 8 streams, but there is no classification, which means the detection model (YOLOv3-tiny) works well with the test3 sample. I tried to modify the image classification part of deepstream_test3.py according to deepstream_test2.py, but it seems to have no effect (no image classification). This was the result of the edited deepstream_test3.py: https://forums.developer.nvidia.com/uploads/short-url/kJZWN5WbKIr1KMvUPBEJBDJcsiY.py
I would suggest you go through the doc I shared, at least for the configs you are using in your config files. It is helpful to read the doc and get more familiar with nvinfer and DS.
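For reference, the nvinfer `[property]` keys that control whether a secondary classifier actually runs on the primary detector's objects look roughly like this. This is only a sketch: the file paths, threshold, and id values are placeholders, and `operate-on-gie-id` must match the `gie-unique-id` set in your own pgie config:

```
# Hypothetical sgie config sketch (paths and ids are placeholders)
[property]
gpu-id=0
# run as a secondary (per-object) inference, not on full frames
process-mode=2
# network-type=1 marks the network as a classifier
network-type=1
# must match gie-unique-id in the primary detector's config
operate-on-gie-id=1
# unique id for this classifier itself
gie-unique-id=2
classifier-threshold=0.5
model-engine-file=../../models/your_classifier.engine
labelfile-path=labels.txt
```

If `process-mode` or `operate-on-gie-id` is wrong, the sgie element sits in the pipeline but never attaches classification metadata, which matches the "detection only" symptom.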
If you have already confirmed there is no error in your config, then you can add some logging or a breakpoint in gstnvinfer_meta_utils.cpp → attach_metadata_classifier to check whether the function is called properly.
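For the breakpoint approach, a debug session could look roughly like this. This is only a sketch: it assumes a debug build of the gst-nvinfer plugin is installed, and the application path and stream URI are placeholders to adjust for your setup:

```
$ gdb --args python3 deepstream_test_3.py file:///path/to/stream.mp4
(gdb) break attach_metadata_classifier
(gdb) run
```

If the breakpoint is never hit while objects are being detected, the classifier's output is never being attached as metadata, which points back at the sgie configuration rather than the Python pipeline code.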