DeepStream 7.0 on Ubuntu 20.04: Identifying the Camera Source of Detections in a Multi-Camera People/Face Detection and Face Recognition Pipeline
I am currently working with DeepStream 7.0 on Ubuntu 20.04 and utilizing the deepstream-app pipeline. My setup includes the following components:
PGIE (Primary GIE) for people detection.
SGIE0 for face detection.
SGIE1 for face recognition.
The pipeline is functioning well for detecting people and faces, as well as for performing face recognition. However, I am facing an issue when handling multiple camera sources. Specifically, the output does not contain information related to the source_id, pad_id, or camera-id to identify which camera each detection is associated with.
As I understand it, after the streammux component, the frame metadata no longer retains the source_id or pad_id. However, I need to match the detected outputs (people, faces, and recognized faces) with their respective camera sources (i.e., associating each detection with the correct camera).
Could you please suggest the best approach to retain or associate these identifiers in the detection and recognition process? Any guidance on how to preserve or match these identifiers in a multi-camera setup would be greatly appreciated.
I’m working with the deepstream-app and facing an issue related to retaining the source_id after the streammux step. I understand that each input source (such as a camera or video file) gets a unique source_id, but after the data passes through streammux, the source_id is always reset to 0, making it difficult to associate detection or classifier results with the correct source.
I’ve read a lot of posts about this issue, but I’m still unclear about the proper way to handle it. Here’s a summary of what I’ve tried and what I’ve understood so far:
Issue: The source_id is correctly assigned at the source level but gets reset to 0 after passing through streammux. This affects the ability to link detections and classifier outputs to their respective sources.
Possible Approaches I’ve Considered:
Using nvdsosd: I considered visualizing the source_id on the output, but this doesn’t resolve the issue of retaining the ID for downstream processing.
Custom Metadata Handling: I thought about modifying how metadata is handled across elements, but I’m not sure how to effectively pass the source_id through streammux.
Adjusting streammux Settings: I’m exploring whether there are any settings within streammux that can help propagate the source_id better, but haven’t had much luck yet.
Post-Processing Detection/Classification Results: Another option I’m considering is manually tracking the source_id during post-processing, but this feels like a workaround rather than a proper solution.
Can anyone share insights on how to properly retain the source_id after streammux? Any examples or suggestions would be highly appreciated.
In the analytics_done_buf_prob function, I can successfully get the source_id, but I only have the analytics output at this point, not the full inference output.
In the gie_processing_done_buf_prob function, I get source_id = 0, which is where I get the complete output of all inferences (Primary + Secondary).
I need to obtain the source_id in the gie_processing_done_buf_prob function, as that is when I have access to all my inference results. However, I am not sure how to retain or propagate the source_id from the earlier probe function (where I get the analytics output) so that I can use it when processing the full inference output later in the pipeline.
The NvDsBatchMeta is created by nvstreammux; before nvstreammux there are no NvDsBatchMeta or NvDsFrameMeta structures. Where and how did you get the source_id before nvstreammux?
In the gie_processing_done_buf_prob function, I noticed that frame_meta->source_id is always set to 0. I would like to capture the source ID and other relevant outputs before the Tiler stage and after SGIE2 processing. Could you provide guidance on how to achieve this? I believe this approach will allow me to access the correct source_id and all associated outputs.
Hi,
How can I add a probe with NVGSTDS_ELEM_ADD_PROBE on the tiler's src pad, so I can process all the output from the PGIE, SGIE1, and SGIE2 without losing frame_meta->source_id? In deepstream-app there is only a sink-pad probe for the tiler, which is the gie_processing_done_buf_prob function. Is that right?
The “gie_processing_done_buf_prob” probe runs after nvmultistreamtiler, which combines the frames of the batch into a single frame, so getting source_id 0 in gie_processing_done_buf_prob is expected.
Since you already get the batched metadata in analytics_done_buf_prob, why do you need to get the batch metadata again in gie_processing_done_buf_prob?
In analytics_done_buf_prob the output does not include the SGIE1 and SGIE2 results. I want to match each face and also know which camera it came from; that is why I ask. In deepstream-app.c there is only a sink probe on the tiler, so I cannot get the camera_id for the SGIE2 output.
The SGIE outputs are available in analytics_done_buf_prob too.
deepstream_app.c (72.8 KB)
You can refer to the attached deepstream_app.c file. Replace /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app/deepstream_app.c with it and rebuild the deepstream-app sample app.
Then run the following command from the /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app folder. With the source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt configuration, the SGIE1 and SGIE2 output appears in analytics_done_buf_prob in the log:
./../../sources/apps/sample_apps/deepstream-app/deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/streamscl
0:00:08.436696671 3597 0x559d5a482f80 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 5]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/../../models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 20x1x1
0:00:08.671260478 3597 0x559d5a482f80 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 5]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/../../models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt_b16_gpu0_int8.engine
0:00:08.682507680 3597 0x559d5a482f80 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<secondary_gie_1> [UID 5]: Load new model:/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/config_infer_secondary_vehiclemake.txt sucessfully
0:00:16.752997745 3597 0x559d5a482f80 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 4]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/../../models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 6x1x1
0:00:16.998987156 3597 0x559d5a482f80 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 4]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/../../models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine
0:00:17.002902762 3597 0x559d5a482f80 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<secondary_gie_0> [UID 4]: Load new model:/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:25.056920505 3597 0x559d5a482f80 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet18_trafficcamnet.etlt_b4_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 4x34x60
0:00:25.300355272 3597 0x559d5a482f80 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet18_trafficcamnet.etlt_b4_gpu0_int8.engine
0:00:25.303567913 3597 0x559d5a482f80 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/config_infer_primary.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
**PERF: FPS 0 (Avg) FPS 1 (Avg) FPS 2 (Avg) FPS 3 (Avg)
**PERF: 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)
** INFO: <bus_callback:291>: Pipeline ready
** INFO: <bus_callback:277>: Pipeline running
get classification for GIE 4
get classification for GIE 5
get classification for GIE 4
get classification for GIE 5
… (the alternating "get classification for GIE 4" / "get classification for GIE 5" lines repeat for the rest of the run)
Thanks for the reply. I tried it, but I didn't get labels from the SGIE2 classifier when called from analytics_done_buf_prob, whereas I do get labels from gie_processing_done_buf_prob. Maybe my process for reading the labels was wrong; I will try again and come back later.
Thank you.