Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) : Jetson Nano
• DeepStream Version : 6.0
• JetPack Version (valid for Jetson only) : 4.6
• TensorRT Version : 8.2
• NVIDIA GPU Driver Version (valid for GPU only) :
• Issue Type( questions, new requirements, bugs) : question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I want to display the confidence value of each detected object. Currently only the label is shown with the bbox. Expected: label + confidence.
Is there any implemented example of this NvDsInfer API that I can refer to?
NvDsInfer — DeepStream Version: 6.1 documentation
How can I print the FPS and inference timing on the rendered output video? Currently only the frame number is printed.
Is there any way to print or save the raw metadata from the gst_buffer? I tried:
frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
print('\n-----printing frame meta-----')
But it actually prints only pointers…
Hi @soundarrajan, you can write or draw anything you want via the NvDsDisplayMeta struct.
You can refer to deepstream_test1_app.c:
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, ...)
First, get the data you want, then set it on the "display_meta".
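To make that concrete, here is a minimal Python sketch in the style of deepstream_test_1.py's osd_sink_pad_buffer_probe. The field names follow the pyds bindings; treat this as a starting point to verify against your DeepStream version, not the exact sample code:

```python
def osd_sink_pad_buffer_probe(pad, info, u_data):
    # Sketch of an OSD sink-pad probe that draws custom text via
    # NvDsDisplayMeta, modeled on deepstream_test_1.py (pyds bindings).
    import pyds
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Acquire a display-meta slot from the pool and fill one text label.
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        txt = display_meta.text_params[0]
        txt.display_text = "Frame {}".format(frame_meta.frame_num)
        txt.x_offset, txt.y_offset = 10, 12
        txt.font_params.font_name = "Serif"
        txt.font_params.font_size = 12
        txt.font_params.font_color.set(1.0, 0.0, 0.0, 1.0)  # red text
        txt.set_bg_clr = 1
        txt.text_bg_clr.set(0.0, 0.0, 0.0, 0.5)             # translucent bg
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```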
I think you are not getting what I'm trying to say…
1) Expected output:
I can see a Python API to get detectionConfidence (NvDsInferObjectDetectionInfo — DeepStream Version: 6.1 documentation), but I couldn't find any sample implementation to refer to.
2) FPS should be displayed like the image below (the FPS is printed in red)
3) Metadata save
All the data is carried in the GST_BUFFER and we probe it in a callback, right?
I want to save whatever raw metadata is in the GST_BUFFER.
Any updates on my queries?
Please let me know if any inputs or details are required from my end…
Hi @soundarrajan, did you run our demo from the link below, or your own code?
For the first one, could you check the Python demo from the link below (cluster_and_fill_detection_output_nms):
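As a supplement, a hedged sketch of reading obj_meta.confidence in a Python pad probe and appending it to the on-screen label. The format_label helper is our own, not an SDK API, and assigning text_params.display_text this way should be verified against your pyds version:

```python
def format_label(name, confidence):
    # Hypothetical helper: compose "label confidence" text for the OSD.
    return "{} {:.2f}".format(name, confidence)

def pgie_src_pad_buffer_probe(pad, info, u_data):
    # Sketch: append the detection confidence to each object's OSD label.
    # Field names follow the pyds bindings; verify against your version.
    import pyds
    from gi.repository import Gst

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Replace the default label text with "label confidence".
            obj_meta.text_params.display_text = format_label(
                obj_meta.obj_label, obj_meta.confidence)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```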
For the second one:
We do not have FPS and inference-timing statistics at present. In principle, you can draw any data you obtain onto the picture.
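Since DeepStream does not expose an FPS statistic here, one option is to compute it yourself in the probe and draw the result with NvDsDisplayMeta. The FPSMonitor class below is our own sketch, not an SDK API:

```python
import time

class FPSMonitor:
    # Hypothetical helper: average FPS over the frames seen so far.
    def __init__(self):
        self._start = None
        self._frames = 0

    def tick(self, now=None):
        """Record one frame; return the average FPS so far."""
        now = time.monotonic() if now is None else now
        if self._start is None:
            self._start = now
        self._frames += 1
        elapsed = now - self._start
        return self._frames / elapsed if elapsed > 0 else 0.0

# In an OSD probe you would call monitor.tick() once per frame and put the
# returned value into a NvDsDisplayMeta text_params entry (e.g. with a red
# font color), in the same way deepstream_test_1.py draws the frame number.
```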
For the third one:
You can refer to https://forums.developer.nvidia.com/t/access-frame-pointer-in-deepstream-app/79838#5375214.
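For reference, printing individual metadata fields (rather than the struct itself, which only shows a pointer) could look like the sketch below. describe_object is a hypothetical helper, and the field names follow the pyds bindings:

```python
def describe_object(class_id, label, left, top, width, height):
    # Hypothetical formatter for one detected object.
    return "class={} label={} bbox=({:.0f},{:.0f},{:.0f},{:.0f})".format(
        class_id, label, left, top, width, height)

def print_frame_meta(batch_meta):
    # Walk the batch and print plain field values instead of struct pointers.
    import pyds

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print("frame={} source={} objects={}".format(
            frame_meta.frame_num, frame_meta.source_id,
            frame_meta.num_obj_meta))
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj_meta.rect_params
            print(describe_object(obj_meta.class_id, obj_meta.obj_label,
                                  r.left, r.top, r.width, r.height))
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
```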
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.