The bounding box only appears on every nth frame when I set interval = n (n != 0).
How can I keep the bounding boxes on every frame?
If you set the interval parameter, inference is skipped on the in-between frames, so those frames carry no bbox metadata. If you want to keep the bounding boxes on every frame, please do not set that parameter.
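For reference, interval lives in the nvinfer primary GIE config file. A minimal sketch (the surrounding keys are omitted; the value shown is the default):

```
# nvinfer primary GIE config (snippet)
[property]
# interval=N skips inference on N frames between inferred ones.
# interval=0 (the default) runs inference on every frame,
# so every frame carries bbox metadata.
interval=0
```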
Is there an alternative way to draw the bbox, such as a callback function, to implement this feature?
I hope NVIDIA can provide a similar function officially; it is quite troublesome to implement yourself.
Here is my implementation method as pseudo code, for reference only:
class ObjMetaCache:
    border_width: int
    text: str
    ...

class FrameMetaCache:
    source_id: int
    obj_meta_caches: list[ObjMetaCache]
    ...

last_frame_boxes = {}  # source_id -> FrameMetaCache of the last inferred frame

def pgie_pad_buffer_probe(pad, info):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        if frame_meta.bInferDone:  # inference ran on this frame
            frame_cache = FrameMetaCache()
            frame_cache.source_id = frame_meta.source_id
            frame_cache.obj_meta_caches = []
            ...
            l_obj = frame_meta.obj_meta_list
            while l_obj is not None:
                # save your bbox objects
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                bbox_cache = ObjMetaCache()
                bbox_cache.border_width = obj_meta.rect_params.border_width
                ...
                frame_cache.obj_meta_caches.append(bbox_cache)
                l_obj = l_obj.next
            # save your frame objects
            last_frame_boxes[frame_meta.source_id] = frame_cache
        else:  # not an inference frame: replay the cached boxes
            frame_cache = last_frame_boxes.get(frame_meta.source_id)
            if frame_cache is not None:
                for bbox_cache in frame_cache.obj_meta_caches:
                    obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
                    obj_meta.rect_params.border_width = bbox_cache.border_width
                    ...
                    pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
Currently, we support adding display meta to the frame to draw the bbox. You can refer to the display_meta usage in our sample code and just modify display_meta.rect_params[0].
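A minimal, untested sketch of that display_meta approach (the probe function name, its attachment point, and the rectangle values are all illustrative assumptions; pyds and Gst are only importable inside a DeepStream environment, so the import is guarded here):

```python
try:
    import pyds
    from gi.repository import Gst
except ImportError:  # pyds/Gst only exist inside a DeepStream install
    pyds = Gst = None

def draw_box_probe(pad, info):
    """Attach one display_meta rectangle per frame (coords are illustrative)."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_rects = 1
        rect = display_meta.rect_params[0]
        rect.left, rect.top = 100.0, 100.0      # hypothetical bbox position
        rect.width, rect.height = 200.0, 150.0  # hypothetical bbox size
        rect.border_width = 3
        rect.border_color.set(1.0, 0.0, 0.0, 1.0)  # opaque red
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

The probe would typically be attached to the nvdsosd sink pad so the rectangles are composited onto every frame, inferred or not.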
Or you can just add a tracker plugin after the pgie; the tracker adds the bbox metadata itself, including on frames where inference was skipped.
Is there any demo code available for reference?
Could you attach your whole pipeline here? In theory, all you need to do is add a tracker plugin after the pgie plugin.
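For illustration, such a pipeline looks roughly like this in gst-launch form (element ordering is the point here; the library and config file names are assumptions based on the stock DeepStream tracker layout):

```
... ! nvstreammux ... ! nvinfer config-file-path=yolov8n_infer_primary.txt \
    ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so \
                ll-config-file=config_tracker_NvDCF_perf.yml \
    ! nvvideoconvert ! nvdsosd ! <sink>
```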
I have deepstream-YOLO configurations:
A: jetson-fpv/utils/dsyolo/yolov8n_infer_primary.txt at main · SnapDragonfly/jetson-fpv · GitHub
==> If I set interval = n (n != 0), then the bounding box only appears on every nth frame.
Any suggestions?
You can try to add a tracker in your source_config_yolov8n.txt file.
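For deepstream-app config files, the tracker is enabled with a [tracker] section; a minimal sketch (the library path follows the stock DeepStream install, and the yml file name is one of the sample tracker configs shipped under samples/configs/deepstream-app/; values are illustrative):

```
[tracker]
enable=1
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# choose an IOU / NvSORT / NvDCF yml depending on the accuracy/speed trade-off
ll-config-file=config_tracker_NvDCF_perf.yml
```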
I think I’m close to the objective: when nvDCF is enabled, the bounding boxes are always there.
But I got the output below, and the FPS is low.
**PERF: 22.51 (22.27)
gstnvtracker: Unable to acquire a user meta buffer. Try increasing user-meta-pool-size
**PERF: 42.01 (41.23)
Here is the configuration: jetson-fpv/utils/dsyolo/source_config_yolov8n_nvDCF.txt at main · SnapDragonfly/jetson-fpv · GitHub
Please help! Any ideas on how to improve performance? I need to receive 1080p@60FPS streaming video for object tracking.
EDIT: If I want to track only three of the objects in labels.txt, how do I ignore the other objects?
EDIT2: I tried the DeepStream nvDCF tracker; it doesn’t always show bounding boxes.
Adding a tracker will inevitably slow down performance.
If you want to keep the bbox without that cost, you can record the bbox coordinates yourself and add them back to the metadata. You can refer to deepstream_imagedata-multistream_redaction.py to learn how to draw the bbox on the image.
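The redaction sample maps each frame into a NumPy array (via pyds.get_nvds_buf_surface) and then draws on it directly. The drawing step itself can be sketched with plain NumPy; the helper name and the green border color below are my own choices, not from the sample:

```python
import numpy as np

def draw_bbox(frame, left, top, width, height, color=(0, 255, 0), border=2):
    """Draw a hollow rectangle on an HxWx3 uint8 image, in place."""
    right, bottom = left + width, top + height
    frame[top:top + border, left:right] = color        # top edge
    frame[bottom - border:bottom, left:right] = color  # bottom edge
    frame[top:bottom, left:left + border] = color      # left edge
    frame[top:bottom, right - border:right] = color    # right edge
    return frame

# a blank 1080p frame standing in for the mapped NVMM surface
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
draw_bbox(frame, left=100, top=50, width=200, height=150)
```

In the real probe, `frame` would be the array returned for the current frame, and the coordinates would come from your cached bbox metadata.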
Why does interval = 5 in DeepStream with nvDCF cause the bounding boxes to flicker, while in the DeepStream-YOLO framework they do not?
If I set interval = 5 in the DeepStream-YOLO framework’s configuration file, will it not affect the behavior of drawing bounding boxes on every frame?
You can try that with just detecting three classes of objects. But as I said before, the tracker will inevitably slow down the performance.
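On restricting detection to three classes: if (purely for illustration) the wanted classes are ids 0, 1 and 2 in labels.txt, nvinfer’s filter-out-class-ids key can discard the rest. A sketch, with the ids to drop assumed:

```
[property]
# semicolon-separated class ids to discard from the detector output
filter-out-class-ids=3;4;5
```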
I have tried with our deepstream-test2 sample; there is no flicker issue. Please try it out using the same configuration file.
Yes. You can draw anything on the image yourself without being affected by nvinfer.
Which DS version are you using? I’m using DS6.3.
I’m running the sample with DS 7.1 on A40.
Thanks. I’ll try the test2 sample with the same configuration on DS 6.3 and get back to you later.
BTW, I have found that my DS 6.3 program runs OK on JetPack 5 but doesn’t work on JetPack 6.2. Are any changes needed to upgrade from DS 6.3 to DS 7.1?
It seems OK with DS 7.1.
BTW, is it OK with DS 6.3?
Yes. But we recommend you use the latest version.
OK, I’m upgrading the system to JetPack 6.2, but there are quite a few issues when upgrading from JetPack 5.1.4.
I used to use deepstream-app -c source_config_yolov8n.txt to get an RTP video source on DeepStream 6.3 / JetPack 5.1.4, but I can’t get it working with the same command and config on deepstream-app 7.1. ==> It’s a black screen now. Why? Anything I should change?

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
[tiled-display]
enable=1
rows=1
columns=1
width=1920
height=1080
gpu-id=0
#nvbuf-memory-type
#(0): nvbuf-mem-default - Default memory allocated, specific to particular plat…
Any ideas?

version commit c038530ebf718e6867c4458c3e439406020732ff (HEAD -> master, origin/master, origin/HEAD)
Author: Dustin Franklin <dustinf@nvidia.com>
Date: Wed Oct 16 06:56:03 2024 -0400
updates for TRT10
------
Software part of jetson-stats 4.3.1 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Jetson Orin Nano Developer Kit - Jetpack 6.2 [L4T 36.4.3]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
- P-Number: p3767-0005
- Module: NVIDIA Jets…
If you want to use DeepStream 7.1, we recommend Jetpack 6.1. Please refer to our Jetson model Platform and OS Compatibility to install the corresponding version.