Cannot introduce a delay in a DeepStream pipeline - pipeline stalls when adding "min-threshold-buffers=5" to a queue

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Xavier NX
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.3
• TensorRT Version: 8.5.2.2-1+cuda11.4
• Issue Type (questions, new requirements, bugs): question / bug

I built a custom DeepStream app based on deepstream-lpr-app.
I stream live video from a camera and need to delay it for a short time (about 0.2 seconds, or roughly 5 frames) before passing it to nvosd.
To implement the delay I added the min-threshold-buffers=5 property to a queue placed before nvosd.
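For reference, this is roughly what the change looks like (a minimal sketch in Python via PyGObject; the actual app is C-based, and "osd_queue" is just an illustrative name for the queue in front of nvosd):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Queue placed immediately before nvosd; in the C app this is done with
# gst_element_factory_make() / g_object_set().
osd_queue = Gst.ElementFactory.make("queue", "osd_queue")
# Hold buffers back until at least 5 are queued, i.e. roughly a 5-frame delay.
osd_queue.set_property("min-threshold-buffers", 5)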

However, when I run the app, the pipeline stalls.
Here is the output:

artem@ubuntu:~/Projects/deepstream_lpr_app/deepstream-lpr-app$ sudo GST_DEBUG=3 ./deepstream-lpr-app 1 4 0 infer ../../../Downloads/output30fps.mp4 cam_output
./deepstream-lpr-app: /lib/aarch64-linux-gnu/libjansson.so.4: no version information available (required by ./deepstream-lpr-app)
use_nvinfer_server:0, use_triton_grpc:0
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
NppStatus: 0
nBufferSize: 14400
Now playing: 1
Opening in BLOCKING MODE 
0:00:00.685327580  4844 0xaaaac6e94530 WARN                    alsa control.c:1379:snd_ctl_open_noupdate: alsalib error: Invalid CTL UMC1820_2

Using winsys: x11 
Opening in BLOCKING MODE 
0:00:00.782614768  4844 0xaaaac6e94530 WARN                    v4l2 gstv4l2object.c:2420:gst_v4l2_object_add_interlace_mode:0xaaaac6f6e0e0 Failed to determine interlace mode
0:00:00.782765747  4844 0xaaaac6e94530 WARN                    v4l2 gstv4l2object.c:2420:gst_v4l2_object_add_interlace_mode:0xaaaac6f6e0e0 Failed to determine interlace mode
0:00:00.782860340  4844 0xaaaac6e94530 WARN                    v4l2 gstv4l2object.c:2420:gst_v4l2_object_add_interlace_mode:0xaaaac6f6e0e0 Failed to determine interlace mode
0:00:00.782950678  4844 0xaaaac6e94530 WARN                    v4l2 gstv4l2object.c:2420:gst_v4l2_object_add_interlace_mode:0xaaaac6f6e0e0 Failed to determine interlace mode
0:00:00.783110297  4844 0xaaaac6e94530 WARN                    v4l2 gstv4l2object.c:4561:gst_v4l2_object_probe_caps:<nvvideo-h264enc:src> Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1
0:00:00.785040765  4844 0xaaaac6e94530 WARN                    alsa pcm_hw.c:1715:snd_pcm_hw_open: alsalib error: open '/dev/snd/pcmC0D0c' failed (-77): File descriptor in bad state
0:00:00.855187863  4844 0xaaaac66b2060 FIXME                default gstutils.c:3980:gst_pad_create_stream_id_internal:<alsasrc:src> Creating random stream-id, consider implementing a deterministic way of creating a stream-id
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:06.412536451  4844 0xaaaac6e94530 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 3]: deserialized trt engine from :/home/artem/Projects/deepstream_lpr_app/models/LP/LPR/kz_lprnet_baseline18_b16_fp16.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT image_input     3x48x96         min: 1x3x48x96       opt: 4x3x48x96       Max: 16x3x48x96      
1   OUTPUT kINT32 tf_op_layer_ArgMax 24              min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT tf_op_layer_Max 24              min: 0               opt: 0               Max: 0               

0:00:06.499445715  4844 0xaaaac6e94530 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 3]: Use deserialized engine model: /home/artem/Projects/deepstream_lpr_app/models/LP/LPR/kz_lprnet_baseline18_b16_fp16.engine
0:00:06.551272758  4844 0xaaaac6e94530 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary-infer-engine2> [UID 3]: Load new model:lpr_config_sgie_kz.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:09.790131171  4844 0xaaaac6e94530 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 2]: deserialized trt engine from :/home/artem/Projects/deepstream_lpr_app/models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_deployable.etlt_b16_gpu0_int8.engine
INFO: [FullDims Engine Info]: layers num: 5
0   INPUT  kFLOAT Input           3x480x640       min: 1x3x480x640     opt: 16x3x480x640    Max: 16x3x480x640    
1   OUTPUT kINT32 BatchedNMS      1               min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT BatchedNMS_1    200x4           min: 0               opt: 0               Max: 0               
3   OUTPUT kFLOAT BatchedNMS_2    200             min: 0               opt: 0               Max: 0               
4   OUTPUT kFLOAT BatchedNMS_3    200             min: 0               opt: 0               Max: 0               

0:00:09.872040307  4844 0xaaaac6e94530 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 2]: Use deserialized engine model: /home/artem/Projects/deepstream_lpr_app/models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_deployable.etlt_b16_gpu0_int8.engine
0:00:09.950204350  4844 0xaaaac6e94530 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary-infer-engine1> [UID 2]: Load new model:lpd_yolov4-tiny_us.txt sucessfully
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:11.319139314  4844 0xaaaac6e94530 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/home/artem/Projects/deepstream_lpr_app/models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b4_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:00:11.399304353  4844 0xaaaac6e94530 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /home/artem/Projects/deepstream_lpr_app/models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b4_gpu0_int8.engine
0:00:11.421710913  4844 0xaaaac6e94530 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:trafficamnet_config.txt sucessfully
0:00:11.424676344  4844 0xaaaac6e94530 WARN                 v4l2src gstv4l2src.c:695:gst_v4l2src_query:<source> Can't give latency since framerate isn't fixated !
Running...
NvMMLiteOpen : Block : BlockType = 4 
===== NvVideo: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
0:00:11.563041374  4844 0xaaaac66b2120 WARN          v4l2bufferpool gstv4l2bufferpool.c:1114:gst_v4l2_buffer_pool_start:<nvvideo-h264enc:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:11.692266714  4844 0xaaaad5603700 WARN                 v4l2src gstv4l2src.c:914:gst_v4l2src_create:<source> Timestamp does not correlate with any clock, ignoring driver timestamps
Frame number = 0 mean intensity = 28.412987 light_intensity = 13.663758 gain = 7 exposure = 8 
H264: Profile = 100, Level = 0 
NVMEDIA: Need to set EMC bandwidth : 5744000 
NVMEDIA: Need to set EMC bandwidth : 5744000 
NvVideo: bBlitMode is set to TRUE 
0:00:13.962572989  4844 0xffff1001c6a0 WARN          v4l2bufferpool gstv4l2bufferpool.c:1565:gst_v4l2_buffer_pool_dqbuf:<nvvideo-h264enc:pool:src> Driver should never set v4l2_buffer.field to ANY
0:00:13.966456389  4844 0xffff1001c6a0 FIXME               basesink gstbasesink.c:3246:gst_base_sink_default_event:<nvvideo-renderer2> stream-start event without group-id. Consider implementing group-id handling in the upstream elements
0:00:13.967469496  4844 0xffff1001c6a0 WARN                   qtmux gstqtmux.c:2981:gst_qt_mux_start_file:<qtmux> Robust muxing requires reserved-moov-update-period to be set
0:00:14.058342796  4844 0xaaaac66b2180 WARN                basesink gstbasesink.c:3003:gst_base_sink_is_too_late:<nvvideo-renderer> warning: A lot of buffers are being dropped.
0:00:14.058513647  4844 0xaaaac66b2180 WARN                basesink gstbasesink.c:3003:gst_base_sink_is_too_late:<nvvideo-renderer> warning: There may be a timestamping problem, or this computer is too slow.

Here is my pipeline:

When I set min-threshold-buffers to a lower value (e.g. 3), the app works but drops some buffers and prints these warnings:

0:00:46.807550191 6063 0xaaab0d594580 WARN basesink gstbasesink.c:3003:gst_base_sink_is_too_late: warning: A lot of buffers are being dropped.
0:00:46.807675057 6063 0xaaab0d594580 WARN basesink gstbasesink.c:3003:gst_base_sink_is_too_late: warning: There may be a timestamping problem, or this computer is too slow.

How can I fix the pipeline so that it delays frames before nvosd without stalling or dropping frames?

Why increase latency? It doesn’t seem to have any real benefit.

Also, dropped frames are usually caused by overload and poor performance, so the pipeline must drop frames to continue running.

Can you check the CPU/GPU usage first?

I use a set of microphones and overlay data from them onto the video using NVOSD. Since sound propagates more slowly than light, audio data arrives with a delay compared to the video. Therefore, I need to delay the video by a few frames to overlay the audio data onto the frame that corresponds to the moment the sound was actually emitted.

Concerning CPU/GPU usage, here is what tegrastats returns before and after launching the app:

artem@ubuntu:~$ sudo tegrastats
07-30-2025 18:02:21 RAM 3280/6833MB (lfb 31x4MB) SWAP 0/3417MB (cached 0MB) CPU [2%@1422,2%@1420,10%@1420,4%@1420,off,off] EMC_FREQ 0%@1600 GR3D_FREQ 3%@[306] VIC_FREQ 115 APE 150 AUX@54C CPU@53.5C thermal@53.7C AO@54C GPU@53.5C iwlwifi@52C PMIC@50C VDD_IN 4606mW/4606mW VDD_CPU_GPU_CV 529mW/529mW VDD_SOC 1259mW/1259mW
07-30-2025 18:02:22 RAM 3280/6833MB (lfb 31x4MB) SWAP 0/3417MB (cached 0MB) CPU [8%@1183,15%@1190,30%@1186,17%@1190,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 0%@[306] VIC_FREQ 115 APE 150 AUX@54C CPU@53.5C thermal@53.7C AO@54C GPU@53.5C iwlwifi@52C PMIC@50C VDD_IN 4721mW/4663mW VDD_CPU_GPU_CV 569mW/549mW VDD_SOC 1300mW/1279mW
07-30-2025 18:02:23 RAM 3280/6833MB (lfb 31x4MB) SWAP 0/3417MB (cached 0MB) CPU [6%@1190,4%@1190,1%@1190,0%@1190,off,off] EMC_FREQ 0%@1600 GR3D_FREQ 0%@[306] VIC_FREQ 115 APE 150 AUX@54C CPU@53.5C thermal@53.3C AO@54C GPU@53.5C iwlwifi@52C PMIC@50C VDD_IN 4484mW/4603mW VDD_CPU_GPU_CV 407mW/501mW VDD_SOC 1261mW/1273mW
07-30-2025 18:02:24 RAM 3280/6833MB (lfb 31x4MB) SWAP 0/3417MB (cached 0MB) CPU [4%@1190,1%@1191,1%@1189,2%@1188,off,off] EMC_FREQ 0%@1600 GR3D_FREQ 0%@[306] VIC_FREQ 115 APE 150 AUX@54C CPU@53.5C thermal@53.7C AO@54C GPU@53.5C iwlwifi@56C PMIC@50C VDD_IN 4443mW/4563mW VDD_CPU_GPU_CV 366mW/467mW VDD_SOC 1261mW/1270mW
07-30-2025 18:02:25 RAM 3340/6833MB (lfb 31x4MB) SWAP 0/3417MB (cached 0MB) CPU [23%@1420,67%@1419,5%@1426,4%@1416,off,off] EMC_FREQ 0%@1600 GR3D_FREQ 0%@[306] NVENC 499 VIC_FREQ 115 APE 150 AUX@54.5C CPU@54C thermal@53.7C AO@54C GPU@53.5C iwlwifi@54C PMIC@50C VDD_IN 5128mW/4676mW VDD_CPU_GPU_CV 936mW/561mW VDD_SOC 1341mW/1284mW
07-30-2025 18:02:26 RAM 3388/6833MB (lfb 31x4MB) SWAP 0/3417MB (cached 0MB) CPU [15%@1420,39%@1419,3%@1421,4%@1420,off,off] EMC_FREQ 0%@1600 GR3D_FREQ 6%@[306] VIC_FREQ 0%@115 APE 150 AUX@54C CPU@54C thermal@54.2C AO@54C GPU@53.5C iwlwifi@55C PMIC@50C VDD_IN 5006mW/4731mW VDD_CPU_GPU_CV 814mW/603mW VDD_SOC 1341mW/1293mW
07-30-2025 18:02:27 RAM 3538/6833MB (lfb 22x4MB) SWAP 0/3417MB (cached 0MB) CPU [9%@1420,97%@1420,2%@1420,5%@1420,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 0%@[306] VIC_FREQ 115 APE 150 AUX@54C CPU@54C thermal@54C AO@54C GPU@53.5C iwlwifi@55C PMIC@50C VDD_IN 5128mW/4788mW VDD_CPU_GPU_CV 936mW/651mW VDD_SOC 1300mW/1294mW
07-30-2025 18:02:28 RAM 3561/6833MB (lfb 22x4MB) SWAP 0/3417MB (cached 0MB) CPU [7%@1420,98%@1420,1%@1420,2%@1421,off,off] EMC_FREQ 0%@1600 GR3D_FREQ 19%@[306] VIC_FREQ 115 APE 150 AUX@54C CPU@54C thermal@53.85C AO@54C GPU@54C iwlwifi@55C PMIC@50C VDD_IN 5088mW/4825mW VDD_CPU_GPU_CV 936mW/686mW VDD_SOC 1300mW/1295mW
07-30-2025 18:02:29 RAM 3660/6833MB (lfb 26x4MB) SWAP 0/3417MB (cached 0MB) CPU [8%@1417,98%@1420,10%@1418,12%@1419,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 3%@[306] VIC_FREQ 115 APE 150 AUX@54C CPU@54C thermal@54C AO@54.5C GPU@53.5C iwlwifi@54C PMIC@50C VDD_IN 5413mW/4890mW VDD_CPU_GPU_CV 1178mW/741mW VDD_SOC 1341mW/1300mW
07-30-2025 18:02:30 RAM 3811/6833MB (lfb 17x4MB) SWAP 0/3417MB (cached 0MB) CPU [7%@1421,99%@1420,2%@1420,8%@1420,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 3%@[306] VIC_FREQ 115 APE 150 AUX@54.5C CPU@54.5C thermal@53.85C AO@54C GPU@53.5C iwlwifi@53C PMIC@50C VDD_IN 5210mW/4922mW VDD_CPU_GPU_CV 976mW/764mW VDD_SOC 1341mW/1304mW
07-30-2025 18:02:31 RAM 3849/6833MB (lfb 12x4MB) SWAP 0/3417MB (cached 0MB) CPU [9%@1417,96%@1423,4%@1420,4%@1420,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 0%@[306] VIC_FREQ 0%@115 APE 150 AUX@54.5C CPU@54.5C thermal@53.85C AO@54C GPU@54C iwlwifi@52C PMIC@50C VDD_IN 5291mW/4956mW VDD_CPU_GPU_CV 1099mW/795mW VDD_SOC 1341mW/1307mW
07-30-2025 18:02:32 RAM 3850/6833MB (lfb 12x4MB) SWAP 0/3417MB (cached 0MB) CPU [8%@1415,87%@1421,1%@1420,11%@1418,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 1%@[204] VIC_FREQ 115 APE 150 AUX@54C CPU@54C thermal@54C AO@54C GPU@54C iwlwifi@52C PMIC@50C VDD_IN 5210mW/4977mW VDD_CPU_GPU_CV 1017mW/813mW VDD_SOC 1300mW/1307mW
07-30-2025 18:02:33 RAM 3935/6833MB (lfb 8x4MB) SWAP 0/3417MB (cached 0MB) CPU [10%@1422,3%@1420,50%@1408,48%@1419,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 1%@[204] VIC_FREQ 115 APE 150 AUX@54C CPU@54C thermal@53.85C AO@54.5C GPU@53.5C iwlwifi@52C PMIC@50C VDD_IN 5128mW/4988mW VDD_CPU_GPU_CV 936mW/823mW VDD_SOC 1341mW/1309mW
07-30-2025 18:02:34 RAM 4192/6833MB (lfb 13x1MB) SWAP 0/3417MB (cached 0MB) CPU [9%@1420,6%@1420,96%@1420,13%@1420,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 39%@[204] VIC_FREQ 115 APE 150 AUX@54C CPU@54C thermal@53.85C AO@54.5C GPU@53.5C iwlwifi@52C PMIC@50C VDD_IN 5332mW/5013mW VDD_CPU_GPU_CV 1017mW/836mW VDD_SOC 1341mW/1312mW
07-30-2025 18:02:35 RAM 4280/6833MB (lfb 12x1MB) SWAP 0/3417MB (cached 0MB) CPU [9%@1421,6%@1421,99%@1420,5%@1420,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 1%@[306] VIC_FREQ 115 APE 150 AUX@54C CPU@54C thermal@54.2C AO@54.5C GPU@54C iwlwifi@56C PMIC@50C VDD_IN 5169mW/5023mW VDD_CPU_GPU_CV 976mW/846mW VDD_SOC 1300mW/1311mW
07-30-2025 18:02:36 RAM 4443/6833MB (lfb 3x1MB) SWAP 0/3417MB (cached 0MB) CPU [29%@1420,36%@1419,74%@1419,22%@1419,off,off] EMC_FREQ 2%@1600 GR3D_FREQ 5%@[306] NVENC 499 NVENC1 499 VIC_FREQ 0%@115 APE 150 AUX@54.5C CPU@54.5C thermal@54.2C AO@54.5C GPU@54C iwlwifi@52C PMIC@50C VDD_IN 5779mW/5071mW VDD_CPU_GPU_CV 1341mW/877mW VDD_SOC 1503mW/1323mW
07-30-2025 18:02:37 RAM 4494/6833MB (lfb 1x1MB) SWAP 0/3417MB (cached 0MB) CPU [76%@1421,31%@1419,53%@1420,54%@1418,off,off] EMC_FREQ 2%@1600 GR3D_FREQ 0%@[408] VIC_FREQ 0%@115 APE 150 AUX@54.5C CPU@54.5C thermal@54.2C AO@54.5C GPU@54C iwlwifi@52C PMIC@50C VDD_IN 5902mW/5119mW VDD_CPU_GPU_CV 1422mW/909mW VDD_SOC 1503mW/1333mW
07-30-2025 18:02:38 RAM 4533/6833MB (lfb 1x1MB) SWAP 1/3417MB (cached 0MB) CPU [72%@1419,12%@1420,29%@1420,35%@1421,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 0%@[408] VIC_FREQ 10%@115 APE 150 AUX@54.5C CPU@54.5C thermal@54.2C AO@54.5C GPU@54C iwlwifi@52C PMIC@50C VDD_IN 5576mW/5145mW VDD_CPU_GPU_CV 1178mW/924mW VDD_SOC 1463mW/1340mW
07-30-2025 18:02:39 RAM 4604/6833MB (lfb 4x256kB) SWAP 1/3417MB (cached 0MB) CPU [70%@1418,67%@1419,83%@1421,77%@1419,off,off] EMC_FREQ 3%@1600 GR3D_FREQ 6%@[408] NVENC 499 VIC_FREQ 0%@115 APE 150 AUX@54.5C CPU@55C thermal@54.35C AO@54.5C GPU@54C iwlwifi@52C PMIC@50C VDD_IN 6583mW/5220mW VDD_CPU_GPU_CV 1869mW/973mW VDD_SOC 1663mW/1357mW
07-30-2025 18:02:40 RAM 4625/6833MB (lfb 1x512kB) SWAP 1/3417MB (cached 0MB) CPU [84%@1420,88%@1420,79%@1419,90%@1419,off,off] EMC_FREQ 3%@1600 GR3D_FREQ 0%@[408] VIC_FREQ 115 APE 150 AUX@55C CPU@55C thermal@54.5C AO@54.5C GPU@54C iwlwifi@55C PMIC@50C VDD_IN 6583mW/5289mW VDD_CPU_GPU_CV 1950mW/1022mW VDD_SOC 1582mW/1369mW
07-30-2025 18:02:41 RAM 4625/6833MB (lfb 1x512kB) SWAP 1/3417MB (cached 0MB) CPU [83%@1420,92%@1420,88%@1419,81%@1420,off,off] EMC_FREQ 3%@1600 GR3D_FREQ 24%@[408] VIC_FREQ 115 APE 150 AUX@54.5C CPU@55C thermal@54.35C AO@54.5C GPU@54C iwlwifi@52C PMIC@50C VDD_IN 6502mW/5346mW VDD_CPU_GPU_CV 1910mW/1064mW VDD_SOC 1541mW/1377mW
07-30-2025 18:02:42 RAM 4639/6833MB (lfb 1x256kB) SWAP 1/3417MB (cached 0MB) CPU [41%@1421,30%@1420,30%@1420,24%@1421,off,off] EMC_FREQ 3%@1600 GR3D_FREQ 3%@[306] VIC_FREQ 115 APE 150 AUX@54.5C CPU@54.5C thermal@54.5C AO@54.5C GPU@54C iwlwifi@58C PMIC@50C VDD_IN 5372mW/5347mW VDD_CPU_GPU_CV 976mW/1060mW VDD_SOC 1503mW/1383mW
07-30-2025 18:02:43 RAM 4597/6833MB (lfb 1x512kB) SWAP 1/3417MB (cached 0MB) CPU [9%@1190,17%@1188,5%@1266,11%@1267,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 0%@[306] VIC_FREQ 115 APE 150 AUX@54.5C CPU@54.5C thermal@54.2C AO@54C GPU@54C iwlwifi@53C PMIC@50C VDD_IN 4884mW/5327mW VDD_CPU_GPU_CV 569mW/1039mW VDD_SOC 1422mW/1384mW
07-30-2025 18:02:44 RAM 4598/6833MB (lfb 3x512kB) SWAP 1/3417MB (cached 0MB) CPU [26%@1190,19%@1191,23%@1192,30%@1189,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 0%@[306] VIC_FREQ 115 APE 150 AUX@54.5C CPU@54C thermal@54.35C AO@54.5C GPU@53.5C iwlwifi@53C PMIC@50C VDD_IN 5088mW/5317mW VDD_CPU_GPU_CV 773mW/1028mW VDD_SOC 1463mW/1387mW
07-30-2025 18:02:45 RAM 4595/6833MB (lfb 3x512kB) SWAP 1/3417MB (cached 0MB) CPU [10%@1189,13%@1190,6%@1190,5%@1190,off,off] EMC_FREQ 0%@1600 GR3D_FREQ 6%@[306] VIC_FREQ 115 APE 150 AUX@54.5C CPU@54C thermal@54.05C AO@54.5C GPU@54C iwlwifi@52C PMIC@50C VDD_IN 4843mW/5298mW VDD_CPU_GPU_CV 569mW/1009mW VDD_SOC 1422mW/1389mW
07-30-2025 18:02:46 RAM 4591/6833MB (lfb 1x2MB) SWAP 2/3417MB (cached 0MB) CPU [25%@1420,13%@1420,19%@1420,7%@1420,off,off] EMC_FREQ 0%@1600 GR3D_FREQ 3%@[306] VIC_FREQ 115 APE 150 AUX@54.5C CPU@54.5C thermal@54.05C AO@54.5C GPU@54C iwlwifi@56C PMIC@50C VDD_IN 5006mW/5287mW VDD_CPU_GPU_CV 691mW/997mW VDD_SOC 1463mW/1392mW
07-30-2025 18:02:48 RAM 4596/6833MB (lfb 1x2MB) SWAP 5/3417MB (cached 0MB) CPU [19%@1342,19%@1343,25%@1343,20%@1343,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 0%@[306] VIC_FREQ 115 APE 150 AUX@54.5C CPU@54.5C thermal@54.2C AO@54.5C GPU@54C iwlwifi@52C PMIC@50C VDD_IN 5088mW/5280mW VDD_CPU_GPU_CV 732mW/987mW VDD_SOC 1463mW/1394mW
07-30-2025 18:02:49 RAM 4596/6833MB (lfb 1x2MB) SWAP 5/3417MB (cached 0MB) CPU [20%@1190,17%@1190,19%@1189,16%@1190,off,off] EMC_FREQ 1%@1600 GR3D_FREQ 0%@[306] VIC_FREQ 115 APE 150 AUX@54.5C CPU@54C thermal@54.05C AO@54.5C GPU@53.5C iwlwifi@52C PMIC@50C VDD_IN 4965mW/5268mW VDD_CPU_GPU_CV 651mW/975mW VDD_SOC 1422mW/1395mW
07-30-2025 18:02:50 RAM 4597/6833MB (lfb 1x2MB) SWAP 5/3417MB (cached 0MB) CPU [8%@1188,3%@1190,0%@1190,3%@1190,off,off] EMC_FREQ 0%@1600 GR3D_FREQ 0%@[306] VIC_FREQ 115 APE 150 AUX@54.5C CPU@54C thermal@54.05C AO@54.5C GPU@54C iwlwifi@52C PMIC@50C VDD_IN 4680mW/5248mW VDD_CPU_GPU_CV 447mW/957mW VDD_SOC 1422mW/1396mW

I am a beginner at DeepStream development, so I may not be interpreting this correctly, but it doesn't look to me as though the CPU or GPU is overloaded.

This is probably not the problem; it is usually caused by a wrong PTS.

This may be due to the timestamp. Try setting the filesink sync property to false.
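For example (a minimal sketch in Python; the sink element and its name in your app may differ, and in C the equivalent is g_object_set(sink, "sync", FALSE, NULL)):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Illustrative file sink; use whatever sink your pipeline already has.
sink = Gst.ElementFactory.make("filesink", "file-sink")
sink.set_property("location", "output.mp4")
# sync=false: write buffers as they arrive instead of waiting on their timestamps.
sink.set_property("sync", False)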

I've set the filesink sync property to false and got another warning: "Timestamp does not correlate with any clock, ignoring driver timestamps". The pipeline stalled as well.

I’ve done a quick search but didn’t figure out how to fix this.

Here is the full output:

artem@ubuntu:~/Projects/deepstream_lpr_app/deepstream-lpr-app$ sudo GST_DEBUG=3 ./deepstream-lpr-app 1 4 0 infer ../../../Downloads/output30fps.mp4 cam_output
./deepstream-lpr-app: /lib/aarch64-linux-gnu/libjansson.so.4: no version information available (required by ./deepstream-lpr-app)
use_nvinfer_server:0, use_triton_grpc:0
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
NppStatus: 0
nBufferSize: 14400
Now playing: 1
Opening in BLOCKING MODE 
0:00:00.668879771  5517 0xaaab09ae7330 WARN                    alsa control.c:1379:snd_ctl_open_noupdate: alsalib error: Invalid CTL UMC1820_2

Using winsys: x11 
Opening in BLOCKING MODE 
0:00:00.762041218  5517 0xaaab09ae7330 WARN                    v4l2 gstv4l2object.c:2420:gst_v4l2_object_add_interlace_mode:0xaaab09bc13f0 Failed to determine interlace mode
0:00:00.762189028  5517 0xaaab09ae7330 WARN                    v4l2 gstv4l2object.c:2420:gst_v4l2_object_add_interlace_mode:0xaaab09bc13f0 Failed to determine interlace mode
0:00:00.762297126  5517 0xaaab09ae7330 WARN                    v4l2 gstv4l2object.c:2420:gst_v4l2_object_add_interlace_mode:0xaaab09bc13f0 Failed to determine interlace mode
0:00:00.762415240  5517 0xaaab09ae7330 WARN                    v4l2 gstv4l2object.c:2420:gst_v4l2_object_add_interlace_mode:0xaaab09bc13f0 Failed to determine interlace mode
0:00:00.762608458  5517 0xaaab09ae7330 WARN                    v4l2 gstv4l2object.c:4561:gst_v4l2_object_probe_caps:<nvvideo-h264enc:src> Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1
0:00:00.764570341  5517 0xaaab09ae7330 WARN                    alsa pcm_hw.c:1715:snd_pcm_hw_open: alsalib error: open '/dev/snd/pcmC0D0c' failed (-77): File descriptor in bad state
0:00:00.813326279  5517 0xaaab09305060 FIXME                default gstutils.c:3980:gst_pad_create_stream_id_internal:<alsasrc:src> Creating random stream-id, consider implementing a deterministic way of creating a stream-id
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:06.606012648  5517 0xaaab09ae7330 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 3]: deserialized trt engine from :/home/artem/Projects/deepstream_lpr_app/models/LP/LPR/kz_lprnet_baseline18_b16_fp16.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT image_input     3x48x96         min: 1x3x48x96       opt: 4x3x48x96       Max: 16x3x48x96      
1   OUTPUT kINT32 tf_op_layer_ArgMax 24              min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT tf_op_layer_Max 24              min: 0               opt: 0               Max: 0               

0:00:06.719610153  5517 0xaaab09ae7330 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 3]: Use deserialized engine model: /home/artem/Projects/deepstream_lpr_app/models/LP/LPR/kz_lprnet_baseline18_b16_fp16.engine
0:00:06.769757854  5517 0xaaab09ae7330 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary-infer-engine2> [UID 3]: Load new model:lpr_config_sgie_kz.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:10.600116041  5517 0xaaab09ae7330 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 2]: deserialized trt engine from :/home/artem/Projects/deepstream_lpr_app/models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_deployable.etlt_b16_gpu0_int8.engine
INFO: [FullDims Engine Info]: layers num: 5
0   INPUT  kFLOAT Input           3x480x640       min: 1x3x480x640     opt: 16x3x480x640    Max: 16x3x480x640    
1   OUTPUT kINT32 BatchedNMS      1               min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT BatchedNMS_1    200x4           min: 0               opt: 0               Max: 0               
3   OUTPUT kFLOAT BatchedNMS_2    200             min: 0               opt: 0               Max: 0               
4   OUTPUT kFLOAT BatchedNMS_3    200             min: 0               opt: 0               Max: 0               

0:00:10.681265162  5517 0xaaab09ae7330 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 2]: Use deserialized engine model: /home/artem/Projects/deepstream_lpr_app/models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_deployable.etlt_b16_gpu0_int8.engine
0:00:10.745713540  5517 0xaaab09ae7330 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary-infer-engine1> [UID 2]: Load new model:lpd_yolov4-tiny_us.txt sucessfully
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:12.079953950  5517 0xaaab09ae7330 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/home/artem/Projects/deepstream_lpr_app/models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b4_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:00:12.160647801  5517 0xaaab09ae7330 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /home/artem/Projects/deepstream_lpr_app/models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b4_gpu0_int8.engine
0:00:12.181571034  5517 0xaaab09ae7330 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:trafficamnet_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 4 
===== NvVideo: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
0:00:12.333247432  5517 0xaaab09305120 WARN          v4l2bufferpool gstv4l2bufferpool.c:1114:gst_v4l2_buffer_pool_start:<nvvideo-h264enc:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:12.458641164  5517 0xaaab17c8eaa0 WARN                 v4l2src gstv4l2src.c:914:gst_v4l2src_create:<source> Timestamp does not correlate with any clock, ignoring driver timestamps

1. This is the GPU usage of test2 on AGX Orin. GR3D_FREQ is the GPU usage percentage. I set the power profile to MAXN (GPU frequency 1.3 GHz), and the GPU usage is about 30-40%.

Please try setting it to MAXN and observe the GPU usage while the program is running.

Your Xavier NX GPU runs at 200-400 MHz, but the usage is low, so this data may be problematic. You can also try using jtop (https://github.com/rbonghi/jetson_stats).

GR3D_FREQ 22%@[1300,1290] NVENC off NVDEC 7%@115 NVJPG off NVJPG1 off VIC 44%@115 OFA off NVDLA0 off NVDLA1 off PVA0_FREQ off APE 174 cpu@49.406C soc2@46.218C soc0@45.781C gpu@44.093C tj@49.5C soc1@44.5C VDD_GPU_SOC 7268mW/7125mW VDD_CPU_CV 1530mW/1530mW VIN_SYS_5V0 4543mW/4543mW
GR3D_FREQ 34%@[1300,1287] NVENC off NVDEC 46%@115 NVJPG off NVJPG1 off VIC 46%@128 OFA off NVDLA0 off NVDLA1 off PVA0_FREQ off APE 174 cpu@49.312C soc2@46.343C soc0@45.75C gpu@44.343C tj@49.312C soc1@44.656C VDD_GPU_SOC 7268mW/7141mW VDD_CPU_CV 1530mW/1530mW VIN_SYS_5V0 4543mW/4543mW
GR3D_FREQ 43%@[1300,1300] NVENC off NVDEC 16%@115 NVJPG off NVJPG1 off VIC 25%@115 OFA off NVDLA0 off NVDLA1 off PVA0_FREQ off APE 174 cpu@49.5C soc2@46.218C soc0@45.718C gpu@44.281C tj@49.437C soc1@44.593C VDD_GPU_SOC 7268mW/7153mW VDD_CPU_CV 1530mW/1530mW VIN_SYS_5V0 4543mW/4543mW
GR3D_FREQ 27%@[1300,1300] NVENC off NVDEC 51%@115 NVJPG off NVJPG1 off VIC 59%@140 OFA off NVDLA0 off NVDLA1 off PVA0_FREQ off APE 174 cpu@49.625C soc2@46.25C soc0@45.968C gpu@44.593C tj@49.625C soc1@44.531C VDD_GPU_SOC 7268mW/7164mW VDD_CPU_CV 1530mW/1530mW VIN_SYS_5V0 4543mW/4543mW

2. Your v4l2src is set to 60 fps, but the nvstreammux batched-push-timeout is 40 ms. Please refer to the following two FAQs to adjust the parameters.

3. If you don't use any DeepStream elements and run a pipeline like the one below, are the audio and video output in sync? If they are out of sync, it could be a driver issue.

gst-launch-1.0 -e v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! videoconvert ! x264enc bitrate=2000 speed-preset=superfast tune=zerolatency ! h264parse ! queue ! mux. alsasrc device=default ! audioconvert ! audioresample ! audio/x-raw,rate=44100,channels=2 ! avenc_aac bitrate=128000 ! aacparse ! queue ! mux. qtmux name=mux ! filesink location=output.mp4
  1. I set the max performance power mode and activated jetson_clocks. The pipeline still stalls. jtop shows that the GPU load is 0%.



  2. I set the nvstreammux batched-push-timeout to 1000 ms / 60 fps ≈ 17 ms. However, it doesn't help (see the snippet after this list for how I set it).

  3. Yes, it works correctly - the audio and video are in sync
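For reference, this is how the value is set (sketched in Python; my app is C, but the property is the same). Note that nvstreammux expects batched-push-timeout in microseconds, so ~17 ms corresponds to about 16667:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
# batched-push-timeout is in microseconds: 1,000,000 us / 60 fps ~= 16667 us (~17 ms).
# batch-size, width and height are set elsewhere in the app.
streammux.set_property("batched-push-timeout", 16667)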

Sorry for the delay.

Did the audio capture start later than the video capture? If so, the audio/video GstBuffers with the same PTS of 0 are actually captured at different times, causing playback desynchronization.
Can you share an unsynchronized video file? In DeepStream, none of the elements change the PTS of the GstBuffer.

I should probably clarify my statement.
We are developing a device that measures the sound volume of passing cars. It includes both a camera and a set of microphones. The typical distance between our device and a car is about 30 meters.

Since the speed of sound is approximately 343 m/s, it takes around 30 / 343 ≈ 87 milliseconds for the sound to travel from the car to our microphones.

During that 87 ms, a car moving at 20 m/s will travel about 1.7 meters forward. This means that when we overlay the sound measurement on the video frame taken at the moment the sound arrives, the visual marker appears behind the actual car position.

To make the overlay appear on the car (i.e., where it was when the sound was emitted), we need to delay the video by approximately 87 ms to synchronize it with the incoming sound.
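The same arithmetic, written out as the nanosecond offset that GStreamer timestamp mechanisms use (the 30 m and 20 m/s figures above are approximate):

SPEED_OF_SOUND_MPS = 343.0   # speed of sound, m/s
DISTANCE_M = 30.0            # typical device-to-car distance, m
CAR_SPEED_MPS = 20.0         # car speed, m/s

delay_s = DISTANCE_M / SPEED_OF_SOUND_MPS   # ~0.087 s of acoustic travel time
delay_ns = int(delay_s * 1e9)               # ~87,000,000 ns, the unit GStreamer offsets are given in
drift_m = CAR_SPEED_MPS * delay_s           # ~1.7 m the car moves during that time

print(f"delay: {delay_s * 1000:.0f} ms ({delay_ns} ns), car drift: {drift_m:.1f} m")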

So my question is how to delay the video.
I tried adding the min-threshold-buffers property to a queue before nvosd, but it resulted in a stalled pipeline, as I mentioned earlier.

Got it, but adding the min-threshold-buffers=5 property to a queue only delays playback and does not solve the time synchronization problem in the recorded files. Also note that your pipeline does not render audio.

If you need to delay video rendering, using identity ts-offset=87000000 to adjust the timestamp is a better approach.
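For example (a sketch only; where to place the element depends on your pipeline, e.g. in the video branch before nvosd, and "video-delay" is just an illustrative name):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

video_delay = Gst.ElementFactory.make("identity", "video-delay")
# ts-offset is given in nanoseconds: 87000000 ns = 87 ms, as suggested above.
video_delay.set_property("ts-offset", 87000000)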

1. Fix the video track time offset of the recorded file using ffmpeg:

ffmpeg -i output.mp4 -itsoffset 0.087 -i output.mp4 -map 1:v -map 0:a -c copy output-video-delayed.mp4
ffprobe -show_packets -of xml -i output-video-delayed.mp4|more  

The timestamp of the video track has been shifted by 87 ms:

<packet codec_type="audio" stream_index="1" pts="0" pts_time="0.000000" dts="0" dts_time="0.000000" duration="1024" duration_time="0.023220" size="71" pos="48" flags="K_"/>
<packet codec_type="audio" stream_index="1" pts="1024" pts_time="0.023220" dts="1024" dts_time="0.023220" duration="1024" duration_time="0.023220" size="163" pos="119" flags="K_"/>
<packet codec_type="audio" stream_index="1" pts="2048" pts_time="0.046440" dts="2048" dts_time="0.046440" duration="1024" duration_time="0.023220" size="199" pos="282" flags="K_"/>
<packet codec_type="audio" stream_index="1" pts="3072" pts_time="0.069660" dts="3072" dts_time="0.069660" duration="1024" duration_time="0.023220" size="159" pos="481" flags="K_"/>
<packet codec_type="video" stream_index="0" pts="1044" pts_time="0.087000" dts="1044" dts_time="0.087000" duration="3012" duration_time="0.251000" size="25320" pos="640" flags="K_"/>

2. You can also use gst_pad_set_offset() (Gst.Pad.set_offset() in Python); here is sample code.

import gi
import signal
import sys

gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = None
loop = None

def signal_handler(signum, frame):
    if pipeline:
        print("Sending EOS event to pipeline...")
        pipeline.send_event(Gst.Event.new_eos())
    
def bus_call(bus, message, loop):
    if message.type == Gst.MessageType.EOS:
        print("End of stream reached.")
        loop.quit()
    elif message.type == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        print('Error:', err, debug)
        loop.quit()
    return True

pipeline_desc = '''
    v4l2src device=/dev/video0 !
    video/x-raw,format=YUY2,width=640,height=360,framerate=30/1 !
    videoconvert !
    x264enc bitrate=2000 speed-preset=superfast tune=zerolatency !
    h264parse config-interval=-1 !
    queue name=videoqueue !
    mux.
    alsasrc device=default !
    audioconvert ! audioresample ! audio/x-raw,rate=44100,channels=2 !
    avenc_aac bitrate=128000 ! aacparse ! queue ! mux.
    qtmux name=mux ! filesink location=output.mp4
'''

pipeline = Gst.parse_launch(pipeline_desc)

signal.signal(signal.SIGINT, signal_handler)

pipeline.set_state(Gst.State.READY)

videoqueue = pipeline.get_by_name('videoqueue')
queue_srcpad = videoqueue.get_static_pad('src')
if queue_srcpad is not None:
    queue_srcpad.set_offset(87000000)  # 87ms
    print("Video pad offset set to 87ms")

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)

pipeline.set_state(Gst.State.PLAYING)

print("start recording... please quit with Ctrl+C")

try:
    loop.run()
except KeyboardInterrupt:
    print("\n keyboard interrupt received, stopping...")
except Exception as e:
    print(f"error: {e}")
finally:
    print("stopping pipeline...")
    if pipeline:
        pipeline.set_state(Gst.State.NULL)
    print("resources released, exiting.")
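The same call can be applied inside your DeepStream app, for example on the source pad of the queue in front of nvosd (a sketch; "osd_queue" is an illustrative name, the real element name in your C app may differ):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

osd_queue = Gst.ElementFactory.make("queue", "osd_queue")
# Same 87 ms (87000000 ns) offset as in the sample above, applied to the video branch.
osd_queue.get_static_pad("src").set_offset(87000000)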

Great, gst-pad-set-offset works for me.
@junshengy, thank you very much!
