GStreamer nvdewarper causes frame lag and tearing

Jetson Info:

Software part of jetson-stats 4.2.12 - (c) 2024, Raffaello Bonghi
Model: Jetson AGX Orin - Jetpack 5.1.1 [L4T 35.3.1]
NV Power Mode[0]: MAXN
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
 - Module: Check with sudo
Platform:
 - Distribution: Ubuntu 20.04 focal
 - Release: 5.10.104-tegra
jtop:
 - Version: 4.2.12
 - Service: Active
Libraries:
 - CUDA: 11.4.315
 - cuDNN: 8.6.0.166
 - TensorRT: 8.5.2.2
 - VPI: 2.2.7
 - Vulkan: 1.3.204
 - OpenCV: 4.10.0 - with CUDA: YES

Hi, I use this shell script to dewarp my fisheye 1920×1080 30 FPS camera:

gst-launch-1.0 v4l2src device=/dev/video0 do-timestamp=true ! \
queue max-size-buffers=8 max-size-time=0 max-size-bytes=0 ! \
nvvideoconvert ! \
nvdewarper config-file=/home/ubuntu/workspaces/deep_stream_view/DeepStream-gi/configs/config_dewarper.txt source-id=6 ! \
m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 num-surfaces-per-frame=1 ! \
nvmultistreamtiler ! \
nv3dsink sync=false max-lateness=5000000

Config config_dewarper.txt:
config_dewarper.txt (568 Bytes)

But the frames tear.
How can I fix it?
Thanks!

Tested the pipeline with JetPack 6.1 and DeepStream 7.1 on AGX Orin with our own USB camera; no issue was found.

Can you monitor the hardware performance with the command “tegrastats” while you run the pipeline?
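In case it helps, a minimal way to capture that log to a file (the 1000 ms interval is just an example value):

```shell
# Log utilization (CPU, GPU, VIC, EMC, ...) once per second to a file;
# stop with Ctrl+C after reproducing the tearing.
sudo tegrastats --interval 1000 --logfile tegrastats_log.txt
```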

Here are my logs,
Thanks!
tegrastats_log.txt (66.9 KB)

Seems the VIC load is a little high.

Have you maxed the power and clocks as described in Performance — DeepStream documentation?

Can you also lock the clocks to the maximum according to VPI - Vision Programming Interface: Performance Benchmark?
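For reference, the usual sequence on Jetson is roughly:

```shell
# Select the MAXN power model (model 0 on AGX Orin)
sudo nvpmodel -m 0
# Pin CPU/GPU/EMC clocks to their maximum
sudo jetson_clocks
# Verify the resulting frequencies
sudo jetson_clocks --show
```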

Lock to MAX Clock:

ubuntu@EAC-5000:~/workspaces/deep_stream_view/DeepStream-gi/scripts$ sudo ./lock_jetson_clock_to_max.sh --max
[sudo] password for ubuntu: 
Storing system configuration in /tmp/defclocks.conf

Set MAXN power and clocks:
sudo jetson_clocks --show

SOC family:tegra234  Machine:Jetson AGX Orin
Online CPUs: 0-7
cpu0: Online=1 Governor=schedutil MinFreq=2188800 MaxFreq=2188800 CurrentFreq=2188800 IdleStates: WFI=0 c7=0 
cpu1: Online=1 Governor=schedutil MinFreq=2188800 MaxFreq=2188800 CurrentFreq=2188800 IdleStates: WFI=0 c7=0 
cpu2: Online=1 Governor=schedutil MinFreq=2188800 MaxFreq=2188800 CurrentFreq=2188800 IdleStates: WFI=0 c7=0 
cpu3: Online=1 Governor=schedutil MinFreq=2188800 MaxFreq=2188800 CurrentFreq=2188800 IdleStates: WFI=0 c7=0 
cpu4: Online=1 Governor=schedutil MinFreq=2188800 MaxFreq=2188800 CurrentFreq=2188800 IdleStates: WFI=0 c7=0 
cpu5: Online=1 Governor=schedutil MinFreq=2188800 MaxFreq=2188800 CurrentFreq=2188800 IdleStates: WFI=0 c7=0 
cpu6: Online=1 Governor=schedutil MinFreq=2188800 MaxFreq=2188800 CurrentFreq=2188800 IdleStates: WFI=0 c7=0 
cpu7: Online=1 Governor=schedutil MinFreq=2188800 MaxFreq=2188800 CurrentFreq=2188800 IdleStates: WFI=0 c7=0 
GPU MinFreq=930750000 MaxFreq=930750000 CurrentFreq=930750000
EMC MinFreq=204000000 MaxFreq=3199000000 CurrentFreq=3199000000 FreqOverride=1
DLA0_CORE:   Online=1 MinFreq=0 MaxFreq=1408000000 CurrentFreq=1408000000
DLA0_FALCON: Online=1 MinFreq=0 MaxFreq=742400000 CurrentFreq=742400000
DLA1_CORE:   Online=1 MinFreq=0 MaxFreq=1408000000 CurrentFreq=1408000000
DLA1_FALCON: Online=1 MinFreq=0 MaxFreq=742400000 CurrentFreq=742400000
PVA0_VPS0: Online=1 MinFreq=0 MaxFreq=704000000 CurrentFreq=704000000
PVA0_AXI:  Online=1 MinFreq=0 MaxFreq=486400000 CurrentFreq=486400000
FAN Dynamic Speed control=inactive hwmon5_pwm1=255
NV Power Mode: MAXN

tegrastats_log:
tegrastats_log.txt (48.2 KB)

Thanks!!!

Have you maxed the power and clocks as described in Performance — DeepStream documentation?

Yes, I ran sudo nvpmodel -m 0
and rebooted the Jetson.

Can you upgrade to JetPack 6.1 and DeepStream 7.1?

My JetPack build is a custom BSP for the camera.
JetPack 6.0 is not stable with the camera,
so I still use JetPack 5.1.1.

Have you tried JetPack 6.1?

I tried updating to DeepStream 7.1,
but now I have new problems.
I tried running this pipeline:

GST_DEBUG=4 gst-launch-1.0 \
    nvstreammux width=1920 height=1080 batch-size=4 live-source=1 name=mux ! \
    nvmultistreamtiler rows=2 columns=2 width=1920 height=1080 ! \
    nvvideoconvert ! nv3dsink  \
    v4l2src device=/dev/video0 ! tee name=t0 \
        t0. ! queue max-size-buffers=800 leaky=downstream ! nvvideoconvert ! "video/x-raw(memory:NVMM),width=640,height=480" ! mux.sink_0 \
        t0. ! queue max-size-buffers=800 leaky=downstream ! nvvideoconvert !  videorate max-rate=5 ! nvv4l2h264enc ! h264parse ! splitmuxsink location="videos/camera_0/output_%010d.mp4" max-size-time=10000000000  \
    v4l2src device=/dev/video1 ! tee name=t1 \
        t1. ! queue max-size-buffers=800 leaky=downstream ! nvvideoconvert ! "video/x-raw(memory:NVMM),width=640,height=480" ! mux.sink_1 \
        t1. ! queue max-size-buffers=800 leaky=downstream ! nvvideoconvert ! videorate max-rate=5 ! nvv4l2h264enc ! h264parse ! splitmuxsink location="videos/camera_1/output_%010d.mp4" max-size-time=10000000000  \
    v4l2src device=/dev/video2 ! tee name=t2 \
        t2. ! queue max-size-buffers=800 leaky=downstream ! nvvideoconvert ! "video/x-raw(memory:NVMM),width=640,height=480" ! mux.sink_2 \
        t2. ! queue max-size-buffers=800 leaky=downstream ! nvvideoconvert ! videorate max-rate=5 ! nvv4l2h264enc ! h264parse ! splitmuxsink location="videos/camera_2/output_%010d.mp4" max-size-time=10000000000  \
    v4l2src device=/dev/video3 ! tee name=t3 \
        t3. ! queue max-size-buffers=800 leaky=downstream ! nvvideoconvert ! "video/x-raw(memory:NVMM),width=640,height=480" ! mux.sink_3 \
        t3. ! queue max-size-buffers=800 leaky=downstream ! nvvideoconvert ! videorate max-rate=5 ! nvv4l2h264enc ! h264parse ! splitmuxsink location="videos/camera_3/output_%010d.mp4" max-size-time=10000000000 2>&1 | tee -a gst_error_log.txt

And this is the GST_DEBUG=4 error log:
gst_error_log.txt (206.1 KB)

GST_DEBUG=3 shows this error:

0:00:00.155997536 13140 0xaaab242332a0 WARN                    v4l2 gstv4l2object.c:4682:gst_v4l2_object_probe_caps:<nvv4l2h264enc3:src> Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1

My camera info:

How can I fix it?
Thanks!

This is just a warning. It is OK.

The failure happens with the pipeline you posted in another topic, Multiple cameras use nvdewarper.

Can the original pipeline work with your DeepStream 7.1?

gst-launch-1.0 v4l2src device=/dev/video0 do-timestamp=true ! \
queue max-size-buffers=8 max-size-time=0 max-size-bytes=0 ! \
nvvideoconvert ! \
nvdewarper config-file=/home/ubuntu/workspaces/deep_stream_view/DeepStream-gi/configs/config_dewarper.txt source-id=6 ! \
m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 num-surfaces-per-frame=1 ! \
nvmultistreamtiler ! \
nv3dsink sync=false max-lateness=5000000


Yes, the dewarper tearing is fixed.
I use the yolov8n.onnx model,
but why is the detected person's OSD not drawn in the correct place?

Also, after running for a while,
the pipeline crashes,
and sometimes there is a CUDA runtime error.

It shows these errors:

 bash '/home/nvidia/workspaces/yolov8-for-jetson/DeepStream-gi/scripts/GS_4Screen_yolov8n_Detect.sh'
Setting pipeline to PAUSED ...
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.315054528 17963 0xaaaafcb9e200 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/home/nvidia/workspaces/yolov8-for-jetson/DeepStream-gi/configs/engines/JP_6.1/model_b1_gpu0_fp32.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.315144352 17963 0xaaaafcb9e200 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/nvidia/workspaces/yolov8-for-jetson/DeepStream-gi/configs/engines/JP_6.1/model_b1_gpu0_fp32.engine
0:00:00.320719616 17963 0xaaaafcb9e200 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:config_infer_primary_yoloV8.txt sucessfully
Pipeline is live and does not need PREROLL ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

GPUassert: an illegal memory access was encountered /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/src/modules/VisualTracker/VisualTracker.cpp 960

!![Exception] GPUassert failed
An exception occurred. GPUassert failed
gstnvtracker: Low-level tracker lib returned error 1
cuGraphicsEGLRegisterImage failed: 700, cuda process stop
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

cuGraphicsEGLRegisterImage failed: 700, cuda process stop
[WARN ] 2024-11-14 06:23:47 (cudaErrorIllegalAddress)
cuGraphicsEGLRegisterImage failed: 700, cuda process stop
ERROR: from element /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0: Unable to set device
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvdsosd/gstnvdsosd.c(340): gst_nvds_osd_transform_ip (): /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0
Execution ended after 0:02:39.990863584
Setting pipeline to NULL ...
Unable to set device in gst_nvstreammux_change_state
Freeing pipeline ...

(gst-launch-1.0:17963): GStreamer-CRITICAL **: 06:23:47.863: 
Trying to dispose element nvdewarper3, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.

CUDA runtime error 4 at line 1306 in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvdewarper/gstnvdewarper.cpp

(gst-launch-1.0:17963): GStreamer-CRITICAL **: 06:23:47.864: 
Trying to dispose element nv3dsink0, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.


(gst-launch-1.0:17963): GStreamer-CRITICAL **: 06:23:47.864: 
Trying to dispose element mux, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.


(gst-launch-1.0:17963): GStreamer-CRITICAL **: 06:23:47.864: 
Trying to dispose element pipeline0, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.


(gst-launch-1.0:17963): GLib-GObject-WARNING **: 06:23:47.864: invalid unclassed pointer in cast to 'GstElement'
Unable to set device in gst_nvstreammux_src_collect_buffers

(gst-launch-1.0:17963): GStreamer-CRITICAL **: 06:23:47.868: 
Trying to dispose element nvvideoconvert4, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.

Pipeline used:


gst-launch-1.0 \
    nvstreammux width=1920 height=1080 batch-size=4 live-source=1 name=mux ! \
    nvinfer config-file-path=config_infer_primary_yoloV8.txt ! \
    nvtracker tracker-width=640 tracker-height=480 gpu-id=0 ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! \
    nvmultistreamtiler rows=2 columns=2 width=1920 height=1080 name=tiler ! \
    nvdsosd ! \
    nvvideoconvert ! \
    nv3dsink sync=false \
    v4l2src device=/dev/video0 ! \
    nvvideoconvert ! \
    nvdewarper config-file=/home/nvidia/workspaces/yolov8-for-jetson/DeepStream-gi/configs/config_dewarper.txt source-id=6 ! \
    mux.sink_0 \
    v4l2src device=/dev/video1 ! \
    nvvideoconvert ! \
    nvdewarper config-file=/home/nvidia/workspaces/yolov8-for-jetson/DeepStream-gi/configs/config_dewarper.txt source-id=6 ! \
    mux.sink_1 \
    v4l2src device=/dev/video2 ! \
    nvvideoconvert ! \
    nvdewarper config-file=/home/nvidia/workspaces/yolov8-for-jetson/DeepStream-gi/configs/config_dewarper.txt source-id=6 ! \
    mux.sink_2 \
    v4l2src device=/dev/video3 ! \
    nvvideoconvert ! \
    nvdewarper config-file=/home/nvidia/workspaces/yolov8-for-jetson/DeepStream-gi/configs/config_dewarper.txt source-id=6 ! \
    mux.sink_3

CUDA runtime error:

[WARN ] 2024-11-14 06:35:12 (cudaErrorIllegalAddress)
[WARN ] 2024-11-14 06:35:12 (cudaErrorIllegalAddress)
[WARN ] 2024-11-14 06:35:12 (cudaErrorIllegalAddress)
[WARN ] 2024-11-14 06:35:12 (cudaErrorIllegalAddress)
[WARN ] 2024-11-14 06:35:12 (cudaErrorIllegalAddress)
[WARN ] 2024-11-14 06:35:12 (cudaErrorIllegalAddress)
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

cuGraphicsEGLRegisterImage failed: 700, cuda process stop
GPUassert_VPI: VPI_ERROR_INTERNAL (cudaErrorIllegalAddress) /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/src/modules/cuDCFv2/featureExtractor.cu 527

!![Exception] GPUassert_VPI failed
An exception occurred. GPUassert_VPI failed
gstnvtracker: Low-level tracker lib returned error 1
[ERROR] 2024-11-14 06:35:12 Exiting the Stream worker thread failed with exception: VPI_ERROR_INTERNAL: (cudaErrorIllegalAddress)
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

[WARN ] 2024-11-14 06:35:12 (cudaErrorIllegalAddress)
[ERROR] 2024-11-14 06:35:12 Error destroying cuda device: �w����
[WARN ] 2024-11-14 06:35:12 (cudaErrorIllegalAddress)
[WARN ] 2024-11-14 06:35:12 (cudaErrorIllegalAddress)
(the same [WARN] line repeated ~60 more times)
terminate called after throwing an instance of 'nv::cuda::RuntimeException'
  what():  cudaErrorIllegalAddress: 
cuGraphicsEGLRegisterImage failed: 700, cuda process stop

Have you measured the GPU usage and GPU memory usage with the pipeline?

Seems the error happens with nvtracker. Have you tried the “deepstream-app” sample /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt ?
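A sketch of running that sample, assuming a default DeepStream install path:

```shell
# Run the reference 4-stream decode + infer + tracker sample shipped
# with DeepStream, to check whether nvtracker is stable outside the
# custom pipeline.
cd /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app
deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
```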

This is the tegrastats log from running the pipeline until it crashed:
tegrastats_log.txt (26.5 KB)

Looks OK,
but the delay is very high.

Jetpack Info:

nvidia@EAC5k-OrinAGX:~/workspaces/DeepStream-Yolo/DeepStream-Yolo$ jetson_release
Software part of jetson-stats 4.2.12 - (c) 2024, Raffaello Bonghi
Model: Vecow EAC-5000 Platform - Jetson AGX Orin - Jetpack 6.1 [L4T 36.4.0]
NV Power Mode[0]: MAXN
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
 - Module: Check with sudo
Platform:
 - Distribution: Ubuntu 22.04 Jammy Jellyfish
 - Release: 5.15.148-tegra
jtop:
 - Version: 4.2.12
 - Service: Active
Libraries:
 - CUDA: 12.6.68
 - cuDNN: 9.3.0.75
 - TensorRT: 10.3.0.30
 - VPI: 3.2.4
 - Vulkan: 1.3.204
 - OpenCV: 4.8.0 - with CUDA: NO

Since the original issue of this topic is resolved, the new issue can be discussed in Multiple cameras use nvdewarper. Closing this topic.

JetPack 6.1 is not stable with my camera.
Can I keep my data
when upgrading from JetPack 5.1.1 to 5.1.3?

I found that someone fixed the tearing problem by upgrading to 5.1.3.

Is there a simpler method?
Thanks!
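For what it's worth, JetPack 5.1.x point releases can usually be installed in place over apt, which preserves the rootfs data. This is only a sketch: the release tags and file path below are assumptions to verify against NVIDIA's "Updating from the NVIDIA APT Server" documentation before running anything.

```shell
# Repoint the L4T apt sources from r35.3 (JetPack 5.1.1) to r35.5
# (JetPack 5.1.3), then upgrade in place. Back up important data
# first; the sed pattern assumes the stock
# nvidia-l4t-apt-source.list layout.
sudo sed -i 's|r35.3|r35.5|g' /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
sudo apt update
sudo apt dist-upgrade
sudo reboot
```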