Nvdewarper not working properly 2

Continuing the discussion from NvDewarper not working properly:

We see the same issue using nvdewarper to undistort a fisheye image as described in the previous post, but even more severe. We tested it with the original Jetson Orin Nano Dev Kit as well, and the same issue occurs. The camera image without nvdewarper is stable.

• Jetson Orin Nano 8 GB
• DeepStream Version 6.2
• L4T 35.3.1 [JetPack 5.1.1]
• Issue Type: bug
• How to reproduce the issue ? Run nvdewarper with fisheye lens camera on Jetson Orin Nano
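For reference, a minimal reproduction along the lines described above might look like the sketch below. The config file path and capture caps are placeholders, not the exact settings from the attached files:

```shell
# Sketch of a minimal nvdewarper reproduction pipeline for a fisheye camera.
# The config-file path and caps below are illustrative placeholders.
PIPELINE='nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! nvdewarper config-file=/path/to/nvdewarper_config.txt ! nvvidconv ! xvimagesink'
echo "gst-launch-1.0 $PIPELINE"
# On the Jetson, run: gst-launch-1.0 followed by the pipeline above.
```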

A sample picture of the defect occurring is attached:

Dewarper config file:
nvdewarper_config.zip (1.4 KB)

Command line:
gst-launch-1.0_command.txt (1.1 KB)


It is true; mine has gotten worse as well, and it is now affecting the performance of my pipeline too.
I am looking into different algorithms, since I think this issue is related to nvdewarper.
Even if I use a different camera, or run the pipeline with a video source, the rendering still looks like what your image shows.


The issue is not reproduced on the Orin AGX. Can you monitor the CPU and GPU load on your board for your case?

Hi Fiona, thank you for the reply. The CPU and GPU load info is attached as a picture.
PS: We are using nvpmodel 7W.
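For anyone else collecting the same numbers, these are the standard Jetson tools, run on the device itself (the guard below just keeps the snippet harmless on other machines):

```shell
# Query the active power mode on a Jetson; falls back to a message elsewhere.
if command -v nvpmodel >/dev/null 2>&1; then
    MODE=$(sudo nvpmodel -q)    # shows the current power mode (e.g. 7W)
else
    MODE="nvpmodel not found (run this on the Jetson itself)"
fi
echo "$MODE"
# For live CPU/GPU/EMC utilization, one sample per second, run on the Jetson:
# sudo tegrastats --interval 1000
```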

Since the Orin AGX has HW encoding and decoding capability while the Orin Nano has only SW encoding and decoding, that could be why the Orin AGX runs well.


Hi @mihkel2

I don’t believe the issue is related to the hardware encoders, as there is no encoding or decoding in your pipeline. At first I thought it was a resource-related problem, but given the low values it doesn’t appear to be caused by either the GPU or the CPU. I reproduced the issue on the Orin NX, and to me it looks like a race condition inside nvdewarper.


Hi, @miguel.taylor
Yes, I agree. It is good that you were able to reproduce the issue; we also see that it points towards a problem with nvdewarper.

Can you tell us whether you hit this problem with the 10 W or 15 W power mode as well?

Hi, @Fiona.Chen
We double-checked that; the issue is still present while running at 15 W as well.

I tried with the Orin Nano at 15 W and 10 W, and the videos were displayed smoothly. Are any special settings needed to reproduce the issue?

Have you tried other cameras in this setup? We tried with an IMX274.

Can you share information about your camera?


Can you share your method of reproducing the issue?

Hi, Fiona.
We are using the IMX219 sensor for the camera. This is the exact model: https://www.arducam.com/product/arducam-imx219-wide-angle-camera-module-for-nvidia-jetson-nano-raspberry-pi-compute-module-4-3-3-b0287/
Without nvdewarper the stream is good, so the HW connections to the cameras are fine.
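The camera-only sanity check mentioned above can be sketched like this; sensor-id and caps are illustrative, not the exact values from our setup:

```shell
# Sanity-check the raw camera stream with no dewarping in the path.
# sensor-id and caps below are placeholders; adjust for your camera.
PIPELINE='nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! nvvidconv ! xvimagesink'
echo "gst-launch-1.0 $PIPELINE"
# On the Jetson, run gst-launch-1.0 with the pipeline above; a stable image
# here confirms the sensor, driver, and HW connection are fine.
```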

I used this pipeline on the Orin NX with the IMX477:

gst-launch-1.0 \
nvarguscamerasrc ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1" ! \
nvvidconv ! 'video/x-raw(memory:NVMM),width=820,height=616' ! tee name=t \
t.src_0 ! queue ! nvdewarper config-file=/home/nvidia/config_center.txt source-id=1 ! m.sink_0 \
t.src_1 ! queue ! nvdewarper config-file=/home/nvidia/config_left.txt source-id=2 ! m.sink_1 \
t.src_2 ! queue ! nvdewarper config-file=/home/nvidia/config_right.txt source-id=3 ! m.sink_2 \
t.src_3 ! queue ! nvdewarper config-file=/home/nvidia/config_bottom.txt source-id=4 ! m.sink_3 \
nvstreammux name=m width=820 height=616 batch-size=4 num-surfaces-per-frame=1 ! \
nvstreamdemux name=demux \
demux.src_0 ! nvvidconv ! xvimagesink \
demux.src_1 ! nvvidconv ! xvimagesink \
demux.src_2 ! nvvidconv ! xvimagesink \
demux.src_3 ! nvvidconv ! xvimagesink

The config files are the same ones shared on the original post.

The issue became apparent only when I moved my hand rapidly in front of the camera.

Yes, that is true.
In our use case people keep moving, so we really want to fix this issue.
I am not sure whether I can fix it by just adjusting the config files.
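For context, dewarper config files are plain INI-style files with a [property] group and one [surfaceN] group per output surface. The values below are purely illustrative placeholders, not the settings from the attached configs, and the numeric projection-type codes should be taken from the DeepStream dewarper plugin documentation for your lens:

```ini
[property]
output-width=820
output-height=616

[surface0]
# projection-type selects the dewarping model; consult the DeepStream
# Gst-nvdewarper docs for the numeric code matching your fisheye lens.
projection-type=1
width=820
height=616
top-angle=30
bottom-angle=-30
pitch=0
yaw=0
roll=0
focal-length=350
```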

@Fiona.Chen I tried the same pipeline on a fresh Jetson, but this issue still persists.

The issue seems to be fixed by upgrading the JetPack from 5.1.1 to 5.1.2.

Here is what I did to resolve the issue. I didn’t want to start from a fresh install, so:

  1. Edited /etc/apt/sources.list.d/nvidia-l4t-apt-source.list to point to the r35.4 repo:
deb https://repo.download.nvidia.com/jetson/common r35.4 main
deb https://repo.download.nvidia.com/jetson/t186 r35.4 main
  2. Then ran the following commands:
$ sudo apt update
$ sudo apt dist-upgrade

With the dist-upgrade, I encountered:

Warning: Not all of the space available to /dev/nvme0n1 appears to be used. You can fix the GPT to use all of the space (an extra 6 blocks) or continue with the current setting? 

What I did:

Fix/Ignore? Fix
Partition number? 1
Partition name? [APP]? (just pressed Enter)

After it finished, I rebooted the system, and it started to update the UEFI firmware from 3.1 to 4.1. Once the FW update finished, the system became unbootable.

To fix the unbootable system, I did the following:

  1. Made a fresh install onto a separate SSD through SDK Manager.
  2. Copied all the files from the fresh install /boot directory (except /boot/extlinux) into the upgraded SSD /boot directory.
  3. Removed the initrd.img-5.10.104-tegra from the upgraded SSD /boot directory.
  4. Made sure that the initrd.img-5.10.120-tegra file from the fresh-install SSD was present in the upgraded SSD /boot directory and was also used as the initrd.img file.
  5. Then I booted up the system and checked if it booted up with the new kernel using the following command:
$ uname -r

The console output was 5.10.120-tegra, so I was certain that it was using the new kernel.
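Besides uname -r, the L4T release itself can be confirmed from the release file that JetPack installs (the exact REVISION string shown is an illustrative example):

```shell
# Confirm the kernel and L4T release after the upgrade.
KERNEL=$(uname -r)    # expect 5.10.120-tegra on L4T 35.4 / JetPack 5.1.2
echo "kernel: $KERNEL"
if [ -r /etc/nv_tegra_release ]; then
    head -n 1 /etc/nv_tegra_release    # e.g. "# R35 (release), REVISION: 4.1, ..."
else
    echo "/etc/nv_tegra_release not found (not a Jetson)"
fi
```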

Now the system booted and was running on JetPack 5.1.2. The next thing I wanted to do was update DeepStream from 6.2 to 6.3:

  1. I downloaded the DeepStream 6.3 Jetson tar package deepstream_sdk_v6.3.0_jetson.tbz2 from DeepStream | NVIDIA NGC to the Jetson device.
  2. Then I entered the following commands to extract and install the DeepStream SDK:
$ sudo tar -xvf deepstream_sdk_v6.3.0_jetson.tbz2 -C /
$ cd /opt/nvidia/deepstream/deepstream-6.3
$ sudo ./install.sh
$ sudo ldconfig

After all this was done, I retried the same pipeline with the same config files, and the issue seems to be resolved.
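As a quick check that the install landed correctly, the installed DeepStream version can be queried (the guard keeps the snippet harmless on machines without DeepStream):

```shell
# Verify the installed DeepStream version after running install.sh.
if command -v deepstream-app >/dev/null 2>&1; then
    VERSION_INFO=$(deepstream-app --version-all)   # should report DeepStream 6.3
else
    VERSION_INFO="deepstream-app not in PATH (check /opt/nvidia/deepstream/deepstream-6.3/bin)"
fi
echo "$VERSION_INFO"
```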

Hi, @Raido.Kislov.
Thank you very much. We validated the solution and confirm that this upgrade fixed the issue.
@rajupadhyay59 - does it work for you as well?

@mihkel2 I have yet to try this.
@miguel.taylor Thank you for the solution. So I should upgrade the Jetpack version to 5.1.2 and also use Deepstream 6.3, right?
I’ll try this. Thanks again.

@mihkel2 @miguel.taylor

I flashed my Jetson with JetPack 5.1.2 and DeepStream 6.3.
Did I miss something? For me, the issue still persists.

What camera are you using, and what connection type?
Are you using the Developer Kit or your own custom carrier board?
And is the pipeline exactly the same?
Have you run $ sudo apt update and $ sudo apt dist-upgrade?
Make sure everything is up to date.

@Raido.Kislov Hi, thanks for the reply.

  1. The camera is a Ricoh Theta 360°. I do not think it is related to the camera, because I have tried with webcams and with videos too.

  2. It is a developer kit. Not a custom carrier board.

  3. Yes the pipeline is the same.

  4. Yes, I have run $ sudo apt update and $ sudo apt dist-upgrade.

We have decided to drop nvdewarper for the time being.
This ticket can be closed, thank you.