Reducing Temporal Noise on Images with NVIDIA VPI on NVIDIA Jetson Embedded Computers

Originally published at: Reducing Temporal Noise on Images with NVIDIA VPI on NVIDIA Jetson Embedded Computers | NVIDIA Developer Blog

The NVIDIA Vision Programming Interface (VPI) is a software library that provides a set of computer-vision and image-processing algorithms. The implementations of these algorithms are accelerated on different hardware engines available on NVIDIA Jetson embedded computers or discrete GPUs. In this post, we show you how to run the Temporal Noise Reduction (TNR) sample application…

I hope you have had a great start with VPI and are already seeing the advantages the API can deliver. Feel free to bring up any points you’d like to discuss, and we will gladly support you on your journey toward an optimal and optimized implementation of your application. For more focused discussions, start a thread under the section for your chosen Jetson platform in our forum, linked below:

Hi! This reply might seem abrupt, but it has been very difficult for us to reach anyone developing VPI, so first of all, I apologize for any inconvenience.

My team at CMU and I are currently trying to incorporate VPI into our project (see our work using VPI in Toward Efficient and Robust Multiple Camera Visual-inertial Odometry - YouTube, for ICRA 2022). We use the Harris Corner Detector and Pyramidal LK Optical Flow in our work, and during development we noticed some issues with these two modules:

  1. The distribution of feature points detected by VPI is not as uniform as that of goodFeaturesToTrack in OpenCV, even after we apply a quad-tree algorithm to filter the feature points. This nonuniformity makes our VPI-based system perform worse than the OpenCV-based system. Could you improve the Harris Corner Detector so that the distribution is more uniform? Or could you implement a new module using Shi-Tomasi features, like goodFeaturesToTrack in OpenCV? We would consider using such a module as our feature detector if it yields a more uniform distribution.

  2. The feature tracker (Pyramidal LK Optical Flow) is not stable: more points are lost during tracking than with calcOpticalFlowPyrLK in OpenCV. We would prefer more stable tracking performance. Could you improve the tracking stability?
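For reference, the kind of spatial filtering described in point 1 can be done generically on the detector output. This is a minimal sketch (not part of VPI or OpenCV) that buckets keypoints into a coarse grid and keeps only the strongest corner per cell, which tends to even out the spatial distribution; the grid dimensions are illustrative assumptions.

```python
# Hypothetical post-processing sketch (not a VPI API): bucket keypoints
# into a coarse grid and keep only the strongest response per cell.
def grid_filter(keypoints, width, height, cols=8, rows=6):
    """keypoints: list of (x, y, score) tuples.
    Returns at most one keypoint (the strongest) per grid cell."""
    cell_w = width / cols
    cell_h = height / rows
    best = {}  # (col, row) -> (x, y, score)
    for x, y, score in keypoints:
        cell = (min(int(x // cell_w), cols - 1),
                min(int(y // cell_h), rows - 1))
        # Keep the highest-scoring keypoint seen so far in this cell.
        if cell not in best or score > best[cell][2]:
            best[cell] = (x, y, score)
    return list(best.values())
```

A quad-tree refines the same idea by subdividing only crowded cells; the fixed grid above is simply the flat version of that scheme.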

Besides these, there is a small bug related to the LK tracker. The official VPI documentation says that a keypoint is not being tracked if its tracking status is zero. However, in our tests, we found that the status is actually one when the keypoint is not tracked.
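Until the documented convention is settled, a defensive workaround is to filter against an explicit constant rather than hard-coding the truthiness of the status. The sketch below is illustrative Python, not the actual VPI bindings; the names and the assumption that statuses come back as a parallel array are mine.

```python
# Illustrative sketch, not a VPI API. Set TRACKED_STATUS to whatever
# value your VPI version actually emits for successfully tracked points
# (0 per the documentation, but verify empirically, as reported above).
TRACKED_STATUS = 0

def filter_tracked(keypoints, statuses, tracked_status=TRACKED_STATUS):
    """Keep only keypoints whose parallel status entry marks them tracked."""
    return [kp for kp, st in zip(keypoints, statuses)
            if st == tracked_status]
```

Centralizing the convention in one constant means a doc/behavior mismatch like the one reported costs a one-line change instead of a hunt through the codebase.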

I would really appreciate it if someone could look at this message and help us fix the problems mentioned above. It would be even better if someone could contact us for further development on VPI; we also look forward to further cooperation with the VPI developers. Here is my email: 118010095@link.cuhk.edu.cn.

Looking forward to your reply!

Thank you for your interest in VPI and for your report. The NVIDIA Developer Forums are the right place to contact VPI developers and ask questions like this.

Which backend are you using for Harris and the feature tracker?

The Harris Corner Detector can’t guarantee uniformity in general: it returns keypoints whose score is at least strengthThresh (default: 20). These keypoints come from salient regions in the image, so their distribution depends mostly on the image contents. I’d suggest experimenting with the values in VPIHarrisCornerDetectorParams to see whether you can get better results. We’ll evaluate your suggestion of using the Shi-Tomasi criterion as the scoring function to improve the results.
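For context on why the two criteria can rank corners differently: both scores derive from the same 2x2 structure tensor M = [[a, b], [b, c]]. Harris uses det(M) - k * trace(M)^2, while Shi-Tomasi (the score behind goodFeaturesToTrack) uses the smaller eigenvalue of M. A minimal sketch of just the two scoring functions, given the tensor entries:

```python
import math

def harris_score(a, b, c, k=0.04):
    # M = [[a, b], [b, c]]; Harris response: det(M) - k * trace(M)^2
    det = a * c - b * b
    trace = a + c
    return det - k * trace * trace

def shi_tomasi_score(a, b, c):
    # Smaller eigenvalue of the symmetric 2x2 structure tensor,
    # via the closed form: mean of the diagonal minus half the
    # eigenvalue gap.
    mean = (a + c) / 2.0
    delta = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return mean - delta
```

On an edge-like tensor (one large, one small eigenvalue) the Harris response can go strongly negative while the Shi-Tomasi score stays small but positive, so the two criteria threshold and rank the same image regions differently; that difference is one plausible source of the distribution gap you observed.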

Regarding the feature tracker, it is a known issue that the PVA backend loses track rather quickly. We didn’t see anything abnormal with the CPU and CUDA implementations, though. We have noticed, however, that the results are sensitive to the values given in VPIKLTFeatureTrackerParams. I’d suggest experimenting with these parameters as well to see whether you get the tracking quality you’re after.

Excited to see your response! We are using the CUDA/GPU backend.

We have been tuning the parameters for quite a long time, and the best results we can get are still below our expectations.

Also, is there any possibility for our team to cooperate with the VPI developer team and contribute to VPI? Or could VPI be made open source, so that we can make our own modifications to the modules? If your team is interested, you can send contact information to the email I provided in the previous message.