360 Degree Camera Config

Hardware Platform (Jetson / GPU): Jetson Nano
DeepStream Version: 5.1
JetPack Version (valid for Jetson only): 4.5.1-b17
TensorRT Version: 7.1.3-1+cuda10.2

Hey!
I’ve integrated nvdewarper into my project, which takes RTSP streams as input for people counting. Can anyone recommend the exact config settings I should use for a video like this one? Axis 360-degree fixed surveillance camera - YouTube

Thanks in advance!

Cheers,
Conall

Are you using our deepstream-dewarper-test sample app? Have you tried the “config_dewarper.txt” in the /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-dewarper-test folder? There is also a README file in that folder that explains how to run the app.
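For reference, the surface groups in that file look roughly like this (the values below are only illustrative; the file shipped with the SDK has the exact ones used by the sample):

```
[property]
output-width=960
output-height=752
num-batch-buffers=4

[surface0]
# projection-type: 1 = PushBroom, 2 = VertRadCyl
projection-type=1
surface-index=0
# size of the dewarped surface
width=3886
height=666
# view window and orientation, in degrees
top-angle=30
bottom-angle=-30
pitch=90
yaw=0
roll=0
focal-length=437
```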


I’m also facing the same problem. We have different cameras and different lenses we want to test with different fisheye settings. We want to dewarp the fisheye images to a 2D plane, but it is unclear to us how to set the parameters correctly for the different lenses. Where can we find this documentation?


I also made a topic on this; check it out. I have successfully dewarped using vertical cylindrical mapping, but it seems that in your case you need to configure the PushBroom dewarper.


Hey Fiona!
I tried those, but it doesn’t work great. The intent is to have just one image for tracking people. I’ve tried a couple of approaches (screenshot attached), but they struggle with tracking. I’m currently using the PeopleNet v2.1 model, which may not be great for overhead views.

Cheers,
Conall

Hello

Could you elaborate on how you got these parameters for dewarping? What method do we need to use?

We have a 180° lens [1] which we want to dewarp into a 2D image, but it is unclear how we should set all those parameters (top-angle, distortion, …).

[1] https://www.arducam.com/product/arducam-180-degree-fisheye-1-2-3-m12-mount-with-lens-adapter-for-raspberry-pi-high-quality-camera/


Hey Jens,
Honestly it’s been mostly trial and error. I turned off inference to speed up testing dewarping configs.

Try with a single surface and make sure streammux is set to the same width/height for testing.
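Roughly what I mean, as a gst-launch sketch (element properties and sizes here are placeholders for whatever your dewarper outputs; I haven’t tested this exact line):

```
# dewarper output and streammux are given the same resolution, e.g. 960x752
gst-launch-1.0 uridecodebin uri=rtsp://<camera> ! nvvideoconvert ! \
  'video/x-raw(memory:NVMM), format=RGBA' ! \
  nvdewarper config-file=config_dewarper.txt source-id=0 ! m.sink_0 \
  nvstreammux name=m batch-size=1 num-surfaces-per-frame=1 \
  width=960 height=752 ! nvvideoconvert ! nvdsosd ! nveglglessink
```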

Let me know how you get on.

Cheers,
Conall

Hey Conall

Thanks for the quick reply. Am I correct that the following strategy is the way to go:

  1. Print a checkerboard pattern
  2. Film it
  3. Set projection type to 1 (180° lens)
  4. Play with the distortion parameter (not with the following surface parameters: surface-index, width, height, top-angle, bottom-angle, pitch, yaw, roll, focal-length?) until we get a good enough picture (see the sketch after this list)?
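For step 4, this is roughly the kind of surface group I would be editing (the values are pure placeholders, and whether the distortion key is honoured seems to depend on the installed dewarper library version):

```
[property]
output-width=960
output-height=960
num-batch-buffers=1

[surface0]
surface-index=0
# 1 = PushBroom, 2 = VertRadCyl; newer dewarper libraries add fisheye-to-perspective types
projection-type=1
width=960
height=960
top-angle=30
bottom-angle=-30
pitch=0
yaw=0
roll=0
focal-length=400
# distortion coefficients, if supported by the library build
#distortion=0.0;0.0;0.0;0.0;0.0
```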

What do you mean by “streammux is set to the same width/height”?

Kind regards
Jens Buysse

Sorry! Probably best to confirm this with @Fiona.Chen or someone at Nvidia! I’m learning too :)

I played around with all the parameters just to try to change the angle to something more frontal than overhead, so it would work better with the model. An alternative may be a model designed specifically for overhead detection of people.

Another question for you or @bcao. How can I rotate the frame in this example (Nvdewarper dewarp 360 into one image - #7 by batzor) so it can be used for inference? 90 degrees counter-clockwise / 270 degrees clockwise.
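What I was thinking of trying is a rotation with nvvideoconvert’s flip-method property, but I haven’t confirmed it behaves this way on this JetPack (gst-inspect-1.0 nvvideoconvert should show the available values), so this is only a sketch:

```
# flip-method=1 should be "rotate 90 degrees counter-clockwise" if the enum matches nvvidconv
... ! nvdewarper config-file=config_dewarper.txt ! \
  nvvideoconvert flip-method=1 ! 'video/x-raw(memory:NVMM), format=RGBA' ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=<h> height=<w> ! nvinfer config-file-path=... ! ...
# width/height are swapped at the muxer because of the 90-degree rotation
```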

We have a new dewarper sample here: GitHub - NVIDIA-AI-IOT/Deepstream-Dewarper-App: This project demonstrates how to infer and track from 360° videos by using the dewarper plugin.

Can you take a look to see whether it can help?

Thanks Fiona! This helps a lot. Do you know when the Jetson libraries will be out? It says they are x86-only at the moment.

Hi @conall.laverty1

The Jetson libraries are out now. You can get them here.


Thanks @mdesta! 😊