Regarding dewarper sample to dewarp 360d cams

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 4.0
• TensorRT Version 6
• NVIDIA GPU Driver Version (valid for GPU only) 440

Hi, I’m currently trying to make sense of the 360d dewarper sample app. I don’t have issues running it, but I’m unable to make sense of anything in the app itself. Could someone give a simpler explanation of the app and the projection jargon used inside it? I’ve gone through almost all the posts here on the forum regarding the dewarper plugin, and the GStreamer doc makes some sense, but most of the things in the app are still unclear to me: surfaces, projection types, etc. I tried changing the max 4 surfaces option to see what happens when 1 is given, and it crashes. I’m not sure where to look to understand the sample. I’ve gone through the plugin manual too.

Dewarping takes a 360° camera feed and essentially turns it into 4 separate feeds. It crashes when you change max surfaces to 1 because you need to allocate at least 4 surfaces per 360° camera to correctly dewarp that 360° feed into 4 separate views.
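To make the “4 surfaces” part concrete: in the sample, each dewarped view is described by its own [surfaceN] group in the dewarper config file, so one 360° feed yields four output surfaces. A skeleton of that structure might look like the following (group and key names follow the config_dewarper.txt shipped with the sample as I recall it; the projection-type values and all numbers are illustrative and should be checked against your installed sample):

```
[property]
output-width=960
output-height=752

# Four dewarped views from one 360-degree source; each [surfaceN]
# group describes one output view. In the shipped sample, two
# surfaces use a PushBroom projection and two use a vertical
# radial cylindrical (VertRadCyl) projection.
[surface0]
projection-type=1

[surface1]
projection-type=1

[surface2]
projection-type=2

[surface3]
projection-type=2
```

Removing surface groups without also changing the surface count the app allocates is what leads to the crash you saw.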

How are the parameters for dewarping set in the config file? Any idea how to do it? I’m not sure how to set each surface’s parameters in the cfg.

Hi @beefshepherd,
Could you take a look at the doc at the link below?

Refer to the 360-d smart garage parking application (GitHub: NVIDIA-AI-IOT/deepstream_360_d_smart_parking_application), which describes the full end-to-end smart parking application available with DeepStream 5.0, and the doc DeepStream_Analytics_Applications.pdf in the project.
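On the config question: each [surfaceN] group in the dewarper config file carries the per-view dewarping parameters. As a rough guide (key names as in the sample’s config_dewarper.txt; the values below are illustrative, the angles depend on how your camera is mounted, and the plugin manual should be consulted for exact units):

```
[surface0]
projection-type=1      # 1 = PushBroom, 2 = VertRadCyl (vertical radial cylindrical)
width=3886             # resolution of this dewarped surface, in pixels
height=666
top-angle=30           # vertical extent covered by this surface,
bottom-angle=0         #   in degrees relative to the horizon
pitch=90               # orientation of the virtual camera for this view,
yaw=0                  #   i.e. which part of the 360 image it looks at
roll=0
focal-length=437       # focal length of the source lens (see plugin manual for units)
```

The Gst-nvdewarper section of the plugin manual lists the full set of keys; tuning usually comes down to adjusting the angles until the region of interest is dewarped correctly.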

Thanks!

Although the aforementioned documentation is extensive, it does not explain how to do the dewarping itself: what projections are used, how the parameters are inferred from the lens, etc. If I want to deploy a camera with a fish-eye lens that is not 360°, how should one go about this?

Hi jens.buysse,

Please open a new topic if this is still an issue.

Thanks