• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 4.0
• TensorRT Version: 6
• NVIDIA GPU Driver Version (valid for GPU only): 440
Hi, I’m currently trying to make sense of the 360d dewarper sample app. I don’t have issues running it; the problem is that I can’t make sense of anything inside the app. Could there be a simpler explanation of the app and the projection jargon used in it — surfaces, projection type, etc.? I’ve gone through almost every post on this forum about the dewarper plugin, and the GStreamer docs make some sense, but most of the things in the app are still blurry to me. I also tried changing the max 4 surfaces option to see what happens when 1 is given, and it crashes. I’m not sure where to look to understand the sample; I’ve gone through the plugin manual too.
Dewarping takes a 360° camera feed and essentially turns it into 4 separate feeds. It crashes when you change max surfaces to 1 because you need to allocate at least 4 surfaces per 360° camera to correctly dewarp that 360° feed into 4 separate feeds.
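For reference, the four-surface layout comes from the dewarper's config file, which defines one `[surfaceN]` section per output surface. Here is a rough sketch modeled on the `config_dewarper.txt` shipped with the sample — the section and key names follow the dewarper plugin manual, but the numeric values below are illustrative assumptions, not a real lens calibration:

```ini
# Sketch of a dewarper config for one 360° camera producing 4 surfaces.
# projection-type: 1 = PushBroom, 2 = VerticalRadialCylindrical
# pitch/yaw/roll select which part of the 360° view each surface covers.
# All numbers here are placeholders; tune them to your actual lens.

[surface0]
projection-type=1
width=960
height=752
top-angle=30
bottom-angle=0
pitch=90
yaw=0
roll=0
focal-length=437

[surface1]
projection-type=1
width=960
height=752
top-angle=30
bottom-angle=0
pitch=90
yaw=180
roll=0
focal-length=437

[surface2]
projection-type=2
width=960
height=752
top-angle=0
bottom-angle=-35
pitch=0
yaw=90
roll=0
focal-length=437

[surface3]
projection-type=2
width=960
height=752
top-angle=0
bottom-angle=-35
pitch=0
yaw=270
roll=0
focal-length=437
```

Each surface is an independent projection of a slice of the 360° sphere, which is why the plugin in this release expects all four to be allocated even if you only care about one view.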
Although the aforementioned documentation is extensive, it does not state how the dewarping itself is done. What projections are used, and how are the parameters inferred from the lens, etc.? If I want to deploy a camera with a fish-eye lens that is not 360°, how should one go about this?