We are evaluating the dewarp example provided as part of the DS4.0 EA package on the Jetson AGX Xavier. Running the commands from the README produces an output of four streams (see the attached screen capture). The top two surfaces do not appear to be correct, as most of each surface is black. Could you please confirm whether this is the expected output? If we are doing something wrong, please let us know.
From what we understand from this post, the dewarper plugin in DS4.0 packs in a great many features: dewarping, mapping image coordinates to global coordinates, transforming this information into metadata, and more. While it is impressive to have so many features in a single plugin, in our opinion this compromises ease of use. We were hoping to use the dewarper plugin to remove barrel distortion from a wide-FOV camera, but the currently available documentation (and the above-mentioned blog post) does not explain how to do this. We suspect it is controlled by some of the numerous fields in the CSV files (nvaisle_2M.csv and nvspot_2M.csv).
Also, the config file (config_dewarper.txt) used by the “deepstream-dewarper-app” uses properties such as “surface”. Documentation for this property, as well as for the various fields in the CSV files, would be helpful, and an overview of how to use the dewarper plugin would be great for users.
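For anyone else digging through it, here is the rough shape of the surface configuration as we currently understand it. The section and key names below follow the DS4.0 sample config_dewarper.txt; the values are purely illustrative, and our reading of projection-type (1 = PushBroom, 2 = VertRadCyl) is our assumption, not something the documentation confirms:

```ini
[property]
output-width=960
output-height=752
num-batch-buffers=4

[surface0]
# Assumption: projection-type selects the dewarp projection
# (1 = PushBroom, 2 = VertRadCyl) -- please correct us if this is wrong.
projection-type=1
surface-index=0
# Dimensions of this dewarped surface within the output buffer.
width=3886
height=666
# Angular extent of the source region and camera orientation, in degrees.
top-angle=0
bottom-angle=-35
pitch=90
yaw=0
roll=0
# Focal length of the fisheye lens, presumably in pixels.
focal-length=437
```

An explanation of how each of these keys interacts with the others would already answer most of our questions.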
I’m also trying to work out the parameters of the config file / dewarper plugin.
I’ve started experimenting to understand the relations between the parameters.
I know there is documentation (which basically repeats the name of each parameter as its description), but I guess you have to already be deep into the space of video dewarping to understand it.
It would be really useful to get an explanation of the projections used.
For example, how do you “guess” the output width and height given the top-angle, bottom-angle, pitch, yaw, roll and focal-length?
Even a pointer to an external source explaining the projections used here (PushBroom, VertRadCyl?) would help.
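In case it helps others, here is how I currently reason about the output size for a vertical-cylinder projection. To be clear, this is not taken from any NVIDIA documentation; it is just textbook cylindrical projection geometry, and mapping it onto the plugin's top-angle, bottom-angle and focal-length parameters is my own assumption:

```python
import math

def cylindrical_output_size(focal_len_px, hfov_deg, top_angle_deg, bottom_angle_deg):
    """Rough output size for a vertical-cylinder projection.

    NOT the plugin's documented formula -- just standard cylinder geometry,
    as a starting point for reasoning about the config:
      - horizontally, angles map linearly onto the cylinder surface,
        so the arc length is width = f * hfov (hfov in radians)
      - vertically, a ray at elevation phi hits the cylinder at f * tan(phi),
        so height = f * (tan(top) - tan(bottom))
    """
    width = focal_len_px * math.radians(hfov_deg)
    height = focal_len_px * (math.tan(math.radians(top_angle_deg))
                             - math.tan(math.radians(bottom_angle_deg)))
    return round(width), round(height)

# Example: 180 deg horizontal sweep, +/-30 deg vertical, focal length 300 px.
w, h = cylindrical_output_size(300, 180, 30, -30)
```

If someone from NVIDIA could confirm or correct this model for the VertRadCyl case, that would clear up a lot.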
Yeah, I am also trying to figure this out, and it is really hard without detailed documentation. I find it very odd that the aisle and spot functionality is integrated into the dewarper, as this prevents combined use with “normal” cameras.
Also, [url]https://devblogs.nvidia.com/calibration-translate-video-data/[/url] describes the calibration process only in part, and does not even mention the H* fields and many other values. One can assume that the H* fields are the entries of a homography matrix, but shouldn’t the homography calculation be taken care of by providing the camera and global coordinates of the quad polygon?
It’s a pity that this example application, into which a lot of work has obviously gone to create a well-architected reference use case, is not appropriately documented.
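To make the homography guess concrete: if the H* fields really are the nine entries of a 3×3 homography, then they could be computed from the four camera↔global point pairs of the quad polygon with a standard direct linear transform (DLT). This is a generic sketch of that technique, not the plugin's actual calibration code, and the point values are made up:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous
    coordinates), from four (x, y) point correspondences, via the DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # H is the null-space vector of A (right-singular vector with the
    # smallest singular value).
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def apply_h(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[0] / p[2], p[1] / p[2]

# Made-up example: quad corners in camera pixels vs. global (floor-plan) units.
src = [(100, 200), (500, 210), (480, 400), (120, 390)]
dst = [(0, 0), (10, 0), (10, 5), (0, 5)]
H = homography_from_points(src, dst)
```

If that matches what the H* CSV fields hold, then exposing only the quad coordinates (and computing H internally) would indeed be a much friendlier interface.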
One more user here getting confused by the dewarp settings. After playing almost blind with the settings for about a day, I got something working. I am trying to dewarp the output of a fisheye camera that does not have a 360° field of view.