Video data to real world geolocation

Hi all,

I tried to do this example a while back but was told that it was not supported on the Jetson Nano: "Using Calibration to Translate Video Data to the Real World".

Is that still the case?

If so, can I still use a camera calibration file (a CSV table) to do something similar on the Jetson Nano?

Thanks

Hi,
We have a sample in

deepstream-5.0/sources/apps/sample_apps/deepstream-dewarper-test

You may install DS 5.0 and give it a try.

Thanks Dane, I just wanted to be sure it would work on the Nano before I went to the trouble… Is this new in DeepStream 5? Sorry, I just have difficulty keeping up with all the nuances; I'm not really a chip guy.

Also, one more question. I was looking at the NvDsEventMsgMeta structure at the following link:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/DeepStream%20Development%20Guide/baggage/structNvDsEventMsgMeta.html

and noticed it has NvDsCoordinate and NvDsGeoLocation members. What piece of code actually performs the coordinate-to-geolocation processing?

The documentation just says what they are, nothing more. Any further pointers would be much appreciated.

Thanks for the help.

Hi,
The deepstream-dewarper-test sample has been in the package since 4.0.2. You may try the one in DS 4.0.2 or DS 5.0 DP (developer preview). We are going to release DS 5.0 GA soon.

These are defined in

deepstream-5.0/sources/includes/nvdsmeta_schema.h

Currently there is no sample to demonstrate the functionality.