We mounted a Jetson Nano in a car to act as a real-time object detector that tags detections with GPS coordinates. We bought a USB GPS dongle and got it working with the Nano. For object detection we are using the objectDetector_Yolo (tiny YOLOv3 config) reference app from DeepStream 4.0.1.
Our main problem is combining the inference output with the corresponding GPS coordinates; there is a very large delay between when an inference is recorded and when coordinates are assigned to it. Crudely put, our script checks each output file in the gie-kitti folder, pairs every file that contains detections with the current coordinates, and sends the result to our cloud server. The script cannot keep up with the rate at which gie-kitti files are created, so the files pile up.
We think we can improve on this if we can:
- control the creation of the output files, i.e. only create a file when there is a detection;
- integrate the GPS coordinates into the output file.
Also, are there any APIs available for this specific reference app?
Hi,
We have an existing implementation in deepstream-app that covers the most general use cases; there are details in the documentation. Most cases can be run simply by modifying the config files.
This use case is not covered, so you will need to customize deepstream-app. The following suggestions could help, and other users may also share their experiences.
You may have been confused about which output file I was referring to: I meant the gie-kitti output file, not the video output file.
Anyway, we took your cue to customize the deepstream-app source code and recompiled it. We saw that the fopen call at line 291 (bbox_params_meta…=fopen…“w”)) is the code that creates the files, so we added a condition that the file is only created when l_obj is not NULL.
What we are trying now is to incorporate the GPS data into the gie-kitti dump. Now that we know where to edit, we have a starting point. If you have any other suggestions, please let us know.