I have a question about using the Python version of the jetson-inference (https://github.com/dusty-nv/jetson-inference) utilities in a production setting.
Compared to this example (https://github.com/NVIDIA/object-detection-tensorrt-example/blob/master/SSD_Model/detect_objects_webcam.py), the jetson-inference API seems much simpler.
- I am wondering whether this is because jetson-inference is a slimmed-down version dedicated to Jetson devices, and whether that is why the implementation is simpler.
- The API references shared here - a. https://rawgit.com/dusty-nv/jetson-inference/dev/docs/html/python/jetson.utils.html and b. https://rawgit.com/dusty-nv/jetson-inference/dev/docs/html/python/jetson.inference.html - don't seem to cover things in depth. For example, the entry for loadImage() from jetson-utils doesn't really say which input formats are accepted. I had to experiment and consult the examples at https://github.com/dusty-nv/jetson-utils/tree/master/python/examples to figure out what to do.
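For reference, this is roughly what I pieced together from the examples repo (a sketch, not from the API reference itself; it requires a Jetson device with jetson-utils installed, and 'my_image.jpg' is a placeholder filename):

```python
# Sketch based on the jetson-utils python/examples, not on the API docs.
# Runs only on a Jetson device with jetson-utils installed.
import jetson_utils

# From trial and error, loadImage() appears to accept common image files
# (e.g. JPG/PNG) and returns a cudaImage allocated in GPU-accessible memory.
img = jetson_utils.loadImage('my_image.jpg')  # placeholder path

# The returned object exposes basic metadata.
print(img.width, img.height, img.format)
```

None of this (accepted file formats, the cudaImage return type) is spelled out in the linked API reference, which is exactly the gap I am pointing at.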
All in all, I wanted to ask NVIDIA's team whether using jetson-inference in a production environment is supported.
Would appreciate your feedback.