Question about using jetson-inference in production

Hi,

I have a question about using the Python version of the jetson-inference (https://github.com/dusty-nv/jetson-inference) utilities in a production setting.

Compared to this example (https://github.com/NVIDIA/object-detection-tensorrt-example/blob/master/SSD_Model/detect_objects_webcam.py), jetson-inference seems much simpler.

  1. I am wondering if this is because jetson-inference is a slimmed-down, Jetson-dedicated version, and that is why the implementation is simpler.
  2. The API references shared here - a. https://rawgit.com/dusty-nv/jetson-inference/dev/docs/html/python/jetson.utils.html & b. https://rawgit.com/dusty-nv/jetson-inference/dev/docs/html/python/jetson.inference.html - don't seem to cover in-depth information. For example, for loadImage() from jetson-utils, the docs don't really say what formats are accepted. I had to experiment and check another link to figure out what to do: https://github.com/dusty-nv/jetson-utils/tree/master/python/examples

All in all, I wanted to ask NVIDIA's team whether using jetson-inference in a production environment is acceptable or not.

Would appreciate your feedback.

Thanks

Hi @a428tm, the jetson-inference project is open-source and licensed under the permissive MIT license, so you are welcome to use it in production if desired.

I think that since jetson-inference is organized into libraries and high-level primitives (like imageNet/detectNet/etc.), that may have simplified the API somewhat. Also, much of the heavy lifting is done in C++, which makes the Python bindings simpler (and of nearly equal performance to the base C++ implementation).

Other than the project name, there isn't much on the inferencing side that is particular to Jetson (as opposed to PC/dGPU). In jetson-utils, however, a bunch of the multimedia functionality (like camera capture and the encoder/decoder) is particular to Jetson.

Sorry, yes, I have been meaning to improve the Python API docs. The examples and the C++ docs are better documented. Or, when in doubt, you can look at the Python bindings, which call the C++ functions.


@dusty_nv

Thank you for the quick response.

As I go through them, I will make sure to check both sets of docs to get a good understanding.

If you don’t mind, I have two more questions for you.

  1. Based on what I read, I think jetson-inference cannot be combined with models trained with the Transfer Learning Toolkit. Is my understanding correct? (https://github.com/dusty-nv/jetson-inference/issues/546) I did one of the tutorials in the jetson-inference repo (https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md) to retrain and deploy on an NX device; however, I was hoping to utilize DetectNet_v2 models like TrafficCam.
  2. For the above example of PyTorch retraining, my understanding is that it is very similar to this: https://github.com/NVIDIA-AI-IOT/torch2trt/blob/master/torch2trt/torch2trt.py#L278 If I retrained an existing TensorFlow model using TF-TRT (https://github.com/NVIDIA-AI-IOT/tf_trt_models), could I also use that model with jetson-inference (the same way you used PyTorch)?

I would appreciate any feedback/pointers.

Thank you!

That is correct, there isn’t yet support for TLT models inside jetson-inference. I hope to add it at some point; in the meantime, you can use DeepStream for inferencing with TLT models.

The PyTorch training workflow from jetson-inference uses ONNX (PyTorch->ONNX->TensorRT), as opposed to torch2trt (which goes from PyTorch->TensorRT directly). When using torch2trt, torch2trt itself handles the inferencing.

Similarly, TF-TRT goes from TensorFlow->TensorRT, and TF-TRT handles the inferencing. You would probably want to export your TensorFlow model to ONNX if you wanted to try to run it with jetson-inference. Running new models in jetson-inference typically requires adjusting the pre/post-processing code (like that found in jetson-inference/c/detectNet.cpp) so that it knows how to format the input tensors and how to interpret the output tensors for that model.
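As a concrete illustration of those input/output bindings, the pytorch-ssd tutorial loads its exported ONNX model by telling detectnet which tensors to use; the model/label paths below are placeholders from that tutorial's layout, and csi://0 stands in for whatever camera/video input you have:

```shell
# Load a custom ONNX detection model in jetson-inference; the blob-name
# flags tell detectNet which tensors carry the input image, the
# coverage/scores, and the bounding boxes (paths are placeholders)
detectnet --model=models/fruit/ssd-mobilenet.onnx \
          --labels=models/fruit/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          csi://0
```

A model exported with different tensor names would need these flags (and possibly the pre/post-processing code) adjusted to match.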

@dusty_nv

Thank you so much for the quick response

Due to a hardware limitation, we cannot implement DeepStream (our camera doesn’t support GStreamer or V4L2).

So for TF to ONNX, did you mean following the steps stated in this blog? https://developer.nvidia.com/blog/speeding-up-deep-learning-inference-using-tensorflow-onnx-and-tensorrt/

In your expert opinion, do you think following these steps would allow me to convert existing TF models into usable models within jetson-inference?

Thanks again for always sharing detailed feedback!

Best,
Jae

Hi Jae, I haven’t personally tried TensorFlow->ONNX before (as I use PyTorch to ONNX), but it looks like you could try that tf2onnx tool used in the blog. Then you would want to see if the trtexec utility could parse/load that ONNX file (you can find trtexec under /usr/src/tensorrt/bin on your Jetson).
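That trtexec sanity check would look something like the following; the ONNX filename is a placeholder, and the path is the default JetPack install location mentioned above:

```shell
# Ask TensorRT's ONNX parser to load the model and build an engine;
# trtexec ships with JetPack under /usr/src/tensorrt/bin
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx
```

If trtexec can parse the file and build an engine, that is a good sign the same model will load under jetson-inference.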

If the ONNX way doesn’t work, there is another TensorFlow->UFF->TensorRT path that you can find demonstrated here for an object detection model (SSD-Mobilenet): https://github.com/AastaNV/TRT_object_detection

The UFF conversion can require some additional steps/configuration of the model, however. The fallback from there would be to use TF-TRT for the inferencing.

@dusty_nv

Thanks for the thorough explanation. Unfortunately, it was a bit tricky for me and I couldn’t figure it out.

An RTSP camera upgrade was in the works, so I am going to try working with RTSP and DeepStream for now, and when I have more time, I will jump back in.

Thanks again.

Best,
Jae