• Hardware Platform (Jetson / GPU): deployment on Jetson Xavier NX; development on an NVIDIA RTX 2080 Ti Ubuntu machine
• NVIDIA GPU Driver Version (valid for GPU only): 2080 Ti on the development machine and the integrated GPU on the NX
• Issue Type: Question
• Requirement details
The development environment is Ubuntu 20.04. In a Jupyter notebook, I’m able to run inference on images using the following script,
Now, how do I write business logic on top of it by feeding it images from OpenCV? Where is the file that contains the inference code?
On the Jetson, I have used DeepStream to deploy. But I couldn’t find a simple implementation showing how to deploy a custom model on the Jetson using the Python API. I have the .tlt, .etlt, and .engine files. And how do I view the output as I would with OpenCV’s imshow?
Yes… I want to be able to run it without DeepStream. I have an incoming stream from OpenCV, and I would like to know how I can feed it to the model and perform detection.
But I’m developing the application on Ubuntu 20.04, and NVIDIA doesn’t support DeepStream on Ubuntu 20.04. I have trained the models on Ubuntu 20.04. I know that the following command performs inference and displays the image output,
But I’m not interested in a visual output; I would like to obtain the bounding-box coordinates so I can write other logic on top of the detected objects. How can I do that? I have been struggling with this for nearly a month.
The SSD inference code is not open source. It can only write bboxes to the output folder.
For your case, I suggest you try running inference in one of the ways below:
Thanks! It would have saved me time and effort to know earlier that it was not open source, since I wouldn’t have tried training the model with TLT. In the end, we used the onnxruntime-gpu option. It was simple, works easily with OpenCV, and most importantly it is very well documented. The performance was better than DeepStream’s. Considering the pain of configuring DeepStream, plus the closed-source TLT, plus the TRT conversion, etc., onnxruntime is far more straightforward.
I was having the same issue. The fact that TLT is so hard to integrate into projects (due to being closed source) makes it impossible to use, as do the bare answers you get from NVIDIA — you have to beg for better, more developed answers… Might as well give ONNX Runtime a try.
Actually, end users can generate a TRT engine from the etlt model.
If they want to run inference with DeepStream, they can deploy the etlt model or the TRT engine in it.
If they want to run standalone inference with the TRT engine, they can write some code with the correct preprocessing, postprocessing, etc.
Some examples:
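To make the standalone route more concrete, here is a minimal sketch using the TensorRT 7/8-era Python binding API with pycuda. The engine path, input size, and preprocessing (plain resize, BGR→RGB, scale to [0, 1]) are assumptions — they must be adjusted to match how your particular network was trained, and the raw outputs still need model-specific postprocessing to become boxes.

```python
import numpy as np


def preprocess(frame_bgr, input_w, input_h):
    """Nearest-neighbour resize, BGR->RGB, NCHW float32 in [0, 1].
    This is a placeholder pipeline -- match it to your model's training."""
    h, w = frame_bgr.shape[:2]
    ys = np.arange(input_h) * h // input_h
    xs = np.arange(input_w) * w // input_w
    resized = frame_bgr[ys][:, xs]
    chw = resized[..., ::-1].transpose(2, 0, 1).astype(np.float32) / 255.0
    return np.ascontiguousarray(chw[None])


def load_engine(path):
    import tensorrt as trt  # requires TensorRT installed on the machine
    logger = trt.Logger(trt.Logger.WARNING)
    with open(path, "rb") as f, trt.Runtime(logger) as runtime:
        return runtime.deserialize_cuda_engine(f.read())


def infer(engine, batch):
    """Run one batch through the engine and return all output tensors."""
    import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
    import pycuda.driver as cuda
    import tensorrt as trt
    context = engine.create_execution_context()
    bindings, outputs = [], []
    for i in range(engine.num_bindings):
        shape = engine.get_binding_shape(i)
        dtype = trt.nptype(engine.get_binding_dtype(i))
        size = int(np.prod(shape))
        dev = cuda.mem_alloc(size * np.dtype(dtype).itemsize)
        bindings.append(int(dev))
        if engine.binding_is_input(i):
            cuda.memcpy_htod(dev, np.ascontiguousarray(batch.astype(dtype)))
        else:
            outputs.append((np.empty(size, dtype), dev, shape))
    context.execute_v2(bindings)
    results = []
    for host, dev, shape in outputs:
        cuda.memcpy_dtoh(host, dev)
        results.append(host.reshape(shape))
    return results
```

From there you would decode `results` (the layout depends on the network head) and draw the boxes on the original OpenCV frame yourself.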
Maybe this should be a new topic, but I was able to create a TRT engine with TLT, yet I’m getting wrong inferences. I saw another issue related to this but had no success. The preprocessing that TLT does looks like a black box; I tried scaling the image with padding, normalizing, and changing BGR to RGB, but with no success. I’m sorry — TensorRT is a great tool, but I’ve been stuck on this for a month with no progress.
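For reference, the pad-resize preprocessing described above can be sketched like this (pure NumPy, nearest-neighbour resize). Whether TLT actually uses letterboxing, and which pad value and normalization it applies, is exactly the black box in question — the values below are assumptions to experiment with, not TLT’s confirmed pipeline.

```python
import numpy as np


def letterbox(img_bgr, dst_w, dst_h, pad_value=128):
    """Resize keeping aspect ratio, pad to (dst_h, dst_w), BGR->RGB,
    scale to [0, 1]. Returns the batch plus the scale/offsets needed to
    map detections back onto the original image."""
    h, w = img_bgr.shape[:2]
    scale = min(dst_w / w, dst_h / h)
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    # Nearest-neighbour resize via index maps (avoids a cv2 dependency).
    ys = np.minimum((np.arange(new_h) / scale).astype(int), h - 1)
    xs = np.minimum((np.arange(new_w) / scale).astype(int), w - 1)
    resized = img_bgr[ys][:, xs]
    # Centre the resized image on a padded canvas.
    canvas = np.full((dst_h, dst_w, 3), pad_value, dtype=np.uint8)
    top = (dst_h - new_h) // 2
    left = (dst_w - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    rgb = canvas[..., ::-1].astype(np.float32) / 255.0
    return rgb.transpose(2, 0, 1)[None], scale, left, top
```

To undo it on the output side, subtract `left`/`top` from the predicted box coordinates and divide by `scale`.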
I will save you some time. Delete all DeepStream-related dependencies. Train a model with YOLOv4 or YOLOv5. Export it to ONNX. Install onnxruntime with pip install onnxruntime-gpu and do the inference. It is way faster and less cumbersome. As a bonus, you will save the time spent clicking all the redirect links NVIDIA posts in their forums as if they might help.
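The ONNX Runtime route the thread converged on can be sketched roughly as below. The model filename, the 640×640 input size, the simple resize-only preprocessing, and the output layout comment are all assumptions — YOLO exports vary, so inspect your own model (e.g. with Netron) before relying on any of them.

```python
import numpy as np


def preprocess(frame_bgr, size=640):
    """Minimal YOLO-style preprocessing: nearest-neighbour resize,
    BGR->RGB, NCHW float32 in [0, 1]. Adjust to match your export
    (many YOLOv5 pipelines letterbox instead of stretching)."""
    h, w = frame_bgr.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    resized = frame_bgr[ys][:, xs]
    x = resized[..., ::-1].transpose(2, 0, 1).astype(np.float32) / 255.0
    return np.ascontiguousarray(x[None])


def main():
    # Imports kept local: cv2 and onnxruntime-gpu must be installed.
    import cv2
    import onnxruntime as ort
    sess = ort.InferenceSession(
        "yolov5s.onnx",  # hypothetical model path
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    input_name = sess.get_inputs()[0].name
    cap = cv2.VideoCapture(0)  # or any OpenCV-readable stream
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        preds = sess.run(None, {input_name: preprocess(frame)})[0]
        # For a raw YOLOv5 export, preds is typically
        # (1, N, 5 + num_classes): xywh, objectness, class scores --
        # filter by confidence and apply NMS to get final boxes.
        print(preds.shape)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()


if __name__ == "__main__":
    main()
```

Since the coordinates come back as plain NumPy arrays, writing business logic on top of the detections is straightforward.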