How do we write business logic in Python on top of a model trained with TLT?

• Hardware Platform (Jetson / GPU): deployment on Jetson Xavier NX, development on an NVIDIA 2080 Ti-based Ubuntu machine
• NVIDIA GPU Driver Version (valid for GPU only): 2080 Ti on the development machine and the native GPU on the NX
• Issue Type: Question
• Requirement details

  1. The development environment is Ubuntu 20.04. In the Jupyter notebook, I’m able to run inference on images using the following command,

    !ssd inference --gpu_index=$GPU_INDEX \
        -m $USER_EXPERIMENT_DIR/export/trt.engine \
        -e $SPECS_DIR/ssd_retrain_resnet18_kitti.txt \
        -i /workspace/examples/ssd/test_samples \
        -o $USER_EXPERIMENT_DIR/ssd_infer_images \
        -t 0.4
    

    Now, how do I write business logic on top of it by feeding it images from OpenCV? Where is the file that contains the inference code?

  2. On the Jetson, I have used DeepStream to deploy. But I couldn’t find a simple implementation that shows how to deploy the custom model on the Jetson using a Python API. I have the .tlt, .etlt, and .engine files. And how do I view the output as I would with OpenCV’s imshow?

What do you mean by "business logic"?
Do you mean you want to run inference without DeepStream or “tlt ssd inference”?

Yes… I want to be able to run it without DeepStream. I have an incoming stream from OpenCV, and I would like to know how I can feed it to the model and perform the detection.

Refer to Why we need image pre and post processing when deploying a tlt model by using TensorRT? - #5 by Morganh and How to use tlt trained model on Jetson Nano - #3 by Morganh

But I’m developing the application on Ubuntu 20.04, and NVIDIA doesn’t support DeepStream on Ubuntu 20.04. I trained the models on Ubuntu 20.04. I know that the following command performs inference and displays the image output,

!ssd inference --gpu_index=$GPU_INDEX \
    -m $USER_EXPERIMENT_DIR/export/trt.engine \
    -e $SPECS_DIR/ssd_retrain_resnet18_kitti.txt \
    -i /workspace/examples/ssd/test_samples \
    -o $USER_EXPERIMENT_DIR/ssd_infer_images \
    -t 0.4

But I’m not interested in a visual output. I would like to obtain the coordinates so I can write other logic on top of the detected objects. How can I do that? I have been struggling with this for nearly a month.

Hi,
In the output folder $USER_EXPERIMENT_DIR/ssd_infer_images, there are coordinates.
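
If reading them back from disk is enough for your use case, a minimal sketch like the one below can parse them. It assumes the inference step writes KITTI-format .txt label files; the "labels" sub-folder name is a guess for your setup.

# Hedged sketch: parse KITTI-format label files written by the inference step.
# The label directory path and the presence of a trailing score column are assumptions.
import os

def read_kitti_labels(label_dir):
    """Return {image_name: [(class, xmin, ymin, xmax, ymax, score), ...]}."""
    boxes = {}
    for fname in os.listdir(label_dir):
        if not fname.endswith(".txt"):
            continue
        entries = []
        with open(os.path.join(label_dir, fname)) as f:
            for line in f:
                parts = line.split()
                if len(parts) < 8:
                    continue
                cls = parts[0]
                # KITTI columns 4-7 are bbox left, top, right, bottom
                xmin, ymin, xmax, ymax = map(float, parts[4:8])
                score = float(parts[15]) if len(parts) > 15 else None
                entries.append((cls, xmin, ymin, xmax, ymax, score))
        boxes[os.path.splitext(fname)[0]] = entries
    return boxes

# e.g. all_boxes = read_kitti_labels("ssd_infer_images/labels")  # path is a guess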

I mean I want to receive it directly in my code, at inference time. For example, something like this:

bounding_boxes = model.predict(frame)

How do I do that in the Ubuntu 20.04 Docker container that I used to train the model with TLT?

The ssd inference tool is not open source. It can only provide bboxes in the output folder.
For your case, I suggest trying to run inference in one of the ways below:

Thanks! It would have saved me time and effort if I had known earlier that it was not open source, since I wouldn’t have tried training the model with TLT. In the end, we used the onnxruntime-gpu option. It is simple, works easily with OpenCV, and, most importantly, it is very well documented. The performance was better than DeepStream. Considering the pain of configuring DeepStream, plus the closed-source TLT, the TRT conversion, etc., onnxruntime is far more straightforward.


I was having the same issue. The fact that TLT is so hard to integrate into projects (due to being closed source) makes it nearly impossible to use, and the same goes for the bare answers you get from NVIDIA; you need to beg for better, more developed answers… Might as well give ONNX Runtime a try.

Actually, end users can generate a TRT engine based on the etlt model.
If they want to run inference with DeepStream, they can deploy the etlt model or the TRT engine in it.
If they want to run standalone inference with the TRT engine, they can write their own code with the correct preprocessing, postprocessing, etc.
Some examples (a rough standalone-inference sketch follows the links):

( TLT different results - #11 )

( Run PeopleNet with tensorrt )

(Python run LPRNet with TensorRT show pycuda._driver.MemoryError: cuMemHostAlloc failed: out of memory - #8 by Morganh)
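
For reference, below is a rough standalone-inference sketch in the spirit of those threads, not an official tool. It assumes the TensorRT OSS plugins needed by the SSD/NMS layers are installed, that the engine has fixed shapes with binding 0 as the image input, and that "trt.engine" stands in for your engine path; decoding the raw output into boxes depends on the network and is only stubbed.

# Hedged sketch: run the exported trt.engine directly with the TensorRT Python API + pycuda.
import numpy as np
import cv2
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

engine = load_engine("trt.engine")          # placeholder path
context = engine.create_execution_context()

# One host/device buffer per binding (input and outputs).
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.empty(trt.volume(engine.get_binding_shape(i)), dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

def predict(frame_bgr):
    """frame_bgr: OpenCV image (H, W, 3), uint8, BGR. Returns the raw network outputs."""
    in_shape = engine.get_binding_shape(0)  # e.g. (3, H, W) or (1, 3, H, W)
    h, w = int(in_shape[-2]), int(in_shape[-1])
    img = cv2.resize(frame_bgr, (w, h)).astype(np.float32)
    img -= np.array([103.939, 116.779, 123.68], dtype=np.float32)  # per-channel BGR means
    img = img.transpose(2, 0, 1)            # HWC -> CHW
    np.copyto(host_bufs[0], img.ravel())
    cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
    context.execute_v2(bindings)
    outputs = []
    for i in range(engine.num_bindings):
        if not engine.binding_is_input(i):
            cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])
            outputs.append(host_bufs[i].copy())
    return outputs  # decode into (class, score, x1, y1, x2, y2) per the SSD/NMS output layout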


Maybe this should be a new topic, but I was able to create a TRT engine with TLT and I’m getting wrong inferences. I saw another issue related to this but had no success. The preprocessing that TLT does looks like a black box; I tried scaling the image with padding, normalizing, and changing BGR to RGB, but no success. I’m sorry, TensorRT is a great tool, but I’ve been stuck on this for a month with no progress.

I will save you some time. Delete all DeepStream-related dependencies. Train a model with YOLOv4 or YOLOv5, export it to ONNX, install ONNX Runtime with pip install onnxruntime-gpu, and do the inference. It is way faster and less cumbersome. As a bonus, you will save the time spent clicking through all the redirect links NVIDIA posts in their forums as if that might help.

Remember, you should use pip install onnxruntime-gpu. If you use pip install onnxruntime, you get the CPU-only build and inference will be slow.
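
For reference, a minimal sketch of that ONNX Runtime path with an OpenCV frame. The model file name, the 640x640 input size, the 1/255 scaling, and the raw-output handling are placeholders to adapt to your exported model.

# Hedged sketch: ONNX Runtime (GPU) inference on an OpenCV frame.
import cv2
import numpy as np
import onnxruntime as ort

# Falls back to CPU if the CUDA provider is unavailable.
session = ort.InferenceSession(
    "yolo.onnx",  # placeholder model path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
input_name = session.get_inputs()[0].name

def predict(frame_bgr):
    img = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (640, 640)).astype(np.float32) / 255.0  # assumed input size/scaling
    img = np.transpose(img, (2, 0, 1))[np.newaxis]                # NCHW
    outputs = session.run(None, {input_name: img})
    return outputs  # decode boxes/scores/classes per your model's output format

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(predict(frame))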


@damiandeza
Please create a new topic if you run into issues when running your own code against the TRT engine.
The preprocessing is not a black box. For example, for the SSD network, see https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/configs/yolov4_tlt/pgie_yolov4_tlt_config.txt

net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1

That means that in TLT the preprocessing of an SSD image works like this (a Python sketch follows the list):

  • assume RGB input values ranging from 0.0 to 255.0 as float
  • convert from RGB to BGR
  • then subtract 103.939, 116.779, and 123.68 from the B, G, and R channels respectively
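
As a rough illustration, that preprocessing could look like the snippet below for a frame coming from OpenCV (which is already BGR, so only the mean subtraction applies). The resize target and the CHW layout are assumptions based on the usual TLT SSD input.

# Hedged sketch of the preprocessing described above, for a BGR OpenCV frame.
# The offsets come from the config snippet (net-scale-factor=1.0, BGR order);
# input_w/input_h and the CHW transpose are assumptions for a typical SSD input.
import cv2
import numpy as np

BGR_MEANS = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(frame_bgr, input_w, input_h):
    # OpenCV already gives BGR, so no RGB->BGR swap is needed here;
    # the RGB->BGR step above applies when you start from RGB data.
    img = cv2.resize(frame_bgr, (input_w, input_h)).astype(np.float32)
    img -= BGR_MEANS                 # subtract per-channel means; scale factor is 1.0
    return img.transpose(2, 0, 1)    # HWC -> CHW for the network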
