I would like to know which packages support deploying models generated by the Transfer Learning Toolkit.
I can find plenty of material and support related to DeepStream integration, but I couldn't find any other model deployment strategies.
Could you please provide information related to that?
The officially released TLT user guide tells end users how to deploy an .etlt model or TRT engine with DeepStream.
Apart from DeepStream, users can set up a standalone inference path against the TRT engine. You can search this forum for similar info.
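For example, a minimal standalone sketch with the TensorRT Python bindings and pycuda could look like the one below. It assumes a detectnet_v2-style implicit-batch engine built by tlt-converter with batch size 1, a preprocessed float32 CHW input, and that binding 0 is the input; adapt paths and shapes to your model.

import numpy as np
import pycuda.autoinit  # initializes the CUDA driver context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(engine_path):
    # Deserialize a .engine file produced by tlt-converter
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer(engine, input_array):
    # One pinned host buffer and one device buffer per binding
    bindings, host_bufs, dev_bufs = [], [], []
    for i in range(engine.num_bindings):
        dtype = trt.nptype(engine.get_binding_dtype(i))
        host = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(i)), dtype)
        dev = cuda.mem_alloc(host.nbytes)
        bindings.append(int(dev))
        host_bufs.append(host)
        dev_bufs.append(dev)
    np.copyto(host_bufs[0], input_array.ravel())  # assumes binding 0 is the input
    with engine.create_execution_context() as context:
        cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
        context.execute(batch_size=1, bindings=bindings)  # implicit-batch engine
        for host, dev in zip(host_bufs[1:], dev_bufs[1:]):
            cuda.memcpy_dtoh(host, dev)
    return host_bufs[1:]  # raw output arrays, e.g. the coverage and bbox maps

This is only a sketch of the pattern the official TensorRT Python samples use; pre- and post-processing still have to match what the model was trained with.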
@Morganh , thank you very much for your response. I have tried to find solutions, but I couldn't find any material. Could you please share the details from your end?
There are some materials, odds and ends.
Hi. I'm trying to run inference on the pruned PeopleNet model using TensorRT, but I always get zero coverage output. So I downloaded the PeopleNet TLT model using the command from this topic: How to run tlt-converter
After that I converted the .etlt to an engine using the following command:
./tlt-converter /home/bronstein/tlt-experiments/resnet34_peoplenet_pruned.etlt -k tlt_encode -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,544,960 -i nchw -e /home/bronstein/tlt-experiments/engine/peoplenet.engine -m 1 -t fp16
Then I tried t…
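For what it is worth, zero coverage after a successful conversion often points at preprocessing that does not match training (scaling, channel order, CHW layout). A quick sanity check, assuming cov holds the raw coverage output before any thresholding:

import numpy as np

# If the raw maximum is essentially zero, suspect the input pipeline
# (scale, RGB vs BGR, planar layout) rather than the engine itself.
print("max coverage:", float(np.max(cov)))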
I'm doing inference with a YOLOv3 TensorRT engine converted by tlt-converter, but I found that the inference results from the TensorRT engine and from tlt-infer are different. I think that might be due to differences in the pre-processing stage. Since I cannot get access to the pre-processing part of tlt-infer, I've attached below that part for my TensorRT engine:
frame = cv2.imread(img_path)
reso = (416, 416)
ratio_h0, ratio_w0 = 416 / frame.shape[0], 416 / frame.shape[1]
frame = cv2…
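For comparison, here is a sketch of the preprocessing that the matching DeepStream config implies for TLT YOLOv3 (net-scale-factor=1.0, offsets 103.939;116.779;123.68, BGR). The plain resize and the mean values are assumptions taken from that config, so verify them against your own spec file:

import cv2
import numpy as np

def preprocess_yolov3(img_path, net_w=416, net_h=416):
    # Assumption: plain resize (no letterbox), BGR channel order, and
    # Caffe-style per-channel mean subtraction with scale 1.0.
    frame = cv2.imread(img_path)            # BGR, HWC, uint8
    resized = cv2.resize(frame, (net_w, net_h))
    x = resized.astype(np.float32)
    x -= np.array([103.939, 116.779, 123.68], dtype=np.float32)  # B, G, R means
    x = x.transpose(2, 0, 1)                # HWC -> CHW
    return np.ascontiguousarray(x[None])    # add batch dimension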
Hello,
I am using a Jetson Nano with JetPack 4.4 (recommended in the forum) and I want to use NVIDIA's purpose-built model PeopleNet in my own application (not DeepStream).
For clarity: I downloaded PeopleNet's pruned model from the NGC container (peoplenet pruned model), used the TLT converter downloaded for Jetson, and converted the .etlt file to a TRT engine successfully. I am able to use this generated engine file in deepstream-app, but I can't find any references on how to use it in my custom application. …
We are trying to run the TrafficCamNet pruned model with TensorRT, without DeepStream.
First, I used a tlt-converter command adapted from here to generate a TensorRT engine:
tlt-converter resnet18_trafficcamnet_pruned.etlt \
    -k tlt_encode \
    -c trafficnet_int8.txt \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -d 3,544,960 \
    -i nchw \
    -e trafficnet_int8.engine \
    -m 1 -t int8 -b 1
Then I ran the following code:
import cv2
import pycuda.autoinit # This is needed for initializing CUDA driver
impor…
Hello,
We are trying to run the PeopleNet pruned model with TensorRT 7, without DeepStream.
Where can we find good post-processing code for a DetectNet model?
We have tried some post-processing functions:
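One starting point is the grid decoding that the DeepStream sample bbox parser uses for DetectNet_v2-style models; a Python sketch follows. The stride (16) and bbox normalization (35.0) are the values those samples use but are assumptions here, and a clustering step (NMS or DBSCAN, as DeepStream does) still has to follow:

import numpy as np

def decode_detectnet_v2(cov, bbox, stride=16.0, bbox_norm=35.0, threshold=0.4):
    # cov:  (num_classes, H, W) coverage map, H = input_h / 16, W = input_w / 16
    # bbox: (num_classes * 4, H, W) box regression map
    # Returns boxes in network input coordinates; rescale to the original
    # image and cluster (NMS/DBSCAN) afterwards.
    num_classes, grid_h, grid_w = cov.shape
    cx = (np.arange(grid_w) * stride + 0.5) / bbox_norm  # grid cell centers
    cy = (np.arange(grid_h) * stride + 0.5) / bbox_norm
    dets = []
    for c in range(num_classes):
        ys, xs = np.where(cov[c] > threshold)
        for y, x in zip(ys, xs):
            x1 = (bbox[c * 4 + 0, y, x] - cx[x]) * -bbox_norm
            y1 = (bbox[c * 4 + 1, y, x] - cy[y]) * -bbox_norm
            x2 = (bbox[c * 4 + 2, y, x] + cx[x]) * bbox_norm
            y2 = (bbox[c * 4 + 3, y, x] + cy[y]) * bbox_norm
            dets.append((c, float(cov[c, y, x]), x1, y1, x2, y2))
    return dets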
I am using the TLT pre-trained model DashCamNet cache file (the weights and config file from the DeepStream SDK) for inference using TensorRT. The issue is that the bounding boxes are off from the regions of interest.
Here’s the code snippet that I am using for preprocessing and parsing the output:
float net_scale_factor = 0.0039215697906911373;
cv::cvtColor(frame,frame,CV_BGR2RGB);
int kINPUT_C = 3, mHeight = 544, mWidth = 960;
uint8_t buffer[kINPUT_C * mHeight * mWidth];
int nCols = mWidth* kINPU…
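In case it helps: DeepStream preprocesses these detector models as y = net_scale_factor * (x - mean) on planar RGB data, and for DashCamNet the mean is zero with scale 1/255, as in the snippet above. A minimal Python equivalent (the values are assumptions to verify against your DeepStream config):

import cv2
import numpy as np

def preprocess_dashcamnet(frame_bgr, net_w=960, net_h=544):
    # y = net_scale_factor * x with net_scale_factor = 1/255, RGB order,
    # planar CHW layout; mean offsets assumed zero.
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (net_w, net_h))
    x = resized.astype(np.float32) * 0.0039215697906911373  # 1/255
    return np.ascontiguousarray(x.transpose(2, 0, 1)[None])  # NCHW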
Hi all,
I want to use PeopleNet and train it with my own custom dataset. After training, how do I use this model in my custom pipeline?
Please suggest how to achieve this.
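To tie the earlier sketches in this thread together: export the trained .tlt to .etlt with tlt-export, build an engine with tlt-converter (as in the commands quoted above), then run the engine in your own pipeline. A hypothetical glue sketch reusing the helpers from the earlier posts (load_engine/infer, preprocess_dashcamnet, decode_detectnet_v2); PeopleNet also takes a 3x544x960 input, and the output binding order is assumed to follow the -o list output_cov/Sigmoid,output_bbox/BiasAdd:

import cv2

engine = load_engine("peoplenet.engine")   # built by tlt-converter
frame = cv2.imread("test.jpg")
inp = preprocess_dashcamnet(frame)         # (1, 3, 544, 960) float32
cov_raw, bbox_raw = infer(engine, inp)     # raw flat output buffers
cov = cov_raw.reshape(3, 34, 60)           # 3 classes, 544/16 x 960/16 grid
bbox = bbox_raw.reshape(12, 34, 60)
for cls, score, x1, y1, x2, y2 in decode_detectnet_v2(cov, bbox):
    print(cls, score, x1, y1, x2, y2)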
Hi all,
I have some questions about training resnet18 detectnet_v2 on my own custom dataset.
1- Do I need to convert my own dataset to the KITTI dataset format? If so, do I first need to resize the images and boxes offline to a fixed size like the KITTI dataset size?
2- For converting the dataset to TFRecords, tlt-tfrecord-converter expects 16 fields in the label text files. Should I fill the fields other than x, y, w, h and class id with some value like zeros? (See the sample label line after this list.)
3- In the detectnet_v2_train_resnet18_kitti.txt, t…
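On question 2: standard KITTI labels carry 15 space-separated fields (16 with the optional score), only the class name and the bbox matter for detection training, and note the bbox is xmin ymin xmax ymax in pixels rather than x, y, w, h. The remaining fields can be zero-filled, e.g. (values illustrative):

car 0.00 0 0.00 100.00 120.00 260.00 297.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00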
How do I run inference in Python on a TRT engine trained using TLT, without DeepStream?
Let's say I have trained DetectNet_v2 + ResNet50 using TLT.
Thanks
I used the Transfer Learning Toolkit to train resnet18 detectnet_v2 on a custom dataset. I then used tlt-converter to convert the .etlt model into an .engine file. I am able to deploy both the .etlt and the .engine file in DeepStream, and it works.
But now I need to deploy the model in Python, and I can't find how to load the .engine model in Python.
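Loading the .engine in Python comes down to deserializing it with the TensorRT runtime; a minimal sketch (the filename is illustrative, and buffer allocation plus execution then follow the usual pycuda pattern shown earlier in this thread):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open("resnet18_detectnet_v2.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()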
Hi, I trained an object-detection model in TLT following the examples/detectnet_v2 ipynb. Training, pruning, inference, and export all work well, and I got the .tlt/.etlt files. How can I use these files on a Jetson Nano without DeepStream? Is there a Python solution like detectnet_console.py in jetson-inference? Thanks.
@Morganh , Thank you for the details. I will follow these sources.
This is really helpful.