I want to import a .tlt model into my Python code without DeepStream. I just want to run the model directly in my own code; I don't need DeepStream. Is there some sample code I can look at?
Reference: How to use tlt trained model on Jetson Nano
I now know how to convert my .tlt model to a TensorRT .engine file, and I can get the predicted output tensors by following the SSD_Model example. Now I want to know how to parse the bounding boxes. Where can I find the output-parsing logic for detectnet_v2? Can you give me a sample parser for detectnet_v2? I am testing with PeopleNet.
I know the output tensors' layout, but I don't know how to convert the predictions back to real image coordinates.
- output_cov/Sigmoid : a [batchSize, Class_Num, gridcell_h, gridcell_w] tensor containing the coverage (confidence) score of each class for every grid cell, i.e. score_array = (batch_size, class_count, gridH, gridW)
- output_bbox/BiasAdd : a [batchSize, Class_Num * 4, gridcell_h, gridcell_w] tensor containing, for each grid cell, the normalized coordinates of the object's (x1, y1) top-left and (x2, y2) bottom-right corners relative to that grid cell, i.e. bbox_array = (batch_size, class_count * 4, gridH, gridW)
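For reference, here is a minimal NumPy sketch of how these two tensors can be decoded back to pixel coordinates. It assumes the usual detectnet_v2 decode recipe: each grid cell covers `stride` pixels (16 for PeopleNet's 960x544 input with a 60x34 grid), the four bbox channels per class are offsets from the cell center scaled by a `bbox_norm` factor (35.0 is the value commonly used for PeopleNet), and cells whose coverage score exceeds a threshold become detections. The `stride`, `bbox_norm`, `offset`, and `threshold` values here are assumptions you should check against your own model's training config, and the results would still need NMS/clustering afterwards.

```python
import numpy as np

def parse_detectnet_v2(cov, bbox, img_w, img_h,
                       stride=16, bbox_norm=35.0, offset=0.5,
                       threshold=0.4):
    """Decode detectnet_v2 outputs for one image.

    cov  : (num_classes, grid_h, grid_w)      coverage score per cell
    bbox : (num_classes * 4, grid_h, grid_w)  normalized offsets per cell
    Returns a list of (class_id, score, x1, y1, x2, y2) in pixel coords.
    """
    num_classes, grid_h, grid_w = cov.shape
    # Pixel coordinates of every grid-cell center.
    cell_x = (np.arange(grid_w) + offset) * stride          # (grid_w,)
    cell_y = (np.arange(grid_h) + offset) * stride          # (grid_h,)
    cx, cy = np.meshgrid(cell_x, cell_y)                    # (grid_h, grid_w)

    dets = []
    for c in range(num_classes):
        # The four channels per class are offsets from the cell center,
        # scaled by bbox_norm: (x1, y1) to the left/up, (x2, y2) right/down.
        x1 = cx - bbox[c * 4 + 0] * bbox_norm
        y1 = cy - bbox[c * 4 + 1] * bbox_norm
        x2 = cx + bbox[c * 4 + 2] * bbox_norm
        y2 = cy + bbox[c * 4 + 3] * bbox_norm
        # Keep only cells whose coverage score clears the threshold.
        ys, xs = np.where(cov[c] > threshold)
        for y, x in zip(ys, xs):
            dets.append((c, float(cov[c, y, x]),
                         float(np.clip(x1[y, x], 0, img_w)),
                         float(np.clip(y1[y, x], 0, img_h)),
                         float(np.clip(x2[y, x], 0, img_w)),
                         float(np.clip(y2[y, x], 0, img_h))))
    return dets
```

To use it, reshape the flat buffers you get back from TensorRT to `(class_count, gridH, gridW)` and `(class_count * 4, gridH, gridW)` for one batch item, then pass them in along with the network input width and height.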
I have solved my problem using this link: Run PeopleNet with tensorrt .