Hi all,
In the TensorFlow framework, once a .pb file has been generated, we can easily run the model: import the .pb file, specify the input/output names, and run it. This procedure is the same for all .pb models. For the .engine models generated with TLT, is there an equivalent way to do inference with TensorRT? If so, please share some Python code for this.
You can do inference with a TRT engine directly; a minimal Python sketch is given after the blob-name lists below.
See Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation
Output blob names:
For classification: predictions/Softmax
For DetectNet_v2: output_bbox/BiasAdd, output_cov/Sigmoid
For FasterRCNN: dense_class_td/Softmax, dense_regress_td/BiasAdd, proposal
For SSD, DSSD, RetinaNet: NMS
For YOLOv3: BatchedNMS
Input blob names:
For classification: input_1
For DetectNet_v2: input_1
For FasterRCNN: input_image
For SSD, DSSD, RetinaNet: Input
For YOLOv3: Input
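For reference, here is a minimal sketch of standalone inference with the TensorRT Python API and pycuda, analogous to the .pb workflow (deserialize the engine, look up bindings by name, run). It assumes a TensorRT 8.x-style bindings API and an explicit-batch engine with static shapes; `model.engine` is a placeholder path, and the blob names used are DetectNet_v2's from the lists above — substitute your model's names.

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates and activates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
ENGINE_PATH = "model.engine"  # placeholder: path to your TLT-generated engine

# Deserialize the engine -- the TensorRT analogue of importing a .pb file.
with open(ENGINE_PATH, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding (inputs and outputs).
bindings, host_bufs, dev_bufs = [], {}, {}
for i in range(engine.num_bindings):
    name = engine.get_binding_name(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    size = trt.volume(engine.get_binding_shape(i))  # assumes static shapes
    host_bufs[name] = cuda.pagelocked_empty(size, dtype)
    dev_bufs[name] = cuda.mem_alloc(host_bufs[name].nbytes)
    bindings.append(int(dev_bufs[name]))

# Fill the input; "input_1" is DetectNet_v2's input blob (see the list above).
in_shape = engine.get_binding_shape(engine.get_binding_index("input_1"))
image = np.random.rand(*in_shape).astype(np.float32)  # stand-in for a real image
np.copyto(host_bufs["input_1"], image.ravel())

# Copy in, execute, copy out -- the same import / name / run pattern as .pb.
stream = cuda.Stream()
cuda.memcpy_htod_async(dev_bufs["input_1"], host_bufs["input_1"], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for out in ("output_bbox/BiasAdd", "output_cov/Sigmoid"):  # DetectNet_v2 outputs
    cuda.memcpy_dtoh_async(host_bufs[out], dev_bufs[out], stream)
stream.synchronize()

print(host_bufs["output_cov/Sigmoid"][:10])  # raw coverage scores before decoding
```

Note that real inference also needs model-specific preprocessing (resize, mean/scale normalization) and postprocessing (bbox decoding, NMS parsing); the DeepStream integration linked above handles those steps for you.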