Please provide the following information when requesting support.
• Hardware (Quadro GV100)
• Network Type (SSD)
I trained a model with TAO 5.3.0 using the SSD ResNet-18 backbone on the KITTI dataset, and pruned the model using the default spec files provided in tao_getting_started-5.3/notebooks/tao_launcher_starter_kit/.
When I export the trained model to ONNX format, I cannot export it without the post-processing node. Could you please help me? I want to run the exported model with plain onnxruntime (if possible).
Thank you. The other question I have is: why do I always get two sets of values for the anchors? For example, I retrained an SSD model with a ResNet-18 backbone and have three outputs: conf_data, loc_data, and anchor_data. Their shapes are [1,4,1,6825] for conf_data (if I cut just after the softmax), [1,6825,1,4] for loc_data (if I cut just after the concat), and [1,6825,2,4] for the anchor boxes (if I cut just after the reshape and concat).
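In the standard SSD formulation, a size-2 dimension on the anchor output usually holds the anchor coordinates in one slice and the box-coding variances in the other; whether the TAO export follows that layout is an assumption you should verify against your model. A minimal sketch of the usual decoding under that assumption (function name and slicing are hypothetical):

```python
import numpy as np

# Hypothetical decode step, assuming anchor_data[0, i, 0, :] = (cx, cy, w, h)
# of anchor i and anchor_data[0, i, 1, :] = the four variances. Verify the
# layout against your own export before relying on this.
def decode_boxes(loc_data, anchor_data):
    loc = loc_data[0, :, 0, :]           # (N, 4) predicted offsets
    anchors = anchor_data[0, :, 0, :]    # (N, 4) anchors as (cx, cy, w, h)
    variances = anchor_data[0, :, 1, :]  # (N, 4) per-coordinate variances
    cx = anchors[:, 0] + loc[:, 0] * variances[:, 0] * anchors[:, 2]
    cy = anchors[:, 1] + loc[:, 1] * variances[:, 1] * anchors[:, 3]
    w = anchors[:, 2] * np.exp(loc[:, 2] * variances[:, 2])
    h = anchors[:, 3] * np.exp(loc[:, 3] * variances[:, 3])
    # Convert center format to corner format (xmin, ymin, xmax, ymax)
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=-1)
```

With zero predicted offsets this returns the anchors themselves in corner format, which is a quick sanity check for the layout.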
I tried playing with the configuration by changing the parameter “two_boxes_for_ar1: false”, but it did not change anything. Also, could you please help me understand the preprocessing? I have the default configuration:
augmentation_config {
  output_width: 256
  output_height: 256
  output_channel: 3
}
Does that mean it is just resizing the images and applying the mean and std of the ImageNet dataset?
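If that is the case, the per-image preprocessing would look roughly like the sketch below. This is a hedged illustration: the exact resize interpolation, channel order, and normalization constants depend on the TAO spec, and the ImageNet mean/std values used here are the common torchvision ones, assumed rather than confirmed.

```python
import numpy as np

# Assumed ImageNet statistics (torchvision convention) -- verify against the spec.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8, out_h=256, out_w=256):
    """Resize to the network input size, normalize, and reorder to NCHW."""
    h, w, _ = image_hwc_uint8.shape
    # Nearest-neighbour resize via index sampling; use cv2/PIL in practice
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    resized = image_hwc_uint8[rows][:, cols].astype(np.float32) / 255.0
    normalized = (resized - IMAGENET_MEAN) / IMAGENET_STD
    # HWC -> CHW, then add the batch dimension
    return normalized.transpose(2, 0, 1)[np.newaxis, ...]
```

Reproducing this step bit-for-bit in C is exactly what the golden-input comparison below is meant to validate.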
I am not sure why you want to export without the post-processing node. After you have exported the onnx file, you can run tao deploy ssd evaluate to check that it works.
As mentioned above, you can remove the last NMSDynamic_TRT node.
For yolo_v4_tiny preprocessing, please refer to link.
The reason I want to remove the NMS part is that I want to deploy the model on an MCU with my own pre- and post-processing after export. I have cut the NMS layer, which leaves me with three outputs: [1,4,1,6825] for conf_data (cut just after the softmax), [1,6825,1,4] for loc_data (cut just after the concat), and [1,6825,2,4] for the anchor boxes (cut just after the reshape and concat). I want to check the equivalence between my Python and C pre- and post-processing, which is why I am trying to understand their details.
You can run inference against one test image. If the inference result is correct, save the input to an .npy file.
This npy file will be the golden input for you to debug your modified onnx file.
You can use polygraphy to debug both onnx file and tensorrt engine.
For onnxruntime,
$ /usr/local/bin/polygraphy run xxx.onnx --onnxrt --data-loader-script data_loader.py --save-inputs inputs.json --onnx-outputs mark all --save-outputs onnx_output.json
For tensorrt engine,
$ /usr/local/bin/polygraphy run xxx.onnx --trt --data-loader-script data_loader.py --trt-outputs mark all --save-outputs trt_output.json --save-engine test.engine
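Once both JSON files are saved, the per-tensor outputs of the two runs can be compared against a tolerance. Polygraphy can also compare runs for you, but as a plain illustration of the check involved (the function and tensor names here are hypothetical, and the outputs are shown as plain arrays):

```python
import numpy as np

# Hypothetical tolerance check between two runs' outputs (e.g. onnxruntime vs
# TensorRT), given as dicts mapping output tensor names to numpy arrays.
def compare_outputs(ref, test, rtol=1e-3, atol=1e-4):
    mismatched = {}
    for name, ref_arr in ref.items():
        if not np.allclose(ref_arr, test[name], rtol=rtol, atol=atol):
            # Record the worst absolute difference for this output
            mismatched[name] = float(np.abs(ref_arr - test[name]).max())
    return mismatched  # an empty dict means the runs agree within tolerance
```

The same check works for comparing your modified onnx file against the original, output by output, using the golden input described above.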
BTW, to save an npy file:
with open('xxx.npy', 'wb') as f:
    np.save(f, infer_batch)
And data_loader.py is as below.
import numpy as np

def load_data():
    with open('xxx.npy', 'rb') as f:
        data = np.load(f)
    # Polygraphy expects load_data() to yield feed dicts; replace "Input"
    # with your model's actual input tensor name.
    yield {"Input": data}
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.