Hello,
I’ve recently trained an action recognition model in the TAO Launcher kit sample notebook using custom data. I’ve successfully exported the model to ONNX format using this command:
but I got the following error while parsing the model layers:
[08/30/2023-09:30:56] [V] [TRT] Registering layer: /If_OutputLayer for ONNX node: /If_OutputLayer
[08/30/2023-09:30:56] [E] Error[4]: /If_OutputLayer: IIfConditionalOutputLayer inputs must have the same shape.
I checked the ONNX model using the Netron app and found that the “then” and “else” branches of the “If” node have different output shapes, which is currently not supported in TensorRT. I find this error quite strange, since I followed the Action Recognition documentation exactly and still got it.
I tried to work around this error by running the model through onnxsim, which simplified the “If” node, but the simplified model’s results differ from the .tlt version, so it is no longer reliable. Any suggestions on how I could solve this?
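For reference, the onnxsim workaround was along these lines (the model filenames here are placeholders, not the actual paths used):

```shell
# Install the ONNX simplifier into the current Python environment
pip install onnxsim

# Constant-fold the graph; this collapses the data-dependent "If" node.
# "model.onnx" and "model_sim.onnx" are placeholder filenames.
onnxsim model.onnx model_sim.onnx
```

onnxsim prints a before/after operator count, which makes it easy to confirm the “If” node was removed.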
• Hardware: RTX 3060 Driver version 535.86
• Network Type: ActionRecognitionNet
• TLT Version: TAO 5.0.0-deploy
• TensorRT version: 8.5.3 + CUDA 12
I did try FP16 precision and still got the same error. The problem seems to be with model parsing: TensorRT doesn’t support an “If” node whose branches have different shapes. So it is related to the model architecture, which differs from the original pretrained model used in the ActionRecognitionNet example.
Thank you for your reply @Morganh. I’ve tried this solution and successfully generated the engine. It seems the approach I took was the right one.
The simplified model had nothing to do with the unreliable results; it was the preprocessing configuration in DeepStream that affected the output. I tried running inference directly with the engine in a script, and the results were similar to the .tlt version.
Also, can you confirm that simplifying the onnx model doesn’t affect its performance?
Thank you for your help.
There is no update from you for a period, so we are assuming this is no longer an issue. Hence we are closing this topic. If you need further support, please open a new one. Thanks
The above-mentioned solution from my last comment addresses an export regression caused by a PyTorch upgrade: the squeeze operation in torch leads to an If node in the ONNX model, which cannot be parsed correctly by TensorRT.
For performance, you can run trtexec to check the FPS.
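A minimal sketch of such a check, assuming the engine file path from the earlier steps (the filename is a placeholder):

```shell
# Benchmark an already-built engine; trtexec reports throughput
# (queries per second) and per-inference latency in its summary.
trtexec --loadEngine=model.engine

# Alternatively, build and benchmark directly from the simplified ONNX:
trtexec --onnx=model_sim.onnx --saveEngine=model.engine --fp16
```

Comparing the reported throughput of engines built from the original and the simplified ONNX is a straightforward way to verify that simplification did not hurt performance.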