Thanks for your reply.
I followed exactly that NVIDIA article: first converting the .pb weights to .onnx format, and then building the .plan/.engine from the .onnx file for TensorRT inference.
My model does violence detection with binary labels, 0 or 1 (fight vs. non-fight). It analyzes a window of 20 frames during inference and predicts output probabilities with a softmax. However, when I run inference with the .plan/.engine file, it always predicts class 0 (non-fight), even when there are clear fight frames in the test video.
So I was wondering how the TensorRT model can give worse results than I was getting with Keras, which is why I raised this issue on the forums.
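For reference, this is roughly how I am comparing the two outputs for the same 20-frame clip. The softmax here is a standard NumPy reimplementation, and the logits at the bottom are dummy placeholders, not my real model outputs:

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class axis
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

def compare_predictions(keras_logits, trt_logits):
    """Compare class probabilities from the Keras model and the
    TensorRT engine for the same input clip."""
    p_keras = softmax(np.asarray(keras_logits, dtype=np.float32))
    p_trt = softmax(np.asarray(trt_logits, dtype=np.float32))
    same_class = int(np.argmax(p_keras)) == int(np.argmax(p_trt))
    max_diff = float(np.max(np.abs(p_keras - p_trt)))
    return same_class, max_diff

# dummy logits standing in for the real Keras / TensorRT outputs
same, diff = compare_predictions([1.2, 3.4], [1.1, 3.5])
```

With this kind of check on real outputs, a large `diff` (or a class flip) on the same preprocessed input would point to the conversion or engine build rather than my test data.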
Can you help me with this? I would really appreciate it, as this model needs to be deployed to production on an NVIDIA Jetson Nano, and I have to pass the test cases in the testing phase before deploying it.