I downloaded the ONNX file from ppocr-onnx/ppocronnx/model at main · triwinds/ppocr-onnx · GitHub and managed to build a TensorRT engine with the command below:
/usr/src/tensorrt/bin/trtexec --onnx=ch_PP-OCRv3_det_infer.onnx --saveEngine=a.int8.engine --int8 --minShapes=x:1x3x16x16 --optShapes=x:5x3x32x32 --maxShapes=x:10x3x640x420
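Since --int8 without a calibration cache only gives trtexec placeholder dynamic ranges, I also want to compare against an FP16 build. This is just my own sketch with the same shape profile, only the precision flag changed:

/usr/src/tensorrt/bin/trtexec --onnx=ch_PP-OCRv3_det_infer.onnx --saveEngine=a.fp16.engine --fp16 --minShapes=x:1x3x16x16 --optShapes=x:5x3x32x32 --maxShapes=x:10x3x640x420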
I am trying to run it with this pipeline:
gst-launch-1.0 uridecodebin uri=file:///home/nvidia/2.mp4 ! mx.sink_0 nvstreammux width=1280 height=720 batch-size=1 name=mx ! nvinfer config-file-path=./ppocr.txt ! nvvideoconvert ! nvdsosd process-mode=1 display-text=1 ! nvegltransform ! nveglglessink
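To narrow things down, I am also planning to run a stripped-down variant that drops the display elements and raises the nvinfer log level (fakesink and the debug category name here are my own assumptions, just for debugging):

GST_DEBUG=nvinfer:5 gst-launch-1.0 uridecodebin uri=file:///home/nvidia/2.mp4 ! mx.sink_0 nvstreammux width=1280 height=720 batch-size=1 name=mx ! nvinfer config-file-path=./ppocr.txt ! fakesink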
ppocr.txt (546 Bytes)
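For context, the attached ppocr.txt follows the standard nvinfer config-file format. A trimmed-down sketch of the kind of settings it contains is below; the real values are in the attachment, and the output blob name, parser function, and custom library path are placeholders, since I am not sure yet how the detector's output map should be parsed:

[property]
gpu-id=0
model-engine-file=a.int8.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=1
# 0=detector
network-type=0
num-detected-classes=1
gie-unique-id=1
process-mode=1
infer-dims=3;32;32
# placeholder names below, not the actual values from my file
output-blob-names=sigmoid_0.tmp_0
parse-bbox-func-name=NvDsInferParseCustomPPOCRDet
custom-lib-path=libnvds_ppocr_det_parser.so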
However, I am not seeing any text detected with the pipeline above. Am I missing anything? My end use case is to run this as the secondary model in a back-to-back detector setup, but I couldn't even get it working as a primary detector first for some reason.
• Hardware Platform (Jetson / GPU): Xavier AGX
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1
• TensorRT Version: 8.5
• Issue Type (questions, new requirements, bugs): question