Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson Xavier
• DeepStream Version: 6.3.0
• JetPack Version: 5.1.2
• TensorRT Version: 8.5.2.2
• Issue Type: questions/bugs
Hi,
I am trying to accelerate deep edge detection with HED (Holistically-Nested Edge Detection) using TensorRT on my Jetson Xavier. I use the PyTorch implementation from pytorch-hed, export the model to ONNX with a fixed input of 1x3x320x320 and output of 1x1x320x320, and then build the engine file with the trtexec command line. Here is the command: trtexec --onnx=HED.onnx --saveEngine=HED.engine --fp16 --noDataTransfers --useSpinWait
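To rule out a preprocessing mismatch between the PyTorch pipeline and the TensorRT one, here is a minimal NumPy sketch of the classic Caffe-style HED preprocessing (BGR channel order, per-channel mean subtraction, no scaling to [0, 1]). The mean values are the well-known Caffe ones; verify them against what the pytorch-hed export actually does, since a mismatch here can produce shifted or black outputs:

```python
import numpy as np

# Classic Caffe BGR means used by the original HED model -- an assumption,
# check them against your pytorch-hed preprocessing.
BGR_MEANS = np.array([104.00699, 116.66877, 122.67892], dtype=np.float32)

def preprocess(image_rgb_uint8):
    """RGB uint8 HxWx3 -> 1x3x320x320 float32 NCHW in BGR, mean-subtracted."""
    bgr = image_rgb_uint8[:, :, ::-1].astype(np.float32)  # RGB -> BGR
    bgr -= BGR_MEANS                                      # per-channel mean subtraction
    chw = np.transpose(bgr, (2, 0, 1))                    # HWC -> CHW
    return chw[np.newaxis]                                # add batch dimension

dummy = np.zeros((320, 320, 3), dtype=np.uint8)
print(preprocess(dummy).shape)  # (1, 3, 320, 320)
```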
At inference time my results are shifted toward the bottom right, although I do not get this problem with the PyTorch implementation. I know that the old Caffe model used with the OpenCV implementation adds a CropLayer, but I do not understand why I get a shifted output edge image when I am using the PyTorch implementation. I would be glad for help on how to fix this problem.
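For reference, the Caffe CropLayer mentioned above can be sketched in NumPy as follows. After bilinear upsampling, each HED side output is slightly larger than the input and offset, and the CropLayer cuts it back so the edges realign; the offsets below are illustrative assumptions, not values taken from the actual model:

```python
import numpy as np

def crop_like(feature, reference, offset_h, offset_w):
    """Crop `feature` (N, C, H, W) to the spatial size of `reference`,
    starting at the given offsets -- mirroring Caffe's CropLayer."""
    _, _, h, w = reference.shape
    return feature[:, :, offset_h:offset_h + h, offset_w:offset_w + w]

upsampled = np.zeros((1, 1, 328, 328), dtype=np.float32)  # oversized side output
image = np.zeros((1, 3, 320, 320), dtype=np.float32)      # network input
aligned = crop_like(upsampled, image, offset_h=4, offset_w=4)
print(aligned.shape)  # (1, 1, 320, 320)
```

If the exported ONNX graph lacks an equivalent realignment (or the padding differs from the PyTorch model), that could explain a constant bottom-right shift.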
But now, when I try to run C++ inference, I get the following error, although I specify maxBatch=1:
Engine loaded from /home/uvision/HED_TRT_PROJECT/aux_folder/HED_Caffe.engine
3: [executionContext.cpp::enqueueV3::2381] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueV3::2381, condition: !mEngine.hasImplicitBatchDimension(). EnqueueV3 is not supported for network created with implicit batch dimension.
)
and I get a black image. When I use the ONNX file directly I do succeed in getting an output image, just with a shift. I would be glad for your opinion.
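One thing worth checking for the black image: the 1x1x320x320 output is an edge probability map in [0, 1], and writing it out without scaling to [0, 255] shows up as all black. A minimal NumPy sketch of the postprocessing, assuming the output really is in [0, 1]:

```python
import numpy as np

def to_edge_image(output):
    """Convert a 1x1x320x320 edge probability map in [0, 1] to an
    8-bit grayscale image suitable for saving or display."""
    prob = output.reshape(320, 320)
    prob = np.clip(prob, 0.0, 1.0)   # guard against numeric overshoot
    return (prob * 255.0).astype(np.uint8)

fake = np.full((1, 1, 320, 320), 0.5, dtype=np.float32)  # dummy network output
img = to_edge_image(fake)
print(img.dtype, img.max())  # uint8 127
```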
In general, I am not sure the problem is with the ONNX file. Attaching my results, my code, and the ONNX file (last time):