PyTorch FCN-ResNet50 --> ONNX --> TensorRT

Description

When trying to execute an ONNX model converted from

torch.hub.load('pytorch/vision:v0.10.0', 'fcn_resnet50', pretrained=True)

I get the following error:

[TensorRT] ERROR: 3: [executionContext.cpp::enqueueInternal::322] Error Code 3: Internal Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueInternal::322, condition: bindings != nullptr
)
I adapted the code from the TensorRT/samples directory; my very basic inference setup is below.
Building stream
import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit  # creates and activates a CUDA context

# Host buffers: one 720x1280 input image (NCHW) and the 21-class output map
input_batch = np.empty([1, 3, 720, 1280], dtype=np.float16)
output = np.empty([1, 21, 720, 1280], dtype=np.float16)

# Device buffers of matching size
d_input = cuda.mem_alloc(1 * input_batch.nbytes)
d_output = cuda.mem_alloc(1 * output.nbytes)

# Device pointers in binding order: input first, then output
bindings = [int(d_input), int(d_output)]
stream = cuda.Stream()
Building context
import tensorrt as trt

runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
with open("fcn-resnet50-11.trt", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
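
Since the enqueue error points at the bindings parameter, a mismatch between the hardcoded list above and the engine's actual bindings is one plausible cause: torchvision's FCN forward returns both 'out' and 'aux' outputs, and if both survive the export the engine has three bindings rather than two. Below is a minimal sketch, continuing from the snippets above and assuming the TensorRT 8.x Python API and a static-shape engine, that enumerates the bindings instead of hardcoding them:

bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)                     # assumes no dynamic (-1) dims
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)  # pinned host buffer for async copies
    dev = cuda.mem_alloc(host.nbytes)                       # matching device buffer
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))                               # enqueue wants raw device pointers
    kind = "input" if engine.binding_is_input(i) else "output"
    print(i, kind, engine.get_binding_name(i), tuple(shape), dtype)

Printing the bindings this way also makes any count, shape, or dtype mismatch with the numpy buffers above immediately visible.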

Prediction Part
def predict(batch):
    # Copy the input to the device, run inference, copy the result back,
    # all queued on the same stream, then wait for completion.
    cuda.memcpy_htod_async(d_input, batch, stream)
    context.execute_async_v2(bindings, stream.handle, None)
    cuda.memcpy_dtoh_async(output, d_output, stream)
    stream.synchronize()
    return output
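
For context, a call would look roughly like this; the preprocessing constants are an assumption (torchvision's standard ImageNet normalization), not taken from the original script:

frame = np.random.rand(720, 1280, 3).astype(np.float32)   # stand-in HWC image in [0, 1]
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # assumed ImageNet statistics
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
chw = ((frame - mean) / std).transpose(2, 0, 1)           # HWC -> CHW
batch = np.ascontiguousarray(chw[None], dtype=np.float16) # add batch dim, match buffer dtype
seg = predict(batch)                                      # (1, 21, 720, 1280) class scores
classes = seg.argmax(axis=1)                              # per-pixel class ids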

I can execute resnet18_fcn.onnx, taken from the jetson-inference module, with the same code.
I also tried the models from models/vision/object_detection_segmentation/fcn in the onnx/models GitHub repo, but I could not get them to execute properly.

Environment

TensorRT Version: 8.0.3.4
GPU Type: GTX 1660 Ti
Nvidia Driver Version: 465
CUDA Version: 11.3.1
CUDNN Version: 8.2.1
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8
PyTorch Version (if applicable): 1.10.1

Steps To Reproduce

Export FCN-ResNet50 from Torch Hub to ONNX and convert it to a TensorRT engine (a minimal export sketch follows below).
Run it with the code above.
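
The export sketch, assuming a standard torch.onnx.export call (the original export arguments were not shared). torchvision's FCN returns a dict, so a wrapper keeps only the main output; without it the exported model can carry both 'out' and 'aux' outputs:

import torch

class FCNWrapper(torch.nn.Module):
    """Keeps only the 'out' head so the ONNX model has a single output."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        return self.model(x)["out"]

model = torch.hub.load('pytorch/vision:v0.10.0', 'fcn_resnet50', pretrained=True).eval()
dummy = torch.randn(1, 3, 720, 1280)  # fixed 720x1280 input, matching the buffers above
torch.onnx.export(
    FCNWrapper(model), dummy, "fcn-resnet50.onnx",
    opset_version=11,                 # assumption: an opset TensorRT 8 parses
    input_names=["input"], output_names=["out"],
)

The engine can then be built with trtexec (see the suggestion below), e.g. trtexec --onnx=fcn-resnet50.onnx --saveEngine=fcn-resnet50.trt.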

Hi,
Please share the ONNX model and the conversion script, if not already shared, so that we can assist you better.
Meanwhile, you can try a few things:

1. Validate your model with the snippet below.

check_model.py

import onnx

model = onnx.load("your_model.onnx")  # path to your ONNX model
onnx.checker.check_model(model)       # raises onnx.checker.ValidationError if the model is invalid
2. Try running your model with the trtexec command, e.g. trtexec --onnx=your_model.onnx --verbose.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing issues, please share the trtexec "--verbose" log for further debugging.
Thanks!


onnx.checker.check_model(model) returns None. I checked different ONNX files which I had successfully run on the TRT backend with onnx.checker and got the same result. What is expected from onnx.checker.check_model?
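
From what I can tell (an assumption based on the onnx.checker documentation, not something stated in the reply above), check_model returns None when the model is structurally valid and raises onnx.checker.ValidationError otherwise, so silence simply means the check passed:

import onnx

try:
    onnx.checker.check_model(onnx.load("your_model.onnx"))
    print("model is structurally valid")      # check_model returned None
except onnx.checker.ValidationError as e:     # raised only for invalid models
    print("model failed validation:", e)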

The sample at TensorRT/quickstart/SemanticSegmentation in the NVIDIA/TensorRT GitHub repo worked for me.
