I was able to successfully convert my Darknet YOLOv3 model to TensorRT and run a prediction once, but when I ran it again it gave the error below.
I used the sample from
/usr/src/tensorrt/samples/python/yolov3_onnx/
Since my Darknet model is custom with only 1 class, it has 18 filters per YOLO layer and an input shape of 416, so I changed the output shapes accordingly:
output_shapes = [(1, 18, 13, 13), (1, 18, 26, 26), (1, 18, 52, 52)]
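For reference, here is how I derived those shapes (a small sketch of my reasoning, not code from the sample itself):

# YOLOv3 predicts 3 anchors per grid cell, each with
# (4 box coords + 1 objectness + num_classes) values.
num_classes = 1
filters = 3 * (5 + num_classes)  # 18 for a single class
input_size = 416
# The three YOLO heads downsample the 416 input by 32, 16, and 8.
output_shapes = [(1, filters, input_size // s, input_size // s)
                 for s in (32, 16, 8)]
# -> [(1, 18, 13, 13), (1, 18, 26, 26), (1, 18, 52, 52)]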
Traceback (most recent call last):
File "onnx_to_tensorrt.py", line 190, in <module>
main()
File "onnx_to_tensorrt.py", line 166, in main
trt_outputs = common.do_inference_v2(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream)
File "/home/experio/Documents/yolov3_onnx/common.py", line 191, in do_inference_v2
[cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
File "/home/experio/Documents/yolov3_onnx/common.py", line 191, in <listcomp>
[cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
pycuda._driver.LogicError: cuMemcpyHtoDAsync failed: invalid argument
The failing call is in the sample's common.py:
def do_inference_v2(context, bindings, inputs, outputs, stream):
# Transfer input data to the GPU.
[cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
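To narrow this down, I added a sanity check before that copy, comparing each host buffer against the size the engine expects for its binding. This is my own debugging sketch (check_input_buffers is a helper I wrote, using the pre-TensorRT-8.5 binding API, not part of the sample); as far as I understand, a size mismatch here is one thing that could produce the "invalid argument" error:

import numpy as np
import tensorrt as trt

def check_input_buffers(engine, inputs):
    # Pair each HostDeviceMem in `inputs` with its input binding, in order.
    input_indices = [i for i in range(engine.num_bindings)
                     if engine.binding_is_input(i)]
    for binding, inp in zip(input_indices, inputs):
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        expected = trt.volume(engine.get_binding_shape(binding)) * np.dtype(dtype).itemsize
        print("binding %d: engine expects %d bytes, host buffer holds %d bytes"
              % (binding, expected, inp.host.nbytes))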