The TensorRT engine's input type is numpy.float32, but we were passing it an array of a different dtype (np.ones defaults to float64). In our code, we changed np.ones((1, 1024, 1024, 3)) to np.ones((1, 1024, 1024, 3)).astype(np.float32), which fixed the error.
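For reference, a minimal sketch of the fix. The `match_input_dtype` helper is hypothetical, and `engine.get_binding_dtype`/`trt.nptype` follow the pre-8.5 TensorRT Python API (newer releases use `engine.get_tensor_dtype(name)` instead):

```python
import numpy as np
import tensorrt as trt

def match_input_dtype(engine, host_array, binding=0):
    """Cast a host array to the dtype the engine's input binding expects.

    Hypothetical helper; uses the pre-8.5 TensorRT Python API.
    """
    expected = trt.nptype(engine.get_binding_dtype(binding))
    if host_array.dtype != expected:
        host_array = host_array.astype(expected)
    # PyCUDA's memcpy_htod_async also requires a contiguous host buffer.
    return np.ascontiguousarray(host_array)

# The fix from this thread in isolation: np.ones defaults to float64,
# so a float32 input binding sees a mismatched buffer size and the
# host-to-device copy fails with "invalid argument". The explicit
# cast makes the copy valid.
input_data = np.ones((1, 1024, 1024, 3)).astype(np.float32)
```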