[TensorRT] ERROR: ../rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 700

Hello everyone,

I’m new to the TensorRT Python API.
Could you help me migrate a simple angle-prediction model from Keras to TensorRT via ONNX?

Environment

TensorRT Version: 7.2.3.4
GPU Type: GeForce GTX 1060 6 GB
Nvidia Driver Version: 440.33.01
CUDA Version: 10.2
CUDNN Version: 7.1
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 2.3.1

Input data:

  • A trained ONNX model that predicts the rotation angle of a parked car (INT64 weights)
  • A .png file with 3 (RGB) channels, size 64×64, showing a parked car

Minimal target: predict the car’s rotation angle

To keep things simple, I’ve used an Nvidia example and reworked some of the stages:

Workflow:

  1. Python image preprocessing:
    # Normalized (h, w) shapes plus the h/w aspect ratio as an extra feature
    shapes = np.array([i.shape[:-1] for i in x_batch]) / 300
    x_shapes = np.append(shapes, (shapes.T[0] / shapes.T[1])[:, None], axis=1)
    # Resize each crop to the model's 64x64 input resolution
    x_imgs = np.array([cv2.resize(i, (64, 64)) for i in x_batch])

    x_batch is a list of cropped images.
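A TensorRT engine built from this ONNX model will typically expect NCHW float32 input, while cv2.resize returns HWC uint8; a shape or dtype mismatch at this stage is a common cause of illegal-memory-access errors at inference. A minimal sketch of the conversion (the 64×64 RGB layout is taken from the snippet above; the normalization by 255 is an assumption about the model’s training pipeline):

```python
import numpy as np

# Simulate one cropped RGB image as produced by cv2.resize: HWC, uint8
img = np.zeros((64, 64, 3), dtype=np.uint8)

# Cast to float32 (assumed normalization) and reorder HWC -> CHW,
# then add a leading batch dimension -> NCHW
x = img.astype(np.float32) / 255.0
x = np.transpose(x, (2, 0, 1))[None, ...]  # shape (1, 3, 64, 64)

# The flattened, contiguous array is what gets copied into the host buffer
x = np.ascontiguousarray(x.ravel())
print(x.size)  # one value per element of the 1x3x64x64 tensor
```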

  2. During inference I get this error trace:
    [TensorRT] ERROR: engine.cpp (780) - Cuda Error in reportTimes: 700 (an illegal memory access was encountered)
    [TensorRT] ERROR: INTERNAL_ERROR: std::exception
    [TensorRT] ERROR: engine.cpp (1036) - Cuda Error in executeInternal: 700 (an illegal memory access was encountered)
    [TensorRT] ERROR: FAILED_EXECUTION: std::exception
    [TensorRT] ERROR: engine.cpp (169) - Cuda Error in ~ExecutionContext: 700 (an illegal memory access was encountered)
    [TensorRT] ERROR: INTERNAL_ERROR: std::exception
    [TensorRT] ERROR: Parameter check failed at: safeContext.cpp::terminateCommonContext::216, condition: cudnnDestroy(context.cudnn) failure.
    [TensorRT] ERROR: Parameter check failed at: safeContext.cpp::terminateCommonContext::221, condition: cudaEventDestroy(context.start) failure.
    [TensorRT] ERROR: Parameter check failed at: safeContext.cpp::terminateCommonContext::226, condition: cudaEventDestroy(context.stop) failure.
    [TensorRT] ERROR: ../rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 700 (an illegal memory access was encountered)
    terminate called after throwing an instance of ‘nvinfer1::CudaError’
    what(): std::exception
    Aborted (core dumped)

  3. Allocate buffers:
    h_input_1 = cuda.pagelocked_empty(batch_size * trt.volume(engine.get_binding_shape(0)), dtype=trt.nptype(data_type))
    h_output = cuda.pagelocked_empty(batch_size * trt.volume(engine.get_binding_shape(1)), dtype=trt.nptype(data_type))
    d_input_1 = cuda.mem_alloc(h_input_1.nbytes)
    d_output = cuda.mem_alloc(h_output.nbytes)
    stream = cuda.Stream()
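CUDA error 700 at execute/free time often comes down to a size mismatch between the allocated buffers and what the engine actually expects. A quick sanity check on the buffer math from step 3, with trt.volume reimplemented in numpy so it runs without a GPU (the binding shapes are assumptions matching the trtexec shapes below):

```python
import numpy as np

def volume(shape):
    # Equivalent of trt.volume: product of the binding dimensions
    return int(np.prod(shape))

batch_size = 1
input_shape = (1, 3, 64, 64)   # assumed engine.get_binding_shape(0)
output_shape = (1, 1)          # assumed: a single predicted angle

n_input = batch_size * volume(input_shape)
n_output = batch_size * volume(output_shape)

# The pagelocked host buffers must hold exactly this many elements, and the
# flattened preprocessed image copied into d_input_1 must match n_input.
print(n_input, n_output)
```

Note that with an explicit-batch engine, get_binding_shape(0) already includes the batch dimension, so multiplying by batch_size again over-allocates; the dangerous case is the opposite one, where the array copied to the device is larger than the allocation.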

  4. TRT engine built from the ONNX model with trtexec:
    trtexec --explicitBatch --onnx=apm.onnx --minShapes=input:2x2 --optShapes=input:1x3x64x64 --maxShapes=input:1x3x64x64 --shapes=input:1x3x64x64 --saveEngine=apm.plan
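One thing worth noting: the command above mixes a rank-2 --minShapes (input:2x2) with rank-4 opt/max shapes, which the optimization profile cannot satisfy. Assuming the input binding is named input and the model only ever sees 1x3x64x64 tensors, a consistent fixed-shape invocation might look like:

```shell
trtexec --explicitBatch --onnx=apm.onnx \
        --minShapes=input:1x3x64x64 \
        --optShapes=input:1x3x64x64 \
        --maxShapes=input:1x3x64x64 \
        --shapes=input:1x3x64x64 \
        --saveEngine=apm.plan
```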

Hi @v.stadnichuk,

Are you using the below workflow:

Are you able to successfully convert the Keras model to a TRT engine?
Could you please share the script, logs, and model file if you are using the above workflow but still getting an error?

Thanks

Hi @SunilJB !
Yes, I use this method to convert the Keras model to ONNX.
Yes, the Keras model is successfully converted to a TRT engine; the errors occur at the inference step.
Could you provide instructions on the correct way to share the scripts and model with you?
Thank you!

Hi @v.stadnichuk,
You can either attach the files in the forum or DM them to me. If the files are too big, you can upload them to Drive and share the link.

Thanks