An illegal memory access was encountered using PyCUDA and TensorRT

I am using TensorRT and PyCUDA in Python. In the following inference code, an "illegal memory access was encountered" error occurs at stream.synchronize():

def infer(engine, x, batch_size, context):  
    inputs = []
    outputs = []
    bindings = []
    stream = cuda.Stream()
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        # Allocate host and device buffers
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        # Append the device buffer to device bindings.
        # Append to the appropriate list.
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
    img = np.array(x).ravel()
    np.copyto(inputs[0].host, 1.0 - img / 255.0)  
    # Transfer input data to the GPU.
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    context.execute_async(batch_size=batch_size, bindings=bindings, stream_handle=stream.handle)    
    # Transfer predictions back from the GPU.
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    # Synchronize the stream
    stream.synchronize()
    # Return only the host outputs.
    return [out.host for out in outputs]
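For completeness, HostDeviceMem above is the small helper class from the TensorRT Python samples; I am assuming the standard definition here, which just pairs a pagelocked host buffer with its device allocation:

```python
class HostDeviceMem:
    """Pairs a pagelocked host array with its matching device allocation."""
    def __init__(self, host_mem, device_mem):
        self.host = host_mem      # numpy array backed by pagelocked memory
        self.device = device_mem  # pycuda DeviceAllocation

    def __str__(self):
        return "Host:\n" + str(self.host) + "\nDevice:\n" + str(self.device)
```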

What could be wrong?

My program is a combination of TensorFlow and TensorRT code. The error happens only when I run

self.graph = tf.get_default_graph()
self.persistent_sess = tf.Session(graph=self.graph, config=tf_config)

before running infer(). If I don’t run the above two lines, I have no issue.
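My guess (an assumption, not something I have confirmed): creating the tf.Session makes TensorFlow's own CUDA context current, so the PyCUDA stream and buffers in infer() may end up in a different context than the one active when execute_async runs, which would explain the illegal access. A hedged sketch of isolating the TensorRT work in an explicitly managed PyCUDA context (the trt_ctx name is mine) looks like this:

```python
import pycuda.driver as cuda

cuda.init()
# Create a dedicated context for the TensorRT/PyCUDA work instead of
# relying on whichever context TensorFlow left current.
trt_ctx = cuda.Device(0).make_context()  # context becomes current here
try:
    # ... build the engine, allocate buffers, and call infer() here,
    # so all allocations and the stream belong to trt_ctx ...
    pass
finally:
    trt_ctx.pop()  # make the context non-current when done

# Around each later inference call, re-activate the same context:
# trt_ctx.push()
# out = infer(engine, x, batch_size, context)
# trt_ctx.pop()
```

Does this sound like the right direction, or is something else going on?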