Description
Hi
I'm using a TensorRT engine to run inference on batches of images received from a Flask request. I successfully set up the engine using the TensorRTInfer class from infer.py; you can see the code at the link below.
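For reference, the engine wrapper is created once, when the app starts, roughly like this (the engine path is just a placeholder, not my real path):

```python
from infer import TensorRTInfer

# Deserialize the prebuilt engine once at startup (path is a placeholder)
TRTengine = TensorRTInfer("/path/to/model.engine")
```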
After that, when I receive the images, I use ImageBatcher to build batches with the right shape for the TensorRT engine. My code is as follows:
```python
batcher = ImageBatcher(image_list, TRTengine.input_spec())
for batch, images, scales in batcher.get_batch():
    print("Processing Image {} / {}".format(batcher.image_index, batcher.num_images), end="\r")
    detections = TRTengine.infer(batch, scales, nms_threshold=0.5)
```
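For context, this loop runs inside a Flask request handler. A simplified sketch of the route (the route name, upload handling, and response format here are placeholders, not my exact code):

```python
import tempfile
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/detect", methods=["POST"])
def object_detection():
    # Save each uploaded image to a temporary file so ImageBatcher can read it
    image_list = []
    for f in request.files.getlist("images"):
        tmp = tempfile.NamedTemporaryFile(suffix=".jpg", delete=False)
        f.save(tmp.name)
        image_list.append(tmp.name)

    batcher = ImageBatcher(image_list, TRTengine.input_spec())
    results = []
    for batch, images, scales in batcher.get_batch():
        # This call corresponds to app.py line 195 in the traceback below
        detections = TRTengine.infer(batch, scales, nms_threshold=0.5)
        results.extend(detections)
    return jsonify({"num_detections": len(results)})
```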
My TRTengine.infer function is as follows:
```python
def infer(self, batch, scales=None, nms_threshold=None):
    # Prepare host buffers for the outputs
    outputs = []
    for shape, dtype in self.output_spec():
        outputs.append(np.zeros(shape, dtype))
    # Copy the input batch to the device (this is the line that fails)
    cuda.memcpy_htod(self.inputs[0]['allocation'], np.ascontiguousarray(batch))
```
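The rest of infer() is essentially unchanged from the NVIDIA sample: it executes the engine and copies the outputs back to the host (this part is never reached, since the copy above raises first):

```python
    # Run inference, then copy each output buffer back to the host
    # (never reached in my case: memcpy_htod above raises first)
    self.context.execute_v2(self.allocations)
    for o in range(len(outputs)):
        cuda.memcpy_dtoh(outputs[o], self.outputs[o]['allocation'])
```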
But I receive the following error:
```
File "/home/user/Desktop/python/projects/app.py", line 195, in object_detection
    detections = TRTengine.infer(batch, scales, nms_threshold=0.5)
File "/home/user/Desktop/python/projects/infer.py", line 116, in infer
    cuda.memcpy_htod(self.inputs[0]['allocation'], np.ascontiguousarray(batch))
pycuda._driver.LogicError: cuMemcpyHtoD failed: invalid device context
```
What's the problem?
Environment
TensorRT Version: 8.0.3
GPU Type: RTX 2080 Ti
Nvidia Driver Version: 470.57.02
CUDA Version: 11.3
CUDNN Version: –
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9