TensorRT: invalid device context at runtime

Hi all,

I hope you can help me out here. I am using the TensorRT 3 release candidate with its new TensorFlow support.

I train the model in TensorFlow, convert it to a UFF model, and then build a TensorRT engine from it. After building the engine, I run a forward pass with one test sample. Up to this point, everything works as expected.

Note: The function I use for this forward pass is exactly the same one I use later at runtime. It is all written in Python.
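For context, the build step follows the standard UFF workflow from the TensorRT 3 Python samples. The sketch below is simplified; the paths, node names, shapes and workspace size are placeholders rather than my real values:

import tensorrt as trt
import uff
from tensorrt.parsers import uffparser

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)

# Convert the frozen TensorFlow graph to UFF (placeholder file/node names)
uff_model = uff.from_tensorflow_frozen_model("frozen_model.pb", ["output_node"])

# Parse the UFF model and build the engine (placeholder shape and workspace size)
parser = uffparser.create_uff_parser()
parser.register_input("input_node", (3, 64, 64), 0)
parser.register_output("output_node")
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 25)

# Serialize the engine so it can be loaded again at runtime
trt.utils.write_engine_to_file("model.engine", engine.serialize())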

When I then want to use the TensorRT engine in my software at runtime, I get an invalid-context error. The only difference from the training setup is that I load the TensorRT engine from a file (stored after training) instead of building it in the same process.
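At runtime I load the serialized engine roughly like in the samples (again simplified, the path is a placeholder):

import tensorrt as trt

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)

# Deserialize the engine that was written to disk after training
engine = trt.utils.load_engine(G_LOGGER, "model.engine")
context = engine.create_execution_context()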

My guess is that during the conversion and the creation of the engine, the correct CUDA context is set up in the background.

I am getting the following error:

File "…/trt_engine.py", line 47, in classify_candidate_patches_trt
batch_size * FRONT_VIEW_INPUT_SIZE * size_float)
LogicError: cuMemAlloc failed: invalid device context

This is the offending line (the first mem_alloc in that function):

d_input_front = cuda.mem_alloc(
    batch_size * FRONT_VIEW_INPUT_SIZE * size_float)
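For completeness, here is a simplified sketch of what that helper does; it follows the usual pycuda-based inference pattern from the TensorRT 3 Python samples. OUTPUT_SIZE and the concrete size values are just placeholders, not my real constants:

import numpy as np
import pycuda.driver as cuda

FRONT_VIEW_INPUT_SIZE = 3 * 64 * 64   # placeholder; the real value matches my network input
OUTPUT_SIZE = 2                       # placeholder; outputs per sample
size_float = np.dtype(np.float32).itemsize

def classify_candidate_patches_trt(context, batch, batch_size):
    # Allocate device buffers -- the first mem_alloc is the one that fails at runtime
    d_input_front = cuda.mem_alloc(batch_size * FRONT_VIEW_INPUT_SIZE * size_float)
    d_output = cuda.mem_alloc(batch_size * OUTPUT_SIZE * size_float)
    output = np.empty(batch_size * OUTPUT_SIZE, dtype=np.float32)

    # Copy the input to the device, run inference, copy the result back
    stream = cuda.Stream()
    cuda.memcpy_htod_async(d_input_front, batch, stream)
    context.enqueue(batch_size, [int(d_input_front), int(d_output)], stream.handle, None)
    cuda.memcpy_dtoh_async(output, d_output, stream)
    stream.synchronize()
    return output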

Now, when I add the following to that function (with the corresponding import), I instead get an error when copying the data back from the device to the host. Again, it works fine in the training procedure.

from pycuda.tools import make_default_context

ctx = make_default_context()

I also tried importing pycuda.autoinit in the runtime code.

Any guesses?

Thank you very much in advance