This is a common error that occurs when the TensorRT version used to build the engine differs from the TensorRT version used at inference.
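For a quick check on the C++ side, here is a minimal sketch (assuming TensorRT 8.x with its headers on the include path, where the library version is encoded as major*1000 + minor*100 + patch) that prints both the version your binary was compiled against and the version of the libnvinfer actually loaded at runtime:

```cpp
// Compare compile-time TensorRT version with the runtime library version.
#include <NvInfer.h>  // pulls in NvInferVersion.h (NV_TENSORRT_* macros)
#include <cstdio>

int main() {
    // Version baked in at compile time from the TensorRT headers.
    std::printf("Compiled against TensorRT %d.%d.%d\n",
                NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH);

    // Version of the libnvinfer shared library loaded at runtime.
    // Assumes the TensorRT 8.x encoding: major*1000 + minor*100 + patch.
    const int rt = getInferLibVersion();
    std::printf("Runtime libnvinfer is %d.%d.%d\n",
                rt / 1000, (rt % 1000) / 100, rt % 100);
    return 0;
}
```

If the two versions differ, the binary was linked against headers from one TensorRT installation but is loading a different libnvinfer at runtime.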
There should be no issue when you run tao deploy unet gen_trt_engine to generate the TensorRT engine and then run tao deploy unet evaluate to evaluate that engine, right?
I understand, but how do I get the correct TensorRT and CUDA versions?
Correct. Everything works well within the deploy container, but I need to use the engine from C++. Either I can install compatible toolchain versions, which are unknown to me, or use another method to convert the model to TensorRT, as there was in TAO 3.
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
Yes. Check the TensorRT version in the environment where you generate the engine, then make sure the TensorRT version is the same when you run inference.
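As an illustration of where the mismatch surfaces in C++, here is a hedged sketch (the path model.engine is a placeholder for an engine produced by tao deploy unet gen_trt_engine): when the runtime's TensorRT version differs from the version that serialized the engine, deserializeCudaEngine() logs an error and returns nullptr.

```cpp
// Deserialize a TensorRT engine and surface a version-mismatch failure.
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

int main() {
    // Read the serialized engine into memory ("model.engine" is a placeholder).
    std::ifstream file("model.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());

    if (!engine) {
        // Typical cause: engine serialized with a different TensorRT version.
        std::cerr << "Engine deserialization failed; check that the TensorRT "
                     "version matches the one used to build the engine.\n";
        delete runtime;
        return 1;
    }
    // ... create an execution context and run inference ...
    delete engine;
    delete runtime;
    return 0;
}
```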