I’m trying to import a .uff file into OpenCV. Unfortunately I can’t find a solution, so I wanted to ask whether it is possible to convert the .uff file to .onnx (since the latter can be imported into OpenCV).
I’m asking because I would like to use the pre-trained models (located in the “data” folder of jetson-inference) on other architectures.
.uff is an intermediate format between TensorFlow and TensorRT.
Do you want to run inference on the model with TensorRT and use OpenCV as the input interface?
If yes, you can find an example of deploying a .uff model with TensorRT below:
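A minimal sketch of that workflow, assuming TensorRT 7.x (the UFF parser was removed in later releases) and that you know your model’s input/output tensor names and input shape; the file and tensor names below are hypothetical placeholders:

```python
# Hedged sketch: build a TensorRT engine from a .uff model.
# Requires an NVIDIA GPU and TensorRT 7.x with the UFF parser.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine_from_uff(uff_path, input_name, input_shape, output_name):
    """Parse a .uff file and build a CUDA engine for inference."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()
    parser = trt.UffParser()

    # The graph's I/O tensors must be registered before parsing.
    parser.register_input(input_name, input_shape)  # e.g. (3, 224, 224), CHW
    parser.register_output(output_name)
    parser.parse(uff_path, network)

    builder.max_workspace_size = 1 << 28  # 256 MiB of scratch space
    return builder.build_cuda_engine(network)

# Hypothetical usage (tensor names depend on your model):
# engine = build_engine_from_uff("model.uff", "input_0", (3, 224, 224), "output_0")
```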
Thanks for the reply.
I was specifically looking for a method like “readNetFromONNX()”, but for the .uff format in OpenCV.
I don’t know whether such a method exists, but what interests me is being able to run a .uff model on machines without an NVIDIA graphics card (directly on the CPU). That’s why I was wondering if there is a way to convert this format.
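For what it’s worth, there is no official .uff-to-.onnx converter; the usual route is to go back to the original TensorFlow model and export ONNX from that, for example with tf2onnx, then load the result on the CPU with OpenCV’s DNN module. A hedged sketch, where the file and tensor names are placeholders for your model:

```shell
# Convert a TensorFlow frozen graph to ONNX with tf2onnx
# (frozen.pb and the tensor names below are placeholders).
python -m tf2onnx.convert \
    --graphdef frozen.pb \
    --inputs input_0:0 \
    --outputs output_0:0 \
    --output model.onnx
```

The resulting model.onnx can then be loaded on the CPU with OpenCV, e.g. `cv2.dnn.readNetFromONNX("model.onnx")`.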
.uff is an intermediate format for TensorRT.
Since TensorRT is optimized for our hardware architecture, it cannot run on other processors.