But there is something I don’t understand, and I couldn’t find a clear answer…
In this doc, it seems that I can convert my TensorFlow model to a UFF file directly. But this function is only available on an Ubuntu x86 distribution, and installing TensorFlow on such a distribution is very painful…
So, do I need to create a frozen model from my trained model (on an x64 OS), then export the file to an x86 Ubuntu machine to convert it into a UFF file and then a TensorRT engine, and finally copy that to my TX2 to execute it?
Or is it possible to simply use uff.from_tensorflow() on the TX2 directly?
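To be concrete, this is the kind of call I mean. A minimal sketch, assuming the TensorRT uff Python package is installed on whichever machine does the conversion; the file names and the output node name "logits" are placeholders, not taken from my model:

```python
import uff

# Convert a frozen TensorFlow graph (.pb) straight to a UFF file.
# "frozen_model.pb", "model.uff" and the output node "logits" are placeholders.
uff_model = uff.from_tensorflow_frozen_model(
    "frozen_model.pb",
    output_nodes=["logits"],
    output_filename="model.uff",
)
```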
Where tf_node is a “Sub” op and has only one attribute (there is no ‘dtype’ attribute):
T: {"type":"DT_FLOAT"}
BTW, its enclosing stack frame (the caller) is inside the function “convert_transpose()”.
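You can see this by dumping the attribute keys of the Sub nodes in the frozen graph. A minimal sketch; the file path is a placeholder:

```python
from tensorflow.core.framework import graph_pb2

# Load the frozen graph and list the attribute keys of every "Sub" node.
# "frozen_model.pb" is a placeholder path.
graph_def = graph_pb2.GraphDef()
with open("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == "Sub":
        # Element-wise math ops keep their element type under the "T" key,
        # so a converter that looks up attr["dtype"] will not find it.
        print(node.name, list(node.attr.keys()))
```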
b) After I removed those ops (the whole post-processing part was removed), the UFF file could be generated successfully. But it then reported an “Unsupported operation _ExpandDims” error while loading the UFF.
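One way to strip such unsupported ops before conversion is NVIDIA’s graphsurgeon package that ships with TensorRT. A hedged sketch only: the file name, the output node name, and the choice to simply drop every ExpandDims node are illustrative assumptions, not what the original graph actually needs:

```python
import graphsurgeon as gs
import uff

# Load the frozen graph into a mutable graph ("frozen_model.pb" is a placeholder).
graph = gs.DynamicGraph("frozen_model.pb")

# Find the nodes the UFF converter/parser rejects and remove them.
# (Alternatively, replace them with a plugin node via create_plugin_node +
# collapse_namespaces instead of deleting them outright.)
expand_dims_nodes = graph.find_nodes_by_op("ExpandDims")
graph.remove(expand_dims_nodes, remove_exclusive_dependencies=False)

# Convert the trimmed graph; "logits" is a placeholder output node name.
uff.from_tensorflow(graph.as_graph_def(), output_nodes=["logits"],
                    output_filename="model_trimmed.uff")
```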
It worked pretty well: with these files, I could generate UFF files from both the Python API and the C++ API (on the TX2 directly).
The only bug was that sometimes the UFF file was generated successfully but the resulting inference engine did not perform properly; regenerating the UFF file was enough to fix it.
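For completeness, a rough sketch of one way to catch a bad build early with the TensorRT Python API (TensorRT 5.x-style calls); the input/output names and the input shape are placeholders, not the real model’s:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Parse the UFF and build an engine; a None result is the cue to regenerate
# the UFF file and try again.
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    parser.register_input("input", (3, 224, 224))   # CHW shape, placeholder
    parser.register_output("logits")                 # placeholder name
    parser.parse("model.uff", network)

    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 28             # 256 MiB

    engine = builder.build_cuda_engine(network)
    if engine is None:
        raise RuntimeError("Engine build failed; regenerate the UFF file.")
```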