I understand that the Linux version of TRT comes with a UFF parser for frozen graphs from Python.
I also read that on Windows this parsing has to be done in C++ (tell me if I’m wrong), but I couldn’t find any documentation on that.
So my question is: how can I do that? I can’t use Linux, nor anything from the internet (work is done on an isolated network).
Currently, the TRT workflow on Windows only supports the C++ API. Other than that, there should be no Windows-specific dependencies. Feel free to download the TRT Windows tar package and try out the included samples.
For working with the C++ API, see https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#c_topics
NVIDIA Enterprise Support
In the doc, it says:
“You can use the following sample code to convert the .pb frozen graph to .uff format file.”
convert-to-uff input_file [-o output_file] [-O output_node]
But that is a Python sample, so how can I do this on Windows with the C++ API?
Thanks for your answer.
I want to use TRT’s C++ API, but to use it I need to import a trained TensorFlow model, which means parsing a frozen model (.pb file) to UFF. The UFF converter doesn’t exist in the TRT zip file for Windows (and the tar package is Linux-only, so I don’t understand your recommendation to download it). So what is the way to convert a .pb frozen model to UFF format, in order to load the UFF file into the C++ TRT API?
I meant .zip when I mentioned .tar file for Windows. My apologies for the confusion.
Please reference Importing A TensorFlow Model Using The C++ UFF Parser API:
The document that you refer to in your last comment says to use the parser->parse method.
After looking again at the samples that come with the TRT zip file, I think you misunderstood my question, because in sample_uff_mnist the input to the C++ UFF parser is a UFF file! I don’t have that UFF file; I have a frozen graph in a .pb file, so how am I supposed to parse it to a UFF file with the C++ API?
Here is the code from the TRT sample:
gUseDLACore = samplesCommon::parseDLA(argc, argv);
auto fileName = locateFile("lenet5.uff");
std::cout << fileName << std::endl;
int maxBatchSize = 1;
auto parser = createUffParser();

/* Register tensorflow input */
parser->registerInput("in", Dims3(1, 28, 28), UffInputOrder::kNCHW);
parser->registerOutput("out");

ICudaEngine* engine = loadModelAndCreateEngine(fileName.c_str(), maxBatchSize, parser);
The loadModelAndCreateEngine function uses parser->parse with the lenet5.uff file as input.
Another thing: after checking all the README files of the UFF samples that come with the TRT zip file, I can say that all of them tell you to convert the frozen graph from TensorFlow with a convert-to-uff.py script, but this file is not available in the Windows TRT zip file. And the release notes say:
“the Python script convert-to-uff is not packaged within the .zip. You can generate the required .uff file on a Linux machine and copy it over in order to run sample_uff_ssd.”
So if I don’t have a Linux machine, how am I supposed to use TRT with a TensorFlow model at all?
So just to be clear, to use a TensorFlow model with TRT we need to:
- save a trained TensorFlow model as a frozen graph (.pb)
- convert the .pb to .uff
- parse the .uff with the C++ API
- run inference
And currently (TRT-18.104.22.168) we cannot do step 2 on Windows. Am I correct?
I’m also working on a Windows machine, and what I did was download a Linux TRT tar file; in there you have the uff package, and you can simply use it on your Windows machine. Just to be clear: use the uff package you got from the Linux tar together with the Windows TRT you already installed.
It worked for me (well, to some extent, since my model has some layers that UFF does not yet support)…
I also tried this approach but couldn’t open the .tar file. How did you manage to do that?
And do you have any idea why this file is not included in the .zip file?
What do you mean you couldn’t open the tar file? It’s just a compressed file… I believe WinZip should be able to handle it. Just look up “opening a tar file on windows” and I believe you’ll find a solution…
As to your second question, I have no idea… I’m not an NVIDIA rep…