Convert TensorFlow frozen graph to UFF

Hi,

I have trained a TensorFlow DL network and would like to use it to run inference. To do that, I need to convert the TensorFlow checkpoint to UFF. I did the following:

  1. I trained my network on a GPU workstation.
  2. I converted the network's TensorFlow checkpoint to a frozen graph on the same workstation.
  3. I then transferred the frozen graph to another GPU workstation which has uff installed.
  4. On the second GPU workstation, I printed out the names of all nodes in the frozen graph and saw the output nodes I needed.
  5. I then used the following code to convert my frozen-graph .pb to UFF:
import uff

# Convert the frozen TensorFlow graph to a binary (non-text) UFF model.
frozen_filename = './model-native.pb'
output_node_names = ['my_net/my_node']
output_uff_filename = './model.uff'
uff_model = uff.from_tensorflow_frozen_model(
    frozen_filename,
    output_nodes=output_node_names,
    output_filename=output_uff_filename,
    text=False)

When I ran this, it complained that the node was not in the graph:

my_net/my_node was not found in the graph. Please use the -l option to list nodes in the graph.

Does anyone know why?
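For reference, step 4 (printing the node names in the frozen graph) can be done with a short script like the one below. This is a minimal sketch assuming TensorFlow 1.x and the file path from above:

import tensorflow as tf  # TensorFlow 1.x

# Load the frozen GraphDef and print every node name and its op type.
graph_def = tf.GraphDef()
with tf.gfile.GFile('./model-native.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name, node.op)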

Hi,

May I know the operation type of your output layer?

If the output layer is a non-supported or skipped op (e.g. Identity), the UFF parser may fail to find it.
Would you mind checking this for us first?

Thanks.

Hi @AastaLLL, thanks.

My network is LaneNet (GitHub - MaybeShewill-CV/lanenet-lane-detection: unofficial implementation of the LaneNet model for real-time lane detection using a deep neural network, https://maybeshewill-cv.github.io/lanenet-lane-detection/).

The model was downloaded from a Dropbox link (the file has since been deleted).

My output nodes are:
lanenet_model/vgg_backend/instance_seg/pix_embedding_conv/pix_embedding_conv
lanenet_model/vgg_backend/binary_seg/ArgMax

I am not sure of the type of the output nodes, but you can find more information, such as the graph, in this issue: What is the output_node_names · Issue #275 · MaybeShewill-CV/lanenet-lane-detection · GitHub

Hi,

The file you shared is not a frozen graph.
Would you mind sharing the frozen .pb file with us?
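One quick way to check whether a .pb is actually frozen: freezing replaces all variables with constants, so the GraphDef should contain no Variable ops. A minimal sketch, assuming TensorFlow 1.x:

import tensorflow as tf  # TensorFlow 1.x

# A frozen graph holds only constants; any Variable op means it is not frozen.
graph_def = tf.GraphDef()
with tf.gfile.GFile('./model-native.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
var_ops = [n.name for n in graph_def.node if n.op in ('Variable', 'VariableV2')]
print('frozen' if not var_ops else 'not frozen: %s' % var_ops)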

Thanks.

The file is http://artlystyles.com/tmp/model-native.pb

@AastaLLL Could you please take a look at my .pb file and see whether it can be converted to UFF?

Thank you very much

I found the node types:

“ArgMax” for lanenet_model/vgg_backend/binary_seg/ArgMax, and
“Identity” for lanenet_model/vgg_backend/instance_seg/pix_embedding_conv/pix_embedding_conv

Hi,

Sorry for the late update.

The information you shared in comment#8 is very helpful.
One of your output layers is an “Identity” op, which is skipped when generating the UFF model.

Here are two possible solutions for your reference:
1. Remove the layer directly. Since it is an Identity operation, the result won’t change.
2. Mark the layer right before the Identity as the output (see the sketch below).
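A minimal sketch of option 2, assuming TensorFlow 1.x and the node names reported earlier in this thread: find the Identity node in the frozen graph, take its input as the real output, and pass that name to the converter.

import tensorflow as tf  # TensorFlow 1.x
import uff

graph_def = tf.GraphDef()
with tf.gfile.GFile('./model-native.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# The Identity output node reported in comment#8.
identity_name = 'lanenet_model/vgg_backend/instance_seg/pix_embedding_conv/pix_embedding_conv'

# An Identity node has a single data input: the node that actually produces its value.
identity_node = next(n for n in graph_def.node if n.name == identity_name)
real_output = identity_node.input[0].split(':')[0]

# Convert again, marking the layer right before the Identity as the output.
uff.from_tensorflow_frozen_model(
    './model-native.pb',
    output_nodes=[real_output],
    output_filename='./model.uff')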

Thanks.

I have used the node in front of the Identity. However, I found there is another output node in my network, ArgMax, which is not supported by TensorRT. Is there any open-source code implementing this op in TensorRT?

Hi,

Sorry for the late update.

TensorRT added ArgMax support in v5.1.2:
Release Notes :: NVIDIA Deep Learning TensorRT Documentation
So you can upgrade the system to JetPack 4.2.1 or above to get this support from TensorRT directly.
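A quick way to confirm the TensorRT version on the target is from the Python module:

import tensorrt as trt

# ArgMax support needs TensorRT >= 5.1.2 (JetPack 4.2.1 or above on Jetson).
print(trt.__version__)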

Thanks.

I was able to successfully convert the LaneNet model to UFF. However, I do not know how to use it to run inference. I have been reading jetson-inference on GitHub, but still do not know how to write the C++ code. What I want is to process a live video feed and, for each frame, output an instance image and a binary image.

Hi @AastaLLL:

I was able to write some code to convert the LaneNet TensorFlow checkpoint with ArgMax to UFF and to load it with the TensorRT C++ API. However, when I ran the program, I got the following error:

[E] [TRT] UffParser: Parser error: lanenet_model/vgg_backend/binary_seg/ArgMax: Reductions cannot be applied to the batch dimension.
[E] Failure while parsing UFF file

Could you please let me know how to fix it?

Hi, @AutoCar:
I got the same error. Have you fixed it? How?

Hi @AutoCar:

I fixed this problem by dropping ArgMax when converting the .pb to UFF.
As the error says, “Reductions cannot be applied to the batch dimension”. It might be a misuse of “dimension = -1”, or a bug in how it is handled.

@jockeypan: thanks. Are you working on LaneNet (GitHub - MaybeShewill-CV/lanenet-lane-detection) or another network?

So now your TensorRT output is the original input of ArgMax. Do you still need ArgMax for your application? Do you plan to implement it outside of TensorFlow?

@AutoCar, no need to implement it outside of TensorFlow. Avoid using “dimension = -1”; in your case, use “dimension = 1”.
PS: TensorFlow uses NHWC instead of NCHW, so you might need to reorder your image input if you want to compare the output features.
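In other words, the ArgMax stays in the graph; only its axis argument is written out explicitly in the model code before freezing. A minimal sketch with a hypothetical score tensor (the exact index depends on your tensor layout; jockeypan reports that 1 worked for this model):

import tensorflow as tf  # TensorFlow 1.x

# Hypothetical segmentation score map; TensorFlow defaults to NHWC.
scores = tf.placeholder(tf.float32, shape=[None, 256, 512, 2], name='scores')

# Spell the reduction axis out as an explicit non-negative index instead of -1,
# so the UFF parser does not resolve the reduction onto the batch dimension.
binary_seg = tf.argmax(scores, axis=3, name='binary_seg_argmax')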

What do you mean by “drop ArgMax”?

Don’t worry about “dropping ArgMax”; there is no need for that. You just have to specify the dimension explicitly when freezing your model to a .pb file.

@jockeypan, my code for freezing the model is very simple. There is nothing related to dimensions in the three lines of code. How do you specify dimension = 1?