I want to run a trained Keras model on a Jetson TX2 board within a C++ application. Based on my research, my approach is to export the model to a .pb file and convert it to a .uff file with convert_to_uff.py. For the export to a .pb file I found this guide, which uses the function freeze_graph.py from TensorFlow: "Freezing a Keras model. How to freeze a model for serving and…" by Joseph Aylett-Bullock on Towards Data Science.
My first question: is this the right way to bring a Keras model to TensorRT? I ask because the script convert_to_uff.py only works with TensorFlow 1.15.
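Concretely, I call the converter like this (file names are from my test, and I assume -O is the right flag to mark the output node):
python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_model.pb -o frozen_model.uff -O output_node/BiasAdd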
I tried to test this workflow with a small model which has two layers:
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1]),
    tf.keras.layers.Dense(units=1, name="output_node")
])
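To get the .pb file I freeze the graph roughly as in the guide (a minimal sketch; the helper code is my adaptation using convert_variables_to_constants, so details may differ from the article):

import tensorflow as tf
from tensorflow.python.framework import graph_util, graph_io

# Grab the session that Keras uses internally (TF 1.x style)
sess = tf.keras.backend.get_session()

# Replace all variables by constants so the graph is self-contained
output_names = [out.op.name for out in model.outputs]  # gives 'output_node/BiasAdd'
frozen_graph = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)

# Write the frozen graph to disk as a binary .pb
graph_io.write_graph(frozen_graph, '.', 'frozen_model.pb', as_text=False)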
The conversion process works and I get a uff file. But I get the following warnings:
Warning: No conversion function registered for layer: QueueDequeueUpToV2 yet.
Converting random_shuffle_queue_DequeueUpTo as custom op: QueueDequeueUpToV2
Warning: No conversion function registered for layer: RandomShuffleQueueV2 yet.
Converting enqueue_input/random_shuffle_queue as custom op: RandomShuffleQueueV2
DEBUG [/usr/lib/python3.6/dist-packages/uff/bin/…/…/uff/converters/tensorflow/converter.py:96] Marking ['output_node/BiasAdd'] as outputs
No. nodes: 13
UFF Output written to frozen_model.uff
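The two warned-about ops, RandomShuffleQueueV2 and QueueDequeueUpToV2, look to me like leftovers of a training input queue rather than part of the actual model. One idea I had is to prune the frozen graph down to the inference subgraph before conversion, roughly like this (a sketch; I am not sure extract_sub_graph actually drops the queue nodes if the input is wired through them):

import tensorflow as tf
from tensorflow.python.framework import graph_util

# Load the frozen graph from disk
graph_def = tf.compat.v1.GraphDef()
with open('frozen_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Keep only the nodes that the output node actually depends on
pruned = graph_util.extract_sub_graph(graph_def, ['output_node/BiasAdd'])

with open('frozen_model_pruned.pb', 'wb') as f:
    f.write(pruned.SerializeToString())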
If I try to parse the uff file in C++ with:
parser->registerInput("dense/Cast", nvinfer1::Dims2(1, 1), nvuffparser::UffInputOrder::kNCHW);
parser->registerOutput("output_node/BiasAdd");
parser->parse("frozen_model.uff", *network, nvinfer1::DataType::kFLOAT);
then I get the error message:
[TRT] UffParser: Validator error: random_shuffle_queue_DequeueUpTo: Unsupported operation _QueueDequeueUpToV2
and therefore I can’t create a network:
[05/14/2020-16:35:37] [E] [TRT] Network must have at least one output
[05/14/2020-16:35:37] [E] [TRT] Network validation failed.
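For completeness, the parser calls above sit inside a minimal setup roughly like this (a sketch of my code with error handling stripped; the Logger class is my own and the builder calls follow the older TensorRT samples, if I read them correctly):

#include <iostream>
#include <NvInfer.h>
#include <NvUffParser.h>

// Minimal logger that TensorRT requires
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    auto builder = nvinfer1::createInferBuilder(gLogger);
    auto network = builder->createNetwork();
    auto parser = nvuffparser::createUffParser();

    // Register the graph input/output by node name, then parse the UFF file
    parser->registerInput("dense/Cast", nvinfer1::Dims2(1, 1), nvuffparser::UffInputOrder::kNCHW);
    parser->registerOutput("output_node/BiasAdd");

    if (!parser->parse("frozen_model.uff", *network, nvinfer1::DataType::kFLOAT))
    {
        std::cout << "UFF parsing failed" << std::endl;
        return 1;
    }

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 20);
    auto engine = builder->buildCudaEngine(*network);
    // ... serialize the engine / run inference ...
    return 0;
}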
Can anyone tell me what I have to do to make this work? I would be very grateful for any help.