Unable to deploy TensorFlow image classification model

Hello guys,
I retrained an image classification model using TensorFlow for Poets (https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/).
The model I used is https://github.com/aswinkumar2019/CNN-model-for-rupee-notes-except-2000rs-/retrained_graph.pb
I converted it into UFF format. The converted model is https://github.com/aswinkumar2019/CNN-model-for-rupee-notes-except-2000rs-/model.uff
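For reference, I did the conversion with the UFF converter that ships with TensorRT, roughly like this (final_result is the output node name I believe the TensorFlow for Poets retraining script writes out; please correct me if that is wrong):

import uff

# Convert the frozen TensorFlow graph to UFF.
# 'final_result' is assumed to be the softmax output node
# created by the TensorFlow for Poets retraining script.
uff.from_tensorflow_frozen_model(
    'retrained_graph.pb',
    output_nodes=['final_result'],
    output_filename='model.uff',
)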

When I try to run inference using sample.py, I get the error shown in this screenshot (https://github.com/aswinkumar2019/CNN-model-for-rupee-notes-except-2000rs-/send.png):

[TensorRT] ERROR: UffParser: Unsupported number of graph 0
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
  File "sample.py", line 112, in <module>
    main()
  File "sample.py", line 98, in main
    with build_engine(model_file) as engine:
AttributeError: __enter__

Is a model retrained with TensorFlow for Poets not supported in TensorRT?
Please help me.
Regards,
Aswin Kumar

Hi,

Thanks for your post.

Somehow we cannot open the links you shared above.
Would you mind checking them for us?

Thanks.

https://github.com/aswinkumar2019/CNN-model-for-rupee-notes-except-2000rs-/blob/master/retrained_graph.pb

https://github.com/aswinkumar2019/CNN-model-for-rupee-notes-except-2000rs-/blob/master/send.png

Sorry guys, these are the updated links:

https://github.com/aswinkumar2019/CNN-model-for-rupee-notes-except-2000rs-/blob/master/model.uff

This is the converted UFF file.

Hello guys,
I thought that the sample.py file found in the /usr/src/tensorrt/samples/python/end_to_end_tensorflow_mnist folder could be used (with the command python sample.py -d ./model.uff) to run my own TensorFlow image classification model (model.uff), converted from the retrained_graph.pb file produced by TensorFlow for Poets (https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/).
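From reading sample.py, it seems to hard-code the MNIST model's tensor names, so I guessed I would have to edit its ModelData class to match my own graph, something like this (the input/output names below are only my guesses for a MobileNet-based poets graph and may not match my actual retrained_graph.pb):

# My guess at how sample.py's ModelData would need to change
# for the poets graph; the tensor names below are assumptions.
class ModelData(object):
    MODEL_FILE = "model.uff"
    INPUT_NAME = "input"           # assumed poets input placeholder
    INPUT_SHAPE = (3, 224, 224)    # CHW: 3-channel 224x224 image
    OUTPUT_NAME = "final_result"   # assumed poets softmax output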

Am I right, or am I doing something wrong?
Or should I use NVIDIA DIGITS to train a model to be deployed on Jetson devices?
If I am wrong, please give me instructions to follow to train the model.
Please help me.
Regards,
Aswin Kumar

Guys, any update regarding this issue?
I am waiting for your response.

Hi,

Sorry for the late update.
This issue is caused by an unsupported layer.

We tried to run your UFF model with this sample:

import os
import ctypes
import tensorrt as trt

uff_filename = 'model.uff'
trt_filename = 'model.trt'

input_name  = 'Input'
output_name = 'MarkOutput_0'
input_dim   = [3, 224, 224]


# Initialize TensorRT: logger, plugin registry, and runtime.
TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')
runtime = trt.Runtime(TRT_LOGGER)


if not os.path.isfile(trt_filename):
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
        builder.max_workspace_size = 1 << 28
        builder.max_batch_size = 1
        builder.fp16_mode = True

        # Register the graph's input/output tensors, then parse the UFF model.
        parser.register_input(input_name, input_dim)
        parser.register_output(output_name)
        parser.parse(uff_filename, network)

        # Build the engine and serialize it to disk for later reuse.
        engine = builder.build_cuda_engine(network)
        buf = engine.serialize()
        with open(trt_filename, 'wb') as f:
            f.write(buf)
It fails with the following error:

[TensorRT] ERROR: UffParser: Validator error: input_1/BottleneckInputPlaceholder: Unsupported operation _PlaceholderWithDefault
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
  File "inference.py", line 30, in <module>
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'

Is it possible to replace the PlaceholderWithDefault op with one of the supported operations listed here?
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-support-matrix/index.html#supported-ops
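For example, one possible approach (just a sketch, not verified on your model) is to use the graphsurgeon tool bundled with TensorRT to drop the PlaceholderWithDefault node before the UFF conversion; the node name comes from the error above, and final_result is assumed to be your output node:

import uff
import graphsurgeon as gs

# Load the frozen TensorFlow graph.
dynamic_graph = gs.DynamicGraph('retrained_graph.pb')

# Remove every PlaceholderWithDefault node (the error above names
# input_1/BottleneckInputPlaceholder) by forwarding its inputs
# directly to its consumers.
nodes = dynamic_graph.find_nodes_by_op('PlaceholderWithDefault')
dynamic_graph.forward_inputs(nodes)

# Re-run the UFF conversion on the modified graph.
uff.from_tensorflow(
    dynamic_graph.as_graph_def(),
    output_nodes=['final_result'],
    output_filename='model_fixed.uff',
)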

Thanks.