Tensorflow model to UFF Error - dtype <class 'numpy.uint8'> unknown

Upon running

uff_model = uff.from_tensorflow_frozen_model("frozen_model.pb", ["output"])

I get the following error:

Warning: keep_dims is not supported, ignoring...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 103, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, **kwargs)
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 77, in from_tensorflow
    uff_metagraph_proto = uff_metagraph.to_uff()
  File "/usr/lib/python3.5/dist-packages/uff/model/meta_graph.py", line 39, in to_uff
    graphs=[graph.to_uff(debug) for graph in self.graphs],
  File "/usr/lib/python3.5/dist-packages/uff/model/meta_graph.py", line 39, in <listcomp>
    graphs=[graph.to_uff(debug) for graph in self.graphs],
  File "/usr/lib/python3.5/dist-packages/uff/model/graph.py", line 26, in to_uff
    graph = uff_pb.Graph(id=self.name, nodes=self._check_graph_and_get_nodes())
  File "/usr/lib/python3.5/dist-packages/uff/model/graph.py", line 46, in _check_graph_and_get_nodes
    raise extend_with_original_traceback(e, node._trace)
  File "/usr/lib/python3.5/dist-packages/uff/model/graph.py", line 44, in _check_graph_and_get_nodes
    nodes.append(self._check_and_get_node(node))
  File "/usr/lib/python3.5/dist-packages/uff/model/graph.py", line 33, in _check_and_get_node
    node = node.to_uff()
  File "/usr/lib/python3.5/dist-packages/uff/model/node.py", line 41, in to_uff
    fields=self._convert_fields(self.fields, debug),
  File "/usr/lib/python3.5/dist-packages/uff/model/node.py", line 30, in _convert_fields
    ret_fields[k] = create_data(v)
  File "/usr/lib/python3.5/dist-packages/uff/model/data.py", line 104, in create_data
    return uff_pb.Data(dtype=_create_dtype(elt))
  File "/usr/lib/python3.5/dist-packages/uff/model/data.py", line 59, in _create_dtype
    raise UffException("dtype {} unknown".format(dtype))
uff.model.exceptions.UffException: dtype <class 'numpy.uint8'> unknown

Is there any fix?

Hi,

Looks like your model contains some layers that are not supported by TensorRT.

Please check our documentation for the supported layers in detail:
UFF parser: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
TensorRT engine: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
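
If it helps, here is a minimal sketch (assuming TensorFlow 1.x and the frozen_model.pb from your first post; this is not an official tool) that lists the distinct op types in the frozen graph so you can compare them against those tables:

import tensorflow as tf

# Load the frozen GraphDef and print every distinct op type it contains,
# so each one can be checked against the supported-layer tables above.
graph_def = tf.GraphDef()
with open("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for op_type in sorted({node.op for node in graph_def.node}):
    print(op_type)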

Thanks.

I have managed to change the layers that weren’t supported by TensorRT. Now I am facing the following error:

Using output node generate_output/deprocess/add
Converting to UFF graph
No. nodes: 361
UFF Output written to data/tmp.uff
UFFParser: parsing generate_output/generator/encoder_9/lrelu/mul/x
UFFParser: parsing generate_output/generator/encoder_8/lrelu/mul/x
UFFParser: parsing generate_output/generator/encoder_7/lrelu/mul/x
UFFParser: parsing generate_output/generator/encoder_6/lrelu/mul/x
UFFParser: parsing generate_output/generator/encoder_5/lrelu/mul/x
UFFParser: parsing generate_output/generator/encoder_4/lrelu/mul/x
UFFParser: parsing generate_output/generator/encoder_3/lrelu/mul/x
UFFParser: parsing generate_output/generator/encoder_2/lrelu/mul/x
UFFParser: parsing image_tensor
UFFParser: parsing generate_output/load_images/preprocess/mul/y
UFFParser: parsing generate_output/load_images/preprocess/mul
UFFParser: parsing generate_output/load_images/preprocess/sub/y
UFFParser: parsing generate_output/load_images/preprocess/sub
UFFParser: parsing generate_output/input_images/Reshape/shape
UFFParser: parsing generate_output/input_images/Reshape
UFFParser: parsing generator/encoder_1/conv/filter
UFFParser: parsing generate_output/generator/encoder_1/conv/Conv2D
UFFParser: parsing generate_output/generator/encoder_2/lrelu/mul
UFFParser: parsing generate_output/generator/encoder_2/lrelu/mul_1/x
UFFParser: parsing generate_output/generator/encoder_2/lrelu/Abs
UFFParser: Parser error: generate_output/generator/encoder_2/lrelu/Abs: Unary not supported for other non-constant node
Failed to parse UFF

From what I understand, the abs function is also not supported. I am attempting to replace abs with other supported TensorFlow functions.
numpy:

import numpy as np

mask = (x < 0).astype(np.float32)  # 1.0 where x is negative, 0.0 elsewhere
mask = 1 - 2*mask                  # makes all 1s -1, and all 0s 1
abs_vals = mask*x

TensorFlow (z is a tensor of zeros with the same shape as x):

mask = tf.less(x, z)
#convert mask from boolean to integer or float

I am unsure how to proceed: I need to convert the boolean mask that TensorFlow returns into an integer or float mask so I can multiply it with the input, but the Cast operation is also not supported by TensorRT. Any help with this will be very much appreciated.
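
One Cast-free alternative (a sketch only, not yet verified against the UFF parser) would be to skip the boolean mask entirely and build abs from Maximum and a constant multiply, which should also remove the Abs node inside the leaky ReLU:

# Sketch: |x| = max(x, -1.0 * x). This only generates a Mul (with a constant)
# and a Maximum node -- no Abs, Cast or Neg.
abs_vals = tf.maximum(x, -1.0 * x)

# The leaky ReLU that produced the Abs node could be written the same way;
# alpha is the leak slope (e.g. 0.2), an assumed name, not taken from the model.
lrelu = tf.maximum(alpha * x, x)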

The operations that are supported by TensorRT aren't supported by the UFF parser. How is anything going to work? I am currently facing an error because of subtracting two tensors. How can I replace that with anything?

Unsupported operation _Neg
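
The only rewrite I can think of (untested; a and b are just placeholder names for the two tensors being subtracted) is to turn the subtraction into an addition plus a constant multiply, so that no Sub or Neg node is emitted:

# Sketch: a - b expressed as a + (-1.0) * b, which only generates a Mul
# (with a constant) and an Add node instead of Sub/Neg.
diff = a + (-1.0) * b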

Hi,

We released TensorRT 4 for the host a few weeks ago.
It's recommended to give it a try first.
https://developer.nvidia.com/nvidia-tensorrt-download

You can find the detailed list of supported layers here:
Developer Guide :: NVIDIA Deep Learning TensorRT Documentation

But please remember this package is not available for TX2 yet.
Thanks.