NameError: name 'UffException' is not defined

Mir Akhtar Ali:
Using TensorFlow backend.
2018-09-05 18:27:17.202041: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: FMA
2018-09-05 18:27:17.336380: W tensorflow/stream_executor/cuda/cuda_driver.cc:513] A non-primary context 0x60fa250 for device 0 exists before initializing the StreamExecutor. The primary context is now 0x60cc960. We haven't verified StreamExecutor works with that.
2018-09-05 18:27:17.337269: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
totalMemory: 7.93GiB freeMemory: 7.70GiB
2018-09-05 18:27:17.337304: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
2018-09-05 18:27:17.991676: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-05 18:27:17.991732: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0
2018-09-05 18:27:17.991747: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0: N
2018-09-05 18:27:17.991999: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7408 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
model_data/yolo.h5 model, anchors, and classes loaded.
Using output node dense_2/Softmax
Converting to UFF graph
Traceback (most recent call last):
  File "demo.py", line 193, in <module>
    main(YOLO())
  File "demo.py", line 43, in main
    uff_model = uff.from_tensorflow_frozen_model("mars-small128.pb", ["dense_2/Softmax"])
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 120, in from_tensorflow
    name="main")
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 76, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 53, in convert_tf2uff_node
    raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
NameError: name 'UffException' is not defined

Hi p146103,

How can we help?

Can you provide details on the platform you are using?

Linux distro and version
GPU type
NVIDIA driver version
CUDA version
cuDNN version
Python version (if using Python)
TensorFlow version
TensorRT version

Any usage/source file you can provide will help us debug too.

Linux: 16.04.3
GPU: GTX 1080
CUDA: 9.0
Python: 3.5
TensorRT: 3.0.4

Hello p146103,

You are using an old TensorRT 3.x release. Please update to the latest TensorRT 4 (Installation Guide :: NVIDIA Deep Learning TensorRT Documentation).

Hi. I am having the same problem trying to convert ssd_mobilenet_v2_coco_2018_03_29, specifying the output nodes as ['num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks'], on the following:
Linux: 16.04.3
GPU: GTX 1080 Ti
CUDA: 9.0
cuDNN: 7.1.4.18-1+cuda9.0
Python: 3.5
TensorRT: 4.0.1.6
UFF: 0.3.0
TensorFlow: 1.8.0

I am also running everything inside an NVIDIA TensorRT Docker container using nvidia-docker, if that helps.

I get no errors when I run this without specifying the nodes, but then the parser picks up nodes I do not want. I can clearly see the nodes I listed when I simply import the graph using TensorFlow’s native methods.
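For reference, a rough sketch of the call I am making (the frozen-graph filename is a placeholder):

    import uff

    # Same call as in the traceback earlier in the thread, but with the
    # detection outputs spelled out explicitly.
    uff_model = uff.from_tensorflow_frozen_model(
        "frozen_inference_graph.pb",
        ["num_detections", "detection_boxes", "detection_scores",
         "detection_classes", "detection_masks"])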

Any suggestions?

Hello,

I’m using the latest TRT container and am able to convert successfully.

nvidia-docker run --name reproduce2365713 -v `pwd`:`pwd` -it --rm  nvcr.io/nvidia/tensorrt:18.08-py3

  pip3 install --upgrade pip
  pip3 install --upgrade tensorflow-gpu
  pip3 install --upgrade tensorflowjs
root@e011c4ae2f5f:/home/scratch.zhenyih_sw/reproduce.2365713# saved_model_cli show --dir ssd_mobilenet_v2_coco_2018_03_29/saved_model --tag_set serve --signature_def serving_default

The given SavedModel SignatureDef contains the following input(s):
  inputs['inputs'] tensor_info:
      dtype: DT_UINT8
      shape: (-1, -1, -1, 3)
      name: image_tensor:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['detection_boxes'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 100, 4)
      name: detection_boxes:0
  outputs['detection_classes'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 100)
      name: detection_classes:0
  outputs['detection_scores'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 100)
      name: detection_scores:0
  outputs['num_detections'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1)
      name: num_detections:0
Method name is: tensorflow/serving/predict
root@e011c4ae2f5f:/home/scratch.zhenyih_sw/reproduce.2365713#
root@e011c4ae2f5f:/home/scratch.zhenyih_sw/reproduce.2365713# tensorflowjs_converter \
>     --input_format=tf_saved_model \
>     --output_node_names='detection_boxes,detection_scores,num_detections,detection_classes' \
>     --saved_model_tags=serve \
>     ./ssd_mobilenet_v2_coco_2018_03_29/saved_model \
>     ./ssd_mobilenet_v2_coco_2018_03_29/web_model
Using TensorFlow backend.
2018-09-10 17:22:00.315417: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-09-10 17:22:19.838426: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:366] Optimization results for grappler item: graph_to_optimize
2018-09-10 17:22:19.838648: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   debug_stripper: Graph size after: 7975 nodes (0), 12256 edges (0), time = 8.848ms.
2018-09-10 17:22:19.838670: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   model_pruner: Graph size after: 7540 nodes (-435), 11821 edges (-435), time = 55.349ms.
2018-09-10 17:22:19.838703: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   constant folding: Graph size after: 5943 nodes (-1597), 9990 edges (-1831), time = 586.695ms.
2018-09-10 17:22:19.838875: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   arithmetic_optimizer: Graph size after: 4367 nodes (-1576), 8435 edges (-1555), time = 452.901ms.
2018-09-10 17:22:19.838940: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   dependency_optimizer: Graph size after: 4325 nodes (-42), 8358 edges (-77), time = 101.616ms.
2018-09-10 17:22:19.839280: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   model_pruner: Graph size after: 4325 nodes (0), 8358 edges (0), time = 23.915ms.
2018-09-10 17:22:19.839595: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   constant folding: Graph size after: 4325 nodes (0), 8358 edges (0), time = 322.115ms.
2018-09-10 17:22:19.839724: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   arithmetic_optimizer: Graph size after: 4323 nodes (-2), 8354 edges (-4), time = 342.621ms.
2018-09-10 17:22:19.839760: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   dependency_optimizer: Graph size after: 4323 nodes (0), 8354 edges (0), time = 95.786ms.
2018-09-10 17:22:19.839905: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   debug_stripper: Graph size after: 4323 nodes (0), 8354 edges (0), time = 10.433ms.
2018-09-10 17:22:19.839992: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   model_pruner: Graph size after: 4323 nodes (0), 8354 edges (0), time = 21.198ms.
2018-09-10 17:22:19.840085: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   constant folding: Graph size after: 4323 nodes (0), 8354 edges (0), time = 162.039ms.
2018-09-10 17:22:19.840239: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   arithmetic_optimizer: Graph size after: 4323 nodes (0), 8354 edges (0), time = 241.887ms.
2018-09-10 17:22:19.840318: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   dependency_optimizer: Graph size after: 4323 nodes (0), 8354 edges (0), time = 100.644ms.
2018-09-10 17:22:19.840328: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   model_pruner: Graph size after: 4323 nodes (0), 8354 edges (0), time = 25.042ms.
2018-09-10 17:22:19.840335: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   constant folding: Graph size after: 4323 nodes (0), 8354 edges (0), time = 163.567ms.
2018-09-10 17:22:19.840440: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   arithmetic_optimizer: Graph size after: 4323 nodes (0), 8354 edges (0), time = 234.033ms.
2018-09-10 17:22:19.840488: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:368]   dependency_optimizer: Graph size after: 4323 nodes (0), 8354 edges (0), time = 95.59ms.
Writing weight file ./ssd_mobilenet_v2_coco_2018_03_29/web_model/tensorflowjs_model.pb...

Add "from uff.model.exceptions import UffException" to the beginning of

/usr/lib/python{Version}/dist-packages/uff/converters/tensorflow/converter.py
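A minimal sketch of what the patched file should start with (only the import line is the fix; everything else in the module stays unchanged):

    # top of /usr/lib/python{Version}/dist-packages/uff/converters/tensorflow/converter.py
    from uff.model.exceptions import UffException

    # ...rest of converter.py unchanged; the raise on line 53 can now
    # construct the intended UffException instead of hitting a NameError.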

Hi NVES. There might have been a misunderstanding. If I understood correctly, you converted the model for use with TensorFlow.js; I wanted to convert it for use with TensorRT. I was trying to use the UFF parser.

Thanks.

I will add the import to the file, but it’s a bit odd that we need to patch production code from a company like NVIDIA. At least now the exception will be more informative.

I’m also getting this exception.

Traceback (most recent call last):
  File "convert_to_tensorrt.py", line 54, in <module>
    convert_from_frozen_graph(FLAGS.input_file)
  File "convert_to_tensorrt.py", line 41, in convert_from_frozen_graph
    uff_model = uff.from_tensorflow_frozen_model(modelpath, ["output"])
  File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 120, in from_tensorflow
    name="main")
  File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 76, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 53, in convert_tf2uff_node
    raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
NameError: global name 'UffException' is not defined

My system:
Driver Version: 384.145
GPU: GeForce GTX 1080

zac@zac-prodesk:~/lib/tensorflow-yolo-v3$ lsb_release -a
LSB Version:	core-9.20160110ubuntu0.2-amd64:core-9.20160110ubuntu0.2-noarch:printing-9.20160110ubuntu0.2-amd64:printing-9.20160110ubuntu0.2-noarch:security-9.20160110ubuntu0.2-amd64:security-9.20160110ubuntu0.2-noarch
Distributor ID:	Ubuntu
Description:	Ubuntu 16.04.5 LTS
Release:	16.04
Codename:	xenial
zac@zac-prodesk:~/lib/tensorflow-yolo-v3$ dpkg --list | grep cuda-9-0
ii  cuda-9-0                                                   9.0.176-1                                             amd64        CUDA 9.0 meta-package
zac@zac-prodesk:~/lib/tensorflow-yolo-v3$ dpkg -l | grep TensorRT
ii  graphsurgeon-tf                                            4.1.2-1+cuda9.0                                       amd64        GraphSurgeon for TensorRT package
ii  libnvinfer-dev                                             4.1.2-1+cuda9.0                                       amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                         4.1.2-1+cuda9.0                                       amd64        TensorRT samples and documentation
ii  libnvinfer4                                                4.1.2-1+cuda9.0                                       amd64        TensorRT runtime libraries
ii  python-libnvinfer                                          4.1.2-1+cuda9.0                                       amd64        Python bindings for TensorRT
ii  python-libnvinfer-dev                                      4.1.2-1+cuda9.0                                       amd64        Python development package for TensorRT
ii  python-libnvinfer-doc                                      4.1.2-1+cuda9.0                                       amd64        Documention and samples of python bindings for TensorRT
ii  tensorrt                                                   4.0.1.6-1+cuda9.0                                     amd64        Meta package of TensorRT
ii  uff-converter-tf                                           4.1.2-1+cuda9.0                                       amd64        UFF converter for TensorRT package
zac@zac-prodesk:~/lib/tensorflow-yolo-v3$ dpkg -l | grep cudnn
ii  libcudnn7                                                  7.1.3.16-1+cuda9.0                                    amd64        cuDNN runtime libraries
ii  libcudnn7-dev                                              7.1.3.16-1+cuda9.0                                    amd64        cuDNN development libraries and headers
zac@zac-prodesk:~/lib/tensorflow-yolo-v3$ pip freeze | grep uff
You are using pip version 9.0.1, however version 18.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
uff==0.4.0
zac@zac-prodesk:~/lib/tensorflow-yolo-v3$ pip freeze | grep tensorflow
You are using pip version 9.0.1, however version 18.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
tensorflow-gpu==1.8.0
tensorflow-tensorboard==0.4.0rc3
zac@zac-prodesk:~/lib/tensorflow-yolo-v3$ python --version
Python 2.7.12

Something that’s interesting is that the version value in uff/__init__.py says 0.3.0, but the pip package version says 0.4.0. Not sure if this is just a bug in the package or if there’s something wrong with my installation. I did re-install the apt package just to be sure.

zac@zac-prodesk:~$ dpkg-query -L uff-converter-tf | grep uff/__init__.py
/usr/lib/python3.5/dist-packages/uff/__init__.py
/usr/lib/python2.7/dist-packages/uff/__init__.py
zac@zac-prodesk:~$ tail -n 1 /usr/lib/python2.7/dist-packages/uff/__init__.py
__version__ = '0.3.0'

@zacwite There is a bug in the current uff parser Python package. Go to the source of the file and add the missing import (from uff.model.exceptions import UffException) described above.

But your main problem lies deeper. Most likely it is the fact that you are trying to convert YOLO, which contains some operations that TensorRT does not support yet. TensorRT also seems to ignore identity operations (I presume, not confirmed), so you might try visualizing the model in TensorBoard and, instead of the node you are currently specifying as the output, using the node right before it. There are probably people who have already converted it; you can look for other YOLO-related topics on the forum.
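If it helps, a minimal sketch (TF 1.x API; the path is a placeholder) for listing every node in a frozen graph so you can pick the one right before your current output:

    import tensorflow as tf

    # Load the frozen GraphDef and print each node's name and op type.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_model.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    for node in graph_def.node:
        print(node.name, node.op)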

I’ve already converted the YOLO model to TensorFlow and frozen the model with weights as a .pb file. I assume TensorRT supports all TensorFlow operations, correct?

I tried adding this line to the beginning of converter.py, which removed the “UffException is not defined” error, but it still isn’t able to find the “output” operation in the graph. The code I show above demonstrates that it does, in fact, exist.

@zacwite how can I find the converter.py file? What is its location? Is it inside the TensorRT folder?
Thanks

Unfortunately, it does not. Yet. They plan to implement all operations, but for now only a subset is supported. You can see all the currently supported ones here: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#build_model

In order to convert the model to UFF you will probably also need to use graphsurgeon (look it up), as in the sketch below. But that will only get you the UFF; later on, if you want to build an engine from that UFF model, you will have to write or find some custom layers (there is an example out there), and this requires a deep understanding of how this stuff works plus some good knowledge of C++ and CUDA.
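Roughly, the graphsurgeon route looks like this (the scope and plugin names are placeholders, not a working YOLO recipe):

    import graphsurgeon as gs
    import uff

    # Collapse an unsupported subgraph into a single plugin node,
    # then convert the modified graph to UFF.
    dynamic_graph = gs.DynamicGraph("frozen_model.pb")
    plugin = gs.create_plugin_node(name="CustomOutput", op="MyPlugin")
    dynamic_graph.collapse_namespaces({"some/unsupported/scope": plugin})
    uff.from_tensorflow(dynamic_graph.as_graph_def(), ["CustomOutput"],
                        output_filename="model.uff")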

What you should do: search for people that have already converted it and ask them how they did it.

Hello,

Not all TF operations are supported by TRT; please refer to the Developer Guide :: NVIDIA Deep Learning TensorRT Documentation.

@p146103 It’s right here:

/usr/lib/python{Version}/dist-packages/uff/converters/tensorflow/converter.py

It’s in the message above.

/usr/lib/python{Version}/dist-packages/uff/converters/tensorflow/converter.py

The error I’m getting after fixing converter.py is “uff.model.exceptions.UffException: output was not found in the graph. Please use the -l option to list nodes in the graph.”

Is this the expected error when one of the TensorFlow operations is not supported?