Deepstream Object Detector SSD - cannot convert to UFF file in DS5.0GA

Following the instructions in the README (/opt/nvidia/deepstream/deepstream/sources/objectDetector_SSD/README), when I issue the command:

$ python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py \
      frozen_inference_graph.pb -O NMS \
      -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
      -o sample_ssd_relu6.uff

It fails with this error:

2020-08-07 14:49:01.364038: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Loading frozen_inference_graph.pb
Traceback (most recent call last):
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 143, in <module>
    main()
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 139, in main
    debug_mode=args.debug
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 274, in from_tensorflow_frozen_model
    with tf.gfile.GFile(frozen_file, "rb") as frozen_pb:
AttributeError: module 'tensorflow' has no attribute 'gfile'

I have installed the latest tensorflow (2.2.0) as per the README.

I can fix this by editing line #274 of

/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py

Change:

with tf.gfile.GFile(frozen_file, "rb") as frozen_pb:

to

with tf.io.gfile.GFile(frozen_file, "rb") as frozen_pb:
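For anyone applying the same fix, a one-line sed sketch (demonstrated here on a throwaway copy so nothing system-wide is touched; on the Jetson you would point it at the conversion_helpers.py path above, with sudo):

```shell
# Demo on a scratch file; substitute the real conversion_helpers.py path
# (and add sudo) when applying this on the Jetson itself.
printf 'with tf.gfile.GFile(frozen_file, "rb") as frozen_pb:\n' > /tmp/gfile_fix_demo.py
sed -i 's/tf\.gfile\.GFile/tf.io.gfile.GFile/' /tmp/gfile_fix_demo.py
cat /tmp/gfile_fix_demo.py
```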

But there are further errors…

Warning: No conversion function registered for layer: NMS_TRT yet.
AttributeError: module 'tensorflow' has no attribute 'AttrValue'
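Both errors trace to the same root cause: TF 2.x removed the TF1 top-level symbols the UFF converter imports (tf.gfile, tf.AttrValue, and friends). Rather than chasing each one, a possible workaround (an assumption on my part, not something I've verified against the converter) is to change the converter's `import tensorflow as tf` to the TF1 compatibility namespace that TF2 still ships. This probe only reports whether that namespace carries the missing attributes; it modifies nothing:

```python
# Probe whether TF2's backwards-compat namespace exposes the symbols the
# UFF converter needs (tf.gfile, tf.AttrValue). If it does, swapping the
# converter's import for "import tensorflow.compat.v1 as tf" may avoid
# patching each call site individually.
try:
    import tensorflow.compat.v1 as tf_v1
    has_v1_api = hasattr(tf_v1, "gfile") and hasattr(tf_v1, "AttrValue")
except ImportError:
    has_v1_api = False  # TensorFlow is not installed in this environment

print("TF1-compat API available:", has_v1_api)
```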

This used to work just fine in the 5.0 developer preview, so what has happened? Have the instructions not been updated for GA?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 5.0GA

Hi @jasonpgf2a,
Which TensorFlow did you install?

In https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html#install

Note: As of the 20.02 TensorFlow release, the package name has changed from tensorflow-gpu to tensorflow. See the section on Upgrading TensorFlow for more information.

Install TensorFlow using the pip3 command. This command will install the latest version of TensorFlow compatible with JetPack 4.4.

$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow

Note: TensorFlow version 2 was recently released and is not fully backward compatible with TensorFlow 1.x. If you would prefer to use a TensorFlow 1.x package, it can be installed by specifying the TensorFlow version to be less than 2, as in the following command:

$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'

If you installed the first one, could you try the 2nd one (tensorflow<2) ?
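Either way, it's worth confirming which build pip actually left active before re-running the converter; a quick probe (run with python3):

```python
# Print the active TensorFlow build; the NVIDIA Jetson wheels report
# versions like "2.2.0+nv20.6" or "1.15.3+nv20.7".
try:
    import tensorflow as tf
    tf_version = tf.__version__
except ImportError:
    tf_version = None  # TensorFlow is not installed in this environment

print("TensorFlow version:", tf_version)
```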

I installed TF 2.2.0 which is the current version and this is as per the instructions in the README.
Also - this exact same process worked just fine for me on Deepstream 5.0 developer preview on both a nano and xavier nx.

Why would TF2 work in the dev preview and not in the GA version of deepstream? Sounds like a bug to me…

Are you able to follow along with the instructions in the README to duplicate the issue?

Any ideas here? This worked with the same version of TF in DS 5.0 DP.

I’m running into the same issue. The README references python2.7, which I substituted with python3.6 as well. I’m running tensorflow 2.2.0+nv20.6.

This must have been tested with tf < 2 as the namespaces changed.

@mchi @jasonpgf2a uninstalling TF 2.x and downgrading to 1.15.3 converted the UFF successfully with the following command. Not suggesting it's the best answer, but it worked.

sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow==1.15.3+nv20.7

I’ve had a lot of issues moving between my now THREE flavors of Nano images, DLI for Intro, DLI for Deepstream, and Developer. I’m very appreciative of all the work that went into making the material and coursework available. The slight differences between the images make transferring the learning (see what I did there) challenging.

I hope to get the SSDV2 up tonight on a webcam and CSI stream with DS5, and will report back on FPS.


Success! Confirmed operation of SSDMobilenetV2 under DS5 using TensorFlow 1.15.3. I thought I would be able to bring over my compiled files from the DLI DS5 image, but I hit a few nvinfer library errors. After building everything over again it works.


Thanks for letting us know @spacebaseone.

I’ll hold off a little longer before I downgrade to an older version of TF. This actually worked in the DS5.0 developer preview with a slight change to the command to use python 3 instead of 2.

Hoping NVIDIA will look at this soon…

Hi @jasonpgf2a
As you can find in the Compatibility notice of the TensorRT 7.1.x release notes - https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-7.html#rel_7-1-3 - TRT 7.1, which is the TRT version in JetPack 4.4, was tested with TensorFlow 1.15.2, so convert-to-uff is not expected to be fully compatible with TF 2.x.

And, in a future TRT release (maybe TRT 9.x), UFF support will be deprecated, so it will not be improved much; we recommend converting the TF model to ONNX for TRT inference.
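For reference, the ONNX route for a frozen TF graph usually goes through tf2onnx (pip3 install tf2onnx). A sketch only - the input/output tensor names below are the usual ones for TF Object Detection SSD graphs and may differ for your model:

```shell
# Sketch: assumes tf2onnx is installed and frozen_inference_graph.pb is in
# the current directory; verify the tensor names against your own graph.
python3 -m tf2onnx.convert \
    --graphdef frozen_inference_graph.pb \
    --output frozen_inference_graph.onnx \
    --inputs image_tensor:0 \
    --outputs detection_boxes:0,detection_scores:0,detection_classes:0,num_detections:0
```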


Thanks!

Thanks @mchi

Strange that this worked very well with the developer preview then?

I’m just using the standard sample SSD model that you (nvidia) provide, so given this guidance about UFF being deprecated, why doesn’t nvidia provide the SSD model in ONNX format?
Maybe the instructions in the SSD sample README could be changed to show us how to do it as a tutorial…

@spacebaseone I followed your instructions and it all worked. Thx.