Where is the right tutorial for implementing detection on the Jetson Nano?

Is it this repo:

I tried it, but it does not seem to work. Can anyone point me to the right links?

Best

I found another link that is more recently updated:

But when I follow this step:
sudo pip3 install -U numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 enum34 futures protobuf

it just gets stuck as shown below, without any further progress:

camvi@nvidia-nano:~$ sudo pip3 install -U numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 enum34 futures protobuf
WARNING: The directory '/home/camvi/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting numpy==1.16.1
Downloading numpy-1.16.1.zip (5.1 MB)
|████████████████████████████████| 5.1 MB 4.8 MB/s
Collecting future==0.17.1
Downloading future-0.17.1.tar.gz (829 kB)
|████████████████████████████████| 829 kB 10.8 MB/s
Collecting mock==3.0.5
Downloading mock-3.0.5-py2.py3-none-any.whl (25 kB)
Collecting h5py==2.9.0
Downloading h5py-2.9.0.tar.gz (287 kB)
|████████████████████████████████| 287 kB 6.5 MB/s

Hi xiamenhai, can you try running pip3 with the --verbose flag to get more information?
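
For example, the same install command with --verbose appended:

sudo pip3 install --verbose -U numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 enum34 futures protobuf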

The last few lines of output when using pip3 --verbose:

Downloading h5py-2.9.0.tar.gz (287 kB)
|████████████████████████████████| 287 kB 14.3 MB/s
Added h5py==2.9.0 from https://files.pythonhosted.org/packages/43/27/a6e7dcb8ae20a4dbf3725321058923fec262b6f7835179d78ccc8d98deec/h5py-2.9.0.tar.gz#sha256=9d41ca62daf36d6b6515ab8765e4c8c4388ee18e2a665701fef2b41563821002 to build tracker '/tmp/pip-req-tracker-ry65d040'
Running setup.py (path:/tmp/pip-install-msg0vum3/h5py/setup.py) egg_info for package h5py
Running command python setup.py egg_info
WARNING: The directory '/home/camvi/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
WARNING: The directory '/home/camvi/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.

I tried with the -H flag. It did the job, but it took 30 minutes to finish...

sudo -H pip3 install -U numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 enum34 futures protobuf
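
If you would rather keep the pip cache enabled, an alternative suggested by the warning itself is to give the cache directory back to your user (a sketch, assuming the default group name matches the camvi user shown in the log):

sudo chown -R camvi:camvi /home/camvi/.cache/pip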

A new issue with installing pycuda:

I tried to install pycuda with:
pip3 install pycuda --user

But I got the following error. Do you have any ideas? What is the right way to install it?

Running setup.py clean for pycuda
Failed to build pycuda
Installing collected packages: pycuda
Running setup.py install for pycuda ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-nc3x8n1v/pycuda/setup.py'"'"'; __file__='"'"'/tmp/pip-install-nc3x8n1v/pycuda/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-pr9wyucd/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/camvi/.local/include/python3.6m/pycuda
cwd: /tmp/pip-install-nc3x8n1v/pycuda/
Complete output (106 lines):
***************************************************************
*** WARNING: nvcc not in path.
*** May need to set CUDA_INC_DIR for installation to succeed.
***************************************************************
*************************************************************
*** I have detected that you have not run configure.py.
*************************************************************
*** Additionally, no global config files were found.
*** I will go ahead with the default configuration.
*** In all likelihood, this will not work out.
***
*** See README_SETUP.txt for more information.
***
*** If the build does fail, just re-run configure.py with the
*** correct arguments, and then retry. Good luck!
*************************************************************
*** HIT Ctrl-C NOW IF THIS IS NOT WHAT YOU WANT
*************************************************************
Continuing in 1 seconds...
/usr/lib/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'test_requires'
warnings.warn(msg)
running install
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.6
creating build/lib.linux-aarch64-3.6/pycuda
copying pycuda/debug.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/elementwise.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/characterize.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/curandom.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/driver.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/reduction.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/gpuarray.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/autoinit.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/_cluda.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/scan.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/tools.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/_mymako.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/__init__.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/cumath.py → build/lib.linux-aarch64-3.6/pycuda
copying pycuda/compiler.py → build/lib.linux-aarch64-3.6/pycuda
creating build/lib.linux-aarch64-3.6/pycuda/gl
copying pycuda/gl/autoinit.py → build/lib.linux-aarch64-3.6/pycuda/gl
copying pycuda/gl/__init__.py → build/lib.linux-aarch64-3.6/pycuda/gl
creating build/lib.linux-aarch64-3.6/pycuda/sparse
copying pycuda/sparse/coordinate.py → build/lib.linux-aarch64-3.6/pycuda/sparse
copying pycuda/sparse/pkt_build.py → build/lib.linux-aarch64-3.6/pycuda/sparse
copying pycuda/sparse/cg.py → build/lib.linux-aarch64-3.6/pycuda/sparse
copying pycuda/sparse/inner.py → build/lib.linux-aarch64-3.6/pycuda/sparse
copying pycuda/sparse/packeted.py → build/lib.linux-aarch64-3.6/pycuda/sparse
copying pycuda/sparse/__init__.py → build/lib.linux-aarch64-3.6/pycuda/sparse
copying pycuda/sparse/operator.py → build/lib.linux-aarch64-3.6/pycuda/sparse
creating build/lib.linux-aarch64-3.6/pycuda/compyte
copying pycuda/compyte/array.py → build/lib.linux-aarch64-3.6/pycuda/compyte
copying pycuda/compyte/dtypes.py → build/lib.linux-aarch64-3.6/pycuda/compyte
copying pycuda/compyte/__init__.py → build/lib.linux-aarch64-3.6/pycuda/compyte
running egg_info
writing pycuda.egg-info/PKG-INFO
writing dependency_links to pycuda.egg-info/dependency_links.txt
writing requirements to pycuda.egg-info/requires.txt
writing top-level names to pycuda.egg-info/top_level.txt
reading manifest file 'pycuda.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'doc/source/_static/*.css'
warning: no files found matching 'doc/source/_templates/*.html'
warning: no files found matching '*.cpp' under directory 'bpl-subset/bpl_subset/boost'
warning: no files found matching '*.html' under directory 'bpl-subset/bpl_subset/boost'
warning: no files found matching '*.inl' under directory 'bpl-subset/bpl_subset/boost'
warning: no files found matching '*.txt' under directory 'bpl-subset/bpl_subset/boost'
warning: no files found matching '*.h' under directory 'bpl-subset/bpl_subset/libs'
warning: no files found matching '*.ipp' under directory 'bpl-subset/bpl_subset/libs'
warning: no files found matching '*.pl' under directory 'bpl-subset/bpl_subset/libs'
writing manifest file 'pycuda.egg-info/SOURCES.txt'
creating build/lib.linux-aarch64-3.6/pycuda/cuda
copying pycuda/cuda/pycuda-complex-impl.hpp → build/lib.linux-aarch64-3.6/pycuda/cuda
copying pycuda/cuda/pycuda-complex.hpp → build/lib.linux-aarch64-3.6/pycuda/cuda
copying pycuda/cuda/pycuda-helpers.hpp → build/lib.linux-aarch64-3.6/pycuda/cuda
copying pycuda/sparse/pkt_build_cython.pyx → build/lib.linux-aarch64-3.6/pycuda/sparse
running build_ext
building '_driver' extension
creating build/temp.linux-aarch64-3.6
creating build/temp.linux-aarch64-3.6/src
creating build/temp.linux-aarch64-3.6/src/cpp
creating build/temp.linux-aarch64-3.6/src/wrapper
creating build/temp.linux-aarch64-3.6/bpl-subset
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset/libs
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset/libs/python
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset/libs/python/src
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset/libs/python/src/object
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset/libs/python/src/converter
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset/libs/system
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset/libs/system/src
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset/libs/smart_ptr
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset/libs/smart_ptr/src
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset/libs/thread
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset/libs/thread/src
creating build/temp.linux-aarch64-3.6/bpl-subset/bpl_subset/libs/thread/src/pthread
aarch64-linux-gnu-gcc -pthread -fwrapv -Wall -O3 -DNDEBUG -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DBOOST_ALL_NO_LIB=1 -DBOOST_THREAD_BUILD_DLL=1 -DBOOST_MULTI_INDEX_DISABLE_SERIALIZATION=1 -DBOOST_PYTHON_SOURCE=1 -Dboost=pycudaboost -DBOOST_THREAD_DONT_USE_CHRONO=1 -DPYGPU_PACKAGE=pycuda -DPYGPU_PYCUDA=1 -DHAVE_CURAND=1 -Isrc/cpp -Ibpl-subset/bpl_subset -I/usr/local/lib/python3.6/dist-packages/numpy/core/include -I/usr/include/python3.6m -c src/cpp/cuda.cpp -o build/temp.linux-aarch64-3.6/src/cpp/cuda.o
In file included from src/cpp/cuda.cpp:4:0:
src/cpp/cuda.hpp:14:10: fatal error: cuda.h: No such file or directory
#include <cuda.h>
^~~~~~~~
compilation terminated.
error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-nc3x8n1v/pycuda/setup.py'"'"'; __file__='"'"'/tmp/pip-install-nc3x8n1v/pycuda/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-pr9wyucd/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/camvi/.local/include/python3.6m/pycuda Check the logs for full command output.

Add these lines to the bottom of your ~/.bashrc file:

export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

Then close your terminal, open a new one, and try installing pycuda again.
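
To check that the CUDA toolkit is actually on your PATH before retrying (the paths below are the same /usr/local/cuda locations used in the exports above):

source ~/.bashrc
which nvcc        # should print /usr/local/cuda/bin/nvcc
nvcc --version
pip3 install pycuda --user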

Thank you, it did the job.

When building the engine with ssd_mobilenet_v1 (I downloaded it from the TF detection model zoo and retrained it with my own data), I hit an error:
[TensorRT] ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time

Do you have any ideas? What's missing here?
Many thanks!

Details:

camvi@nvidia-nano:~/zhen/tensorrt_demos/ssd$ ./build_engines.sh

for model in ssd_mobilenet_v1_egohands
python3 build_engine.py ssd_mobilenet_v1_egohands
[TensorRT] INFO: Plugin Creator registration succeeded - GridAnchor_TRT
[TensorRT] INFO: Plugin Creator registration succeeded - NMS_TRT
[TensorRT] INFO: Plugin Creator registration succeeded - Reorg_TRT
[TensorRT] INFO: Plugin Creator registration succeeded - Region_TRT
[TensorRT] INFO: Plugin Creator registration succeeded - Clip_TRT
[TensorRT] INFO: Plugin Creator registration succeeded - LReLU_TRT
[TensorRT] INFO: Plugin Creator registration succeeded - PriorBox_TRT
[TensorRT] INFO: Plugin Creator registration succeeded - Normalize_TRT
[TensorRT] INFO: Plugin Creator registration succeeded - RPROI_TRT
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/graphsurgeon/StaticGraph.py:123: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
WARNING: To create TensorRT plugin nodes, please use the create_plugin_node function instead.
UFF Version 0.5.5
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 300
      }
      dim {
        size: 300
      }
    }
  }
}
]

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting MultipleGridAnchorGenerator as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
No. nodes: 450
UFF Output written to /home/camvi/zhen/tensorrt_demos/ssd/tmp_v1_egohands.uff
UFF Text Output written to /home/camvi/zhen/tensorrt_demos/ssd/tmp_v1_egohands.pbtxt
[TensorRT] INFO: UFFParser: parsing Input
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_0/weights
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D
[TensorRT] INFO: UFFParser: Convolution: add Padding Layer to support asymmetric padding
[TensorRT] INFO: UFFParser: Convolution: Left: 0
[TensorRT] INFO: UFFParser: Convolution: Right: 1
[TensorRT] INFO: UFFParser: Convolution: Top: 0
[TensorRT] INFO: UFFParser: Convolution: Bottom: 1
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_0/BatchNorm/gamma
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_0/BatchNorm/beta
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_0/BatchNorm/moving_mean
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_0/BatchNorm/moving_variance
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/FusedBatchNorm
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6
[TensorRT] INFO: Setting Dynamic Range for (Unnamed ITensor* 9) to 0.0472441
[TensorRT] INFO: Setting Dynamic Range for (Unnamed ITensor* 11) to 0.0472441
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/BatchNorm/gamma
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/BatchNorm/beta
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/BatchNorm/moving_mean
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/BatchNorm/moving_variance
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/BatchNorm/FusedBatchNorm
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/Relu6
[TensorRT] INFO: Setting Dynamic Range for (Unnamed ITensor* 19) to 0.0472441
[TensorRT] INFO: Setting Dynamic Range for (Unnamed ITensor* 21) to 0.0472441
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_1_pointwise/weights
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_pointwise/Conv2D

[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_10_depthwise/BatchNorm/moving_variance
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_10_depthwise/BatchNorm/FusedBatchNorm
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_10_depthwise/Relu6
[TensorRT] INFO: Setting Dynamic Range for (Unnamed ITensor* 201) to 0.0472441
[TensorRT] INFO: Setting Dynamic Range for (Unnamed ITensor* 203) to 0.0472441
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_10_pointwise/weights
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Conv2D
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_10_pointwise/BatchNorm/gamma
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_10_pointwise/BatchNorm/beta
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_10_pointwise/BatchNorm/moving_mean
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_10_pointwise/BatchNorm/moving_variance
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_10_pointwise/BatchNorm/FusedBatchNorm
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Relu6
[TensorRT] INFO: Setting Dynamic Range for (Unnamed ITensor* 211) to 0.0472441
[TensorRT] INFO: Setting Dynamic Range for (Unnamed ITensor* 213) to 0.0472441
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_11_depthwise/depthwise_weights
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_11_depthwise/depthwise
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_11_depthwise/BatchNorm/gamma
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_11_depthwise/BatchNorm/beta
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_11_depthwise/BatchNorm/moving_mean
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_11_depthwise/BatchNorm/moving_variance
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_11_depthwise/BatchNorm/FusedBatchNorm
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_11_depthwise/Relu6
[TensorRT] INFO: Setting Dynamic Range for (Unnamed ITensor* 221) to 0.0472441
[TensorRT] INFO: Setting Dynamic Range for (Unnamed ITensor* 223) to 0.0472441
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_11_pointwise/weights
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_11_pointwise/Conv2D
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_11_pointwise/BatchNorm/gamma
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_11_pointwise/BatchNorm/beta
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_11_pointwise/BatchNorm/moving_mean
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_11_pointwise/BatchNorm/moving_variance
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_11_pointwise/BatchNorm/FusedBatchNorm
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_11_pointwise/Relu6
[TensorRT] INFO: Setting Dynamic Range for (Unnamed ITensor* 231) to 0.0472441
[TensorRT] INFO: Setting Dynamic Range for (Unnamed ITensor* 233) to 0.0472441
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/BoxEncodingPredictor/weights
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/BoxEncodingPredictor/Conv2D
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/BoxEncodingPredictor/biases
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/BoxEncodingPredictor/BiasAdd
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/Shape
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/strided_slice/stack
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/strided_slice/stack_1
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/strided_slice/stack_2
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/strided_slice
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/Reshape/shape/1
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/Reshape/shape/2
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/Reshape/shape/3
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/Reshape/shape
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/Reshape
[TensorRT] ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
  File "build_engine.py", line 218, in <module>
    main()
  File "build_engine.py", line 212, in main
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
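
For context on the "Reshape: -1 dimension specified more than 1 time" error above: a reshape may leave at most one dimension as -1 (to be inferred from the total element count), and the UFF parser is rejecting a reshape spec in which more than one dimension ended up unknown. The same rule is easy to see in plain numpy (an illustration only, unrelated to the model files):

import numpy as np
x = np.zeros(12)
np.reshape(x, (-1, 4))      # OK: the single -1 is inferred as 3
np.reshape(x, (-1, -1, 4))  # ValueError: can only specify one unknown dimension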

I get output similar to the above with ssd_mobilenet_v2:

python3 main.py 000012.jpg 
[TensorRT] ERROR: Could not register plugin creator:  FlattenConcat_TRT in namespace: 
WARNING: To create TensorRT plugin nodes, please use the `create_plugin_node` function instead.
NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.5
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
input: "Cast"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 300
      }
      dim {
        size: 300
      }
    }
  }
}
]
=========================================

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_5_1x1_64/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_4_1x1_128/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_3_3x3_s2_256/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_3_1x1_128/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_2_3x3_s2_512/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_2_1x1_256/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/Conv_1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_16/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_16/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_16/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_15/add as custom op: AddV2
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_14/add as custom op: AddV2
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_13/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_13/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_13/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_12/add as custom op: AddV2
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_11/add as custom op: AddV2
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_10/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_10/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_10/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_9/add as custom op: AddV2
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_8/add as custom op: AddV2
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_7/add as custom op: AddV2
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_6/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_6/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_6/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_5/add as custom op: AddV2
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_4/add as custom op: AddV2
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_3/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_3/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_3/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_2/add as custom op: AddV2
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_1/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_1/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_1/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/Conv/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: Cast yet.
Converting Cast as custom op: Cast
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_2/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_2/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_2/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_4/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_4/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_4/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_5/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_5/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_5/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_7/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_7/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_7/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_8/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_8/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_8/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_9/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_9/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_9/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_11/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_11/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_11/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_12/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_12/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_12/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_14/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_14/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_14/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_15/project/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_15/depthwise/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_15/expand/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting GridAnchor as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['NMS'] as outputs
No. nodes: 644
UFF Output written to tmp.uff
[TensorRT] ERROR: UffParser: Validator error: FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/BatchNorm/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
  File "main.py", line 43, in <module>
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
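
Regarding the "Unsupported operation _FusedBatchNormV3" error at the end: the UFF parser here only knows the older FusedBatchNorm op. One workaround that has been reported (an assumption on my side, not something verified in this thread) is to rewrite those nodes with graphsurgeon before converting to UFF, since the V3 variant behaves the same for inference. A minimal sketch, with the frozen-graph file names being placeholders:

import graphsurgeon as gs

# Load the frozen TensorFlow graph (hypothetical path; adjust to your model).
dynamic_graph = gs.DynamicGraph('frozen_inference_graph.pb')

# Rename every FusedBatchNormV3 node to the op name the UFF parser supports.
for node in dynamic_graph.find_nodes_by_op('FusedBatchNormV3'):
    node.op = 'FusedBatchNorm'

# Write the patched graph and feed this file to the UFF converter instead.
dynamic_graph.write('frozen_inference_graph_patched.pb')

The AddV2 and Cast custom-op warnings in the same log may need similar graph surgery.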