YOLOv3 to TensorRT on Ubuntu 18.04 with GeForce GTX 1050Ti

Details on the platform I am using:
Linux distro: Ubuntu 18.04.3 LTS bionic
GPU type: GTX 1050Ti
Nvidia driver version: 440.33.01
CUDA version: 10.2 (according to nvidia-smi) and 9.1.85 (according to nvcc --version)
CUDNN version: 7.6.5 (according to CUDNN_H_PATH=$(whereis cudnn.h) and cat ${CUDNN_H_PATH} | grep CUDNN_MAJOR -A 2)
Python version: Python 2.7.17
TensorRT version: 7.0.0 (according to dpkg -l | grep nvinfer)

Problem description

I am trying to run YOLOv3 by converting it to a TensorRT engine (Darknet weights to ONNX to TensorRT), following this repo:

https://gitlab.com/aminehy/YOLOv3-Darknet-ONNX-TensorRT/tree/master

Downloaded TensorRT as described at https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#downloading

nv-tensorrt-repo-ubuntu1804-cuda10.2-trt7.0.0.11-ga-20191216_1-1_amd64.deb
Ran

os="ubuntu1804"
tag="cudax.x-trt7.x.x.x-ga-yyyymmdd"
sudo dpkg -i nv-tensorrt-repo-${os}-${tag}_1-1_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-${tag}/7fa2af80.pub
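
For the .deb downloaded above, the placeholders read off the package filename as:

os="ubuntu1804"
tag="cuda10.2-trt7.0.0.11-ga-20191216"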

sudo apt-get update
sudo apt-get install tensorrt

Python 2

sudo apt-get install python-libnvinfer-dev

Python 3

sudo apt-get install python3-libnvinfer-dev

TensorFlow

sudo apt-get install uff-converter-tf

Verified

dpkg -l | grep TensorRT

Got

ii  graphsurgeon-tf                                              7.0.0-1+cuda10.2                                amd64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                                               7.0.0-1+cuda10.2                                amd64        TensorRT binaries
ii  libnvinfer-dev                                               7.0.0-1+cuda10.2                                amd64        TensorRT development libraries and headers
ii  libnvinfer-doc                                               7.0.0-1+cuda10.2                                all          TensorRT documentation
ii  libnvinfer-plugin-dev                                        7.0.0-1+cuda10.2                                amd64        TensorRT plugin libraries
ii  libnvinfer-plugin7                                           7.0.0-1+cuda10.2                                amd64        TensorRT plugin libraries
ii  libnvinfer-samples                                           7.0.0-1+cuda10.2                                all          TensorRT samples
ii  libnvinfer7                                                  7.0.0-1+cuda10.2                                amd64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                                         7.0.0-1+cuda10.2                                amd64        TensorRT ONNX libraries
ii  libnvonnxparsers7                                            7.0.0-1+cuda10.2                                amd64        TensorRT ONNX libraries
ii  libnvparsers-dev                                             7.0.0-1+cuda10.2                                amd64        TensorRT parsers libraries
ii  libnvparsers7                                                7.0.0-1+cuda10.2                                amd64        TensorRT parsers libraries
ii  python-libnvinfer                                            7.0.0-1+cuda10.2                                amd64        Python bindings for TensorRT
ii  python3-libnvinfer                                           7.0.0-1+cuda10.2                                amd64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                                       7.0.0-1+cuda10.2                                amd64        Python 3 development package for TensorRT
ii  tensorrt                                                     7.0.0.11-1+cuda10.2                             amd64        Meta package of TensorRT
ii  uff-converter-tf                                             7.0.0-1+cuda10.2                                amd64        UFF converter for TensorRT package

Downloaded and installed
cuda-repo-ubuntu1804_10.2.89-1_amd64.deb

os="ubuntu1x04"
cuda="x.y.z"
wget https://developer.download.nvidia.com/compute/cuda/repos/${os}/x86_64/cuda-repo-${os}_${cuda}-1_amd64.deb
sudo dpkg -i cuda-repo-*.deb

NVIDIA Machine Learning network repository
nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb

os="ubuntu1x04"
wget https://developer.download.nvidia.com/compute/machine-learning/repos/${os}/x86_64/nvidia-machine-learning-repo-${os}_1.0.0-1_amd64.deb

sudo dpkg -i nvidia-machine-learning-repo-*.deb
sudo apt-get update

For running TensorRT with Python:

sudo apt-get install python-libnvinfer python3-libnvinfer

Checked the installation
Python 2

Python 2.7.17 (default, Nov  7 2019, 10:07:09) 
[GCC 7.4.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorrt
>>>
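
As an extra sanity check (not part of the sample), the bindings also expose a version string, which should match the dpkg output above:

>>> import tensorrt
>>> tensorrt.__version__
'7.0.0.11'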

Ran yolov3_to_onnx.py inside YOLOv3-Darknet-ONNX-TensorRT

Output:

Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
graph YOLOv3-608 (
  %000_net[FLOAT, 64x3x416x416]
) optional inputs with matching initializers (
  %001_convolutional_bn_scale[FLOAT, 32]
  %001_convolutional_bn_bias[FLOAT, 32]
  %001_convolutional_bn_mean[FLOAT, 32]
......

  %105_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%104_convolutional_lrelu, %105_convolutional_conv_weights)
  %105_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%105_convolutional, %105_convolutional_bn_scale, %105_convolutional_bn_bias, %105_convolutional_bn_mean, %105_convolutional_bn_var)
  %105_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%105_convolutional_bn)
  %106_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%105_convolutional_lrelu, %106_convolutional_conv_weights, %106_convolutional_conv_bias)
  return %082_convolutional, %094_convolutional, %106_convolutional
}
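
The "Layer of type yolo not supported" messages are expected: the sample exports only the convolutional layers to ONNX and implements the three yolo detection layers in NumPy post-processing, which is why the graph returns the three raw feature maps (%082, %094, %106). Their 255 channels check out against the standard YOLOv3/COCO settings:

num_anchors_per_scale = 3
num_classes = 80  # COCO
# per anchor: 4 box coordinates + 1 objectness score + 80 class scores
channels = num_anchors_per_scale * (5 + num_classes)
assert channels == 255  # matches the 255-channel conv outputs %082/%094/%106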

Python 3

Python 3.6.9 (default, Nov  7 2019, 10:44:02) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorrt
>>>

Ran python onnx_to_tensorrt.py

Got this error:

[TensorRT] ERROR: INVALID_ARGUMENT: Cannot deserialize with an empty memory buffer.
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "onnx_to_tensorrt.py", line 192, in <module>
    main()
  File "onnx_to_tensorrt.py", line 131, in main
    with get_engine(onnx_file_path, engine_file_path) as engine, engine.create_execution_context() as context:
AttributeError: __exit__
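
Reading the traceback together with the two TensorRT errors, it looks like get_engine() returned None: deserializing yolov3.trt failed (empty memory buffer), and the with statement then dies with AttributeError: __exit__. A zero-byte yolov3.trt left over from an earlier failed build could explain this. A minimal guard along these lines (reusing the sample's get_engine() and file names) at least makes the failure explicit:

import os

onnx_file_path = "yolov3.onnx"
engine_file_path = "yolov3.trt"

# A stale empty engine file makes deserialize_cuda_engine() fail and return None;
# removing it forces get_engine() to rebuild from the ONNX file instead.
if os.path.exists(engine_file_path) and os.path.getsize(engine_file_path) == 0:
    os.remove(engine_file_path)

engine = get_engine(onnx_file_path, engine_file_path)  # get_engine() as defined in the sample
if engine is None:
    raise RuntimeError("Engine build/deserialization failed; see TensorRT errors above")
with engine, engine.create_execution_context() as context:
    pass  # run inference as in the rest of main()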

I also tried changing line 62 of onnx_to_tensorrt.py, reducing max_workspace_size from

builder.max_workspace_size = 1 << 30  # 1GB

to

builder.max_workspace_size = 1 << 15  # 32 KB
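
Note that max_workspace_size is in bytes, so 1 << 15 is only 32 KB, which is almost certainly too small to build the engine at all. A gentler reduction for a 4 GB card would be something like:

builder.max_workspace_size = 1 << 28  # 256 MB (1 << 30 is 1 GB; 1 << 15 is only 32 KB)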

I also tried the same thing using the container from https://github.com/NVIDIA/TensorRT/issues/302 and got the same error.
What is going wrong?

Hi,

Can you try running this sample using our NGC container? I wasn’t able to reproduce your issue; the sample worked for me, but this was on a V100 GPU.

Commands:

nvidia-docker run -it -v ${PWD}:/mnt nvcr.io/nvidia/tensorrt:19.12-py2
/opt/tensorrt/python/python_setup.sh
cd /opt/tensorrt/samples/python/yolov3_onnx/
python yolov3_to_onnx.py
python onnx_to_tensorrt.py
root@a706cc944e40:/workspace/tensorrt/samples/python/yolov3_onnx# python onnx_to_tensorrt.py
Downloading from https://github.com/pjreddie/darknet/raw/f86901f6177dfc6116360a13cc06ab680e0c86b0/data/dog.jpg, this may take a while...
100% [............................................................................] 163759 / 163759
Loading ONNX file from path yolov3.onnx...
Beginning ONNX file parsing
Completed parsing of ONNX file
Building an engine from file yolov3.onnx; this may take a while...
Completed creating Engine
Running inference on image dog.jpg...
[[135.14841098 219.59878846 184.30208646 324.0265199 ]
 [ 98.30807283 135.72612824 499.71261624 299.25580544]
 [478.00606086  81.25701542 210.57787267  86.91503773]] [0.99854713 0.99880403 0.93829264] [16  1  7]
Saved image with bounding boxes of detected objects to dog_bboxes.png.
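
For reference, assuming the standard zero-indexed COCO class list the sample uses, those class IDs decode to dog, bicycle, and truck:

coco_names = {1: "bicycle", 7: "truck", 16: "dog"}  # relevant subset of the 80 COCO classes
print([coco_names[i] for i in (16, 1, 7)])  # ['dog', 'bicycle', 'truck']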

Could you please test it out on a local system with a GTX 1050Ti?

I’ve modified the TensorRT ‘yolov3_onnx’ sample so that it can do object detection with real-time camera/video inputs. I’ve tested the code on an Ubuntu 18.04 x86_64 PC with a GeForce GTX-2080 Ti, and I was getting 30-31 FPS for TensorRT (5.0.5) optimized ‘yolov3-608’ and ~52 FPS for ‘yolov3-416’.

https://github.com/jkjung-avt/tensorrt_demos#yolov3
https://jkjung-avt.github.io/tensorrt-yolov3/

I’m sure the code works for both TensorRT 5 and 6, but I have yet to test it with TensorRT 7. Feel free to give it a try; I welcome feedback.

My info

  • TensorRT version: 7.0.0.11
  • CUDA version: 10.2
  • TensorFlow-gpu: 1.14.0
  • cuDNN version: 7.6.5
  • GPU: GTX 1060
  • Ubuntu: 18.04

I can run the YOLOv3 Python sample on a local system.

Command:

python2 yolov3_to_onnx.py

I got the following output:

Collecting onnx
  Downloading https://files.pythonhosted.org/packages/db/2b/cf306bf1e32cd5e34b6fca83b1ed53a8e82827404b15f4d6398990d82090/onnx-1.6.0-cp27-cp27mu-manylinux1_x86_64.whl (4.8MB)
     |████████████████████████████████| 4.8MB 960kB/s 
Collecting six
  Downloading https://files.pythonhosted.org/packages/65/26/32b8464df2a97e6dd1b656ed26b2c194606c16fe163c695a992b36c11cdf/six-1.13.0-py2.py3-none-any.whl
Collecting numpy
  Downloading https://files.pythonhosted.org/packages/d7/b1/3367ea1f372957f97a6752ec725b87886e12af1415216feec9067e31df70/numpy-1.16.5-cp27-cp27mu-manylinux1_x86_64.whl (17.0MB)
     |████████████████████████████████| 17.0MB 13.0MB/s 
Collecting typing>=3.6.4; python_version < "3.5" 
  Downloading https://files.pythonhosted.org/packages/22/30/64ca29543375759dc589ade14a6cd36382abf2bec17d67de8481bc9814d7/typing-3.7.4.1-py2-none-any.whl
Collecting protobuf
  Downloading https://files.pythonhosted.org/packages/13/5c/ba4572a4d952b8db68c4534168a6d2a946b354de5e2b779efb44d4d0b72c/protobuf-3.11.2-cp27-cp27mu-manylinux1_x86_64.whl (1.3MB)
     |████████████████████████████████| 1.3MB 15.5MB/s 
Collecting typing-extensions>=3.6.2.1
  Downloading https://files.pythonhosted.org/packages/89/0b/611af6b186e4e59e290fee1a5b4a6c47a4ce29d7cb9b5141fc73c38d8b65/typing_extensions-3.7.4.1-py2-none-any.whl
Requirement already satisfied: setuptools in /usr/local/lib/python2.7/dist-packages (from protobuf->onnx) (42.0.2)
Installing collected packages: six, numpy, typing, protobuf, typing-extensions, onnx
Successfully installed numpy-1.16.5 onnx-1.6.0 protobuf-3.11.2 six-1.13.0 typing-3.7.4.1 typing-extensions-3.7.4.1
yolov3_onnx ›› python2 yolov3_to_onnx.py                                                                                                                                                                        
Downloading from https://pjreddie.com/media/files/yolov3.weights, this may take a while...
100% [......................................................................] 248007048 / 248007048
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
graph YOLOv3-608 (
  %000_net[FLOAT, 64x3x608x608]
) optional inputs with matching initializers (
  %001_convolutional_bn_scale[FLOAT, 32]
  %001_convolutional_bn_bias[FLOAT, 32]
  %001_convolutional_bn_mean[FLOAT, 32]
  %001_convolutional_bn_var[FLOAT, 32]
  %001_convolutional_conv_weights[FLOAT, 32x3x3x3]
  %002_convolutional_bn_scale[FLOAT, 64]
  %002_convolutional_bn_bias[FLOAT, 64]
  %002_convolutional_bn_mean[FLOAT, 64]
  %002_convolutional_bn_var[FLOAT, 64]
  %002_convolutional_conv_weights[FLOAT, 64x32x3x3]
  %003_convolutional_bn_scale[FLOAT, 32]
  %003_convolutional_bn_bias[FLOAT, 32]
  %003_convolutional_bn_mean[FLOAT, 32]
  %003_convolutional_bn_var[FLOAT, 32]
  %003_convolutional_conv_weights[FLOAT, 32x64x1x1]
  %004_convolutional_bn_scale[FLOAT, 64]
  %004_convolutional_bn_bias[FLOAT, 64]
  %004_convolutional_bn_mean[FLOAT, 64]
  %004_convolutional_bn_var[FLOAT, 64]
  %004_convolutional_conv_weights[FLOAT, 64x32x3x3]
  %006_convolutional_bn_scale[FLOAT, 128]
  %006_convolutional_bn_bias[FLOAT, 128]
  %006_convolutional_bn_mean[FLOAT, 128]
  %006_convolutional_bn_var[FLOAT, 128]
  %006_convolutional_conv_weights[FLOAT, 128x64x3x3]
  %007_convolutional_bn_scale[FLOAT, 64]
  %007_convolutional_bn_bias[FLOAT, 64]
  %007_convolutional_bn_mean[FLOAT, 64]
  %007_convolutional_bn_var[FLOAT, 64]
  %007_convolutional_conv_weights[FLOAT, 64x128x1x1]
  %008_convolutional_bn_scale[FLOAT, 128]
  %008_convolutional_bn_bias[FLOAT, 128]
  %008_convolutional_bn_mean[FLOAT, 128]
  %008_convolutional_bn_var[FLOAT, 128]
  %008_convolutional_conv_weights[FLOAT, 128x64x3x3]
  %010_convolutional_bn_scale[FLOAT, 64]
  %010_convolutional_bn_bias[FLOAT, 64]
  %010_convolutional_bn_mean[FLOAT, 64]
  %010_convolutional_bn_var[FLOAT, 64]
  %010_convolutional_conv_weights[FLOAT, 64x128x1x1]
  %011_convolutional_bn_scale[FLOAT, 128]
  %011_convolutional_bn_bias[FLOAT, 128]
  %011_convolutional_bn_mean[FLOAT, 128]
  %011_convolutional_bn_var[FLOAT, 128]
  %011_convolutional_conv_weights[FLOAT, 128x64x3x3]
  %013_convolutional_bn_scale[FLOAT, 256]
  %013_convolutional_bn_bias[FLOAT, 256]
  %013_convolutional_bn_mean[FLOAT, 256]
  %013_convolutional_bn_var[FLOAT, 256]
  %013_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %014_convolutional_bn_scale[FLOAT, 128]
  %014_convolutional_bn_bias[FLOAT, 128]
  %014_convolutional_bn_mean[FLOAT, 128]
  %014_convolutional_bn_var[FLOAT, 128]
  %014_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %015_convolutional_bn_scale[FLOAT, 256]
  %015_convolutional_bn_bias[FLOAT, 256]
  %015_convolutional_bn_mean[FLOAT, 256]
  %015_convolutional_bn_var[FLOAT, 256]
  %015_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %017_convolutional_bn_scale[FLOAT, 128]
  %017_convolutional_bn_bias[FLOAT, 128]
  %017_convolutional_bn_mean[FLOAT, 128]
  %017_convolutional_bn_var[FLOAT, 128]
  %017_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %018_convolutional_bn_scale[FLOAT, 256]
  %018_convolutional_bn_bias[FLOAT, 256]
  %018_convolutional_bn_mean[FLOAT, 256]
  %018_convolutional_bn_var[FLOAT, 256]
  %018_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %020_convolutional_bn_scale[FLOAT, 128]
  %020_convolutional_bn_bias[FLOAT, 128]
  %020_convolutional_bn_mean[FLOAT, 128]
  %020_convolutional_bn_var[FLOAT, 128]
  %020_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %021_convolutional_bn_scale[FLOAT, 256]
  %021_convolutional_bn_bias[FLOAT, 256]
  %021_convolutional_bn_mean[FLOAT, 256]
  %021_convolutional_bn_var[FLOAT, 256]
  %021_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %023_convolutional_bn_scale[FLOAT, 128]
  %023_convolutional_bn_bias[FLOAT, 128]
  %023_convolutional_bn_mean[FLOAT, 128]
  %023_convolutional_bn_var[FLOAT, 128]
  %023_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %024_convolutional_bn_scale[FLOAT, 256]
  %024_convolutional_bn_bias[FLOAT, 256]
  %024_convolutional_bn_mean[FLOAT, 256]
  %024_convolutional_bn_var[FLOAT, 256]
  %024_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %026_convolutional_bn_scale[FLOAT, 128]
  %026_convolutional_bn_bias[FLOAT, 128]
  %026_convolutional_bn_mean[FLOAT, 128]
  %026_convolutional_bn_var[FLOAT, 128]
  %026_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %027_convolutional_bn_scale[FLOAT, 256]
  %027_convolutional_bn_bias[FLOAT, 256]
  %027_convolutional_bn_mean[FLOAT, 256]
  %027_convolutional_bn_var[FLOAT, 256]
  %027_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %029_convolutional_bn_scale[FLOAT, 128]
  %029_convolutional_bn_bias[FLOAT, 128]
  %029_convolutional_bn_mean[FLOAT, 128]
  %029_convolutional_bn_var[FLOAT, 128]
  %029_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %030_convolutional_bn_scale[FLOAT, 256]
  %030_convolutional_bn_bias[FLOAT, 256]
  %030_convolutional_bn_mean[FLOAT, 256]
  %030_convolutional_bn_var[FLOAT, 256]
  %030_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %032_convolutional_bn_scale[FLOAT, 128]
  %032_convolutional_bn_bias[FLOAT, 128]
  %032_convolutional_bn_mean[FLOAT, 128]
  %032_convolutional_bn_var[FLOAT, 128]
  %032_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %033_convolutional_bn_scale[FLOAT, 256]
  %033_convolutional_bn_bias[FLOAT, 256]
  %033_convolutional_bn_mean[FLOAT, 256]
  %033_convolutional_bn_var[FLOAT, 256]
  %033_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %035_convolutional_bn_scale[FLOAT, 128]
  %035_convolutional_bn_bias[FLOAT, 128]
  %035_convolutional_bn_mean[FLOAT, 128]
  %035_convolutional_bn_var[FLOAT, 128]
  %035_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %036_convolutional_bn_scale[FLOAT, 256]
  %036_convolutional_bn_bias[FLOAT, 256]
  %036_convolutional_bn_mean[FLOAT, 256]
  %036_convolutional_bn_var[FLOAT, 256]
  %036_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %038_convolutional_bn_scale[FLOAT, 512]
  %038_convolutional_bn_bias[FLOAT, 512]
  %038_convolutional_bn_mean[FLOAT, 512]
  %038_convolutional_bn_var[FLOAT, 512]
  %038_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %039_convolutional_bn_scale[FLOAT, 256]
  %039_convolutional_bn_bias[FLOAT, 256]
  %039_convolutional_bn_mean[FLOAT, 256]
  %039_convolutional_bn_var[FLOAT, 256]
  %039_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %040_convolutional_bn_scale[FLOAT, 512]
  %040_convolutional_bn_bias[FLOAT, 512]
  %040_convolutional_bn_mean[FLOAT, 512]
  %040_convolutional_bn_var[FLOAT, 512]
  %040_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %042_convolutional_bn_scale[FLOAT, 256]
  %042_convolutional_bn_bias[FLOAT, 256]
  %042_convolutional_bn_mean[FLOAT, 256]
  %042_convolutional_bn_var[FLOAT, 256]
  %042_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %043_convolutional_bn_scale[FLOAT, 512]
  %043_convolutional_bn_bias[FLOAT, 512]
  %043_convolutional_bn_mean[FLOAT, 512]
  %043_convolutional_bn_var[FLOAT, 512]
  %043_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %045_convolutional_bn_scale[FLOAT, 256]
  %045_convolutional_bn_bias[FLOAT, 256]
  %045_convolutional_bn_mean[FLOAT, 256]
  %045_convolutional_bn_var[FLOAT, 256]
  %045_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %046_convolutional_bn_scale[FLOAT, 512]
  %046_convolutional_bn_bias[FLOAT, 512]
  %046_convolutional_bn_mean[FLOAT, 512]
  %046_convolutional_bn_var[FLOAT, 512]
  %046_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %048_convolutional_bn_scale[FLOAT, 256]
  %048_convolutional_bn_bias[FLOAT, 256]
  %048_convolutional_bn_mean[FLOAT, 256]
  %048_convolutional_bn_var[FLOAT, 256]
  %048_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %049_convolutional_bn_scale[FLOAT, 512]
  %049_convolutional_bn_bias[FLOAT, 512]
  %049_convolutional_bn_mean[FLOAT, 512]
  %049_convolutional_bn_var[FLOAT, 512]
  %049_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %051_convolutional_bn_scale[FLOAT, 256]
  %051_convolutional_bn_bias[FLOAT, 256]
  %051_convolutional_bn_mean[FLOAT, 256]
  %051_convolutional_bn_var[FLOAT, 256]
  %051_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %052_convolutional_bn_scale[FLOAT, 512]
  %052_convolutional_bn_bias[FLOAT, 512]
  %052_convolutional_bn_mean[FLOAT, 512]
  %052_convolutional_bn_var[FLOAT, 512]
  %052_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %054_convolutional_bn_scale[FLOAT, 256]
  %054_convolutional_bn_bias[FLOAT, 256]
  %054_convolutional_bn_mean[FLOAT, 256]
  %054_convolutional_bn_var[FLOAT, 256]
  %054_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %055_convolutional_bn_scale[FLOAT, 512]
  %055_convolutional_bn_bias[FLOAT, 512]
  %055_convolutional_bn_mean[FLOAT, 512]
  %055_convolutional_bn_var[FLOAT, 512]
  %055_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %057_convolutional_bn_scale[FLOAT, 256]
  %057_convolutional_bn_bias[FLOAT, 256]
  %057_convolutional_bn_mean[FLOAT, 256]
  %057_convolutional_bn_var[FLOAT, 256]
  %057_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %058_convolutional_bn_scale[FLOAT, 512]
  %058_convolutional_bn_bias[FLOAT, 512]
  %058_convolutional_bn_mean[FLOAT, 512]
  %058_convolutional_bn_var[FLOAT, 512]
  %058_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %060_convolutional_bn_scale[FLOAT, 256]
  %060_convolutional_bn_bias[FLOAT, 256]
  %060_convolutional_bn_mean[FLOAT, 256]
  %060_convolutional_bn_var[FLOAT, 256]
  %060_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %061_convolutional_bn_scale[FLOAT, 512]
  %061_convolutional_bn_bias[FLOAT, 512]
  %061_convolutional_bn_mean[FLOAT, 512]
  %061_convolutional_bn_var[FLOAT, 512]
  %061_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %063_convolutional_bn_scale[FLOAT, 1024]
  %063_convolutional_bn_bias[FLOAT, 1024]
  %063_convolutional_bn_mean[FLOAT, 1024]
  %063_convolutional_bn_var[FLOAT, 1024]
  %063_convolutional_conv_weights[FLOAT, 1024x512x3x3]
  %064_convolutional_bn_scale[FLOAT, 512]
  %064_convolutional_bn_bias[FLOAT, 512]
  %064_convolutional_bn_mean[FLOAT, 512]
  %064_convolutional_bn_var[FLOAT, 512]
  %064_convolutional_conv_weights[FLOAT, 512x1024x1x1]
  %065_convolutional_bn_scale[FLOAT, 1024]
  %065_convolutional_bn_bias[FLOAT, 1024]
  %065_convolutional_bn_mean[FLOAT, 1024]
  %065_convolutional_bn_var[FLOAT, 1024]
  %065_convolutional_conv_weights[FLOAT, 1024x512x3x3]
  %067_convolutional_bn_scale[FLOAT, 512]
  %067_convolutional_bn_bias[FLOAT, 512]
  %067_convolutional_bn_mean[FLOAT, 512]
  %067_convolutional_bn_var[FLOAT, 512]
  %067_convolutional_conv_weights[FLOAT, 512x1024x1x1]
  %068_convolutional_bn_scale[FLOAT, 1024]
  %068_convolutional_bn_bias[FLOAT, 1024]
  %068_convolutional_bn_mean[FLOAT, 1024]
  %068_convolutional_bn_var[FLOAT, 1024]
  %068_convolutional_conv_weights[FLOAT, 1024x512x3x3]
  %070_convolutional_bn_scale[FLOAT, 512]
  %070_convolutional_bn_bias[FLOAT, 512]
  %070_convolutional_bn_mean[FLOAT, 512]
  %070_convolutional_bn_var[FLOAT, 512]
  %070_convolutional_conv_weights[FLOAT, 512x1024x1x1]
  %071_convolutional_bn_scale[FLOAT, 1024]
  %071_convolutional_bn_bias[FLOAT, 1024]
  %071_convolutional_bn_mean[FLOAT, 1024]
  %071_convolutional_bn_var[FLOAT, 1024]
  %071_convolutional_conv_weights[FLOAT, 1024x512x3x3]
  %073_convolutional_bn_scale[FLOAT, 512]
  %073_convolutional_bn_bias[FLOAT, 512]
  %073_convolutional_bn_mean[FLOAT, 512]
  %073_convolutional_bn_var[FLOAT, 512]
  %073_convolutional_conv_weights[FLOAT, 512x1024x1x1]
  %074_convolutional_bn_scale[FLOAT, 1024]
  %074_convolutional_bn_bias[FLOAT, 1024]
  %074_convolutional_bn_mean[FLOAT, 1024]
  %074_convolutional_bn_var[FLOAT, 1024]
  %074_convolutional_conv_weights[FLOAT, 1024x512x3x3]
  %076_convolutional_bn_scale[FLOAT, 512]
  %076_convolutional_bn_bias[FLOAT, 512]
  %076_convolutional_bn_mean[FLOAT, 512]
  %076_convolutional_bn_var[FLOAT, 512]
  %076_convolutional_conv_weights[FLOAT, 512x1024x1x1]
  %077_convolutional_bn_scale[FLOAT, 1024]
  %077_convolutional_bn_bias[FLOAT, 1024]
  %077_convolutional_bn_mean[FLOAT, 1024]
  %077_convolutional_bn_var[FLOAT, 1024]
  %077_convolutional_conv_weights[FLOAT, 1024x512x3x3]
  %078_convolutional_bn_scale[FLOAT, 512]
  %078_convolutional_bn_bias[FLOAT, 512]
  %078_convolutional_bn_mean[FLOAT, 512]
  %078_convolutional_bn_var[FLOAT, 512]
  %078_convolutional_conv_weights[FLOAT, 512x1024x1x1]
  %079_convolutional_bn_scale[FLOAT, 1024]
  %079_convolutional_bn_bias[FLOAT, 1024]
  %079_convolutional_bn_mean[FLOAT, 1024]
  %079_convolutional_bn_var[FLOAT, 1024]
  %079_convolutional_conv_weights[FLOAT, 1024x512x3x3]
  %080_convolutional_bn_scale[FLOAT, 512]
  %080_convolutional_bn_bias[FLOAT, 512]
  %080_convolutional_bn_mean[FLOAT, 512]
  %080_convolutional_bn_var[FLOAT, 512]
  %080_convolutional_conv_weights[FLOAT, 512x1024x1x1]
  %081_convolutional_bn_scale[FLOAT, 1024]
  %081_convolutional_bn_bias[FLOAT, 1024]
  %081_convolutional_bn_mean[FLOAT, 1024]
  %081_convolutional_bn_var[FLOAT, 1024]
  %081_convolutional_conv_weights[FLOAT, 1024x512x3x3]
  %082_convolutional_conv_bias[FLOAT, 255]
  %082_convolutional_conv_weights[FLOAT, 255x1024x1x1]
  %085_convolutional_bn_scale[FLOAT, 256]
  %085_convolutional_bn_bias[FLOAT, 256]
  %085_convolutional_bn_mean[FLOAT, 256]
  %085_convolutional_bn_var[FLOAT, 256]
  %085_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %086_upsample_scale[FLOAT, 4]
  %086_upsample_roi[FLOAT, 4]
  %088_convolutional_bn_scale[FLOAT, 256]
  %088_convolutional_bn_bias[FLOAT, 256]
  %088_convolutional_bn_mean[FLOAT, 256]
  %088_convolutional_bn_var[FLOAT, 256]
  %088_convolutional_conv_weights[FLOAT, 256x768x1x1]
  %089_convolutional_bn_scale[FLOAT, 512]
  %089_convolutional_bn_bias[FLOAT, 512]
  %089_convolutional_bn_mean[FLOAT, 512]
  %089_convolutional_bn_var[FLOAT, 512]
  %089_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %090_convolutional_bn_scale[FLOAT, 256]
  %090_convolutional_bn_bias[FLOAT, 256]
  %090_convolutional_bn_mean[FLOAT, 256]
  %090_convolutional_bn_var[FLOAT, 256]
  %090_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %091_convolutional_bn_scale[FLOAT, 512]
  %091_convolutional_bn_bias[FLOAT, 512]
  %091_convolutional_bn_mean[FLOAT, 512]
  %091_convolutional_bn_var[FLOAT, 512]
  %091_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %092_convolutional_bn_scale[FLOAT, 256]
  %092_convolutional_bn_bias[FLOAT, 256]
  %092_convolutional_bn_mean[FLOAT, 256]
  %092_convolutional_bn_var[FLOAT, 256]
  %092_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %093_convolutional_bn_scale[FLOAT, 512]
  %093_convolutional_bn_bias[FLOAT, 512]
  %093_convolutional_bn_mean[FLOAT, 512]
  %093_convolutional_bn_var[FLOAT, 512]
  %093_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %094_convolutional_conv_bias[FLOAT, 255]
  %094_convolutional_conv_weights[FLOAT, 255x512x1x1]
  %097_convolutional_bn_scale[FLOAT, 128]
  %097_convolutional_bn_bias[FLOAT, 128]
  %097_convolutional_bn_mean[FLOAT, 128]
  %097_convolutional_bn_var[FLOAT, 128]
  %097_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %098_upsample_scale[FLOAT, 4]
  %098_upsample_roi[FLOAT, 4]
  %100_convolutional_bn_scale[FLOAT, 128]
  %100_convolutional_bn_bias[FLOAT, 128]
  %100_convolutional_bn_mean[FLOAT, 128]
  %100_convolutional_bn_var[FLOAT, 128]
  %100_convolutional_conv_weights[FLOAT, 128x384x1x1]
  %101_convolutional_bn_scale[FLOAT, 256]
  %101_convolutional_bn_bias[FLOAT, 256]
  %101_convolutional_bn_mean[FLOAT, 256]
  %101_convolutional_bn_var[FLOAT, 256]
  %101_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %102_convolutional_bn_scale[FLOAT, 128]
  %102_convolutional_bn_bias[FLOAT, 128]
  %102_convolutional_bn_mean[FLOAT, 128]
  %102_convolutional_bn_var[FLOAT, 128]
  %102_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %103_convolutional_bn_scale[FLOAT, 256]
  %103_convolutional_bn_bias[FLOAT, 256]
  %103_convolutional_bn_mean[FLOAT, 256]
  %103_convolutional_bn_var[FLOAT, 256]
  %103_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %104_convolutional_bn_scale[FLOAT, 128]
  %104_convolutional_bn_bias[FLOAT, 128]
  %104_convolutional_bn_mean[FLOAT, 128]
  %104_convolutional_bn_var[FLOAT, 128]
  %104_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %105_convolutional_bn_scale[FLOAT, 256]
  %105_convolutional_bn_bias[FLOAT, 256]
  %105_convolutional_bn_mean[FLOAT, 256]
  %105_convolutional_bn_var[FLOAT, 256]
  %105_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %106_convolutional_conv_bias[FLOAT, 255]
  %106_convolutional_conv_weights[FLOAT, 255x256x1x1]
) {
  %001_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%000_net, %001_convolutional_conv_weights)
  %001_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%001_convolutional, %001_convolutional_bn_scale, %001_convolutional_bn_bias, %001_convolutional_bn_mean, %001_convolutional_bn_var)
  %001_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%001_convolutional_bn)
  %002_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%001_convolutional_lrelu, %002_convolutional_conv_weights)
  %002_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%002_convolutional, %002_convolutional_bn_scale, %002_convolutional_bn_bias, %002_convolutional_bn_mean, %002_convolutional_bn_var)
  %002_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%002_convolutional_bn)
  %003_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%002_convolutional_lrelu, %003_convolutional_conv_weights)
  %003_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%003_convolutional, %003_convolutional_bn_scale, %003_convolutional_bn_bias, %003_convolutional_bn_mean, %003_convolutional_bn_var)
  %003_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%003_convolutional_bn)
  %004_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%003_convolutional_lrelu, %004_convolutional_conv_weights)
  %004_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%004_convolutional, %004_convolutional_bn_scale, %004_convolutional_bn_bias, %004_convolutional_bn_mean, %004_convolutional_bn_var)
  %004_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%004_convolutional_bn)
  %005_shortcut = Add(%004_convolutional_lrelu, %002_convolutional_lrelu)
  %006_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%005_shortcut, %006_convolutional_conv_weights)
  %006_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%006_convolutional, %006_convolutional_bn_scale, %006_convolutional_bn_bias, %006_convolutional_bn_mean, %006_convolutional_bn_var)
  %006_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%006_convolutional_bn)
  %007_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%006_convolutional_lrelu, %007_convolutional_conv_weights)
  %007_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%007_convolutional, %007_convolutional_bn_scale, %007_convolutional_bn_bias, %007_convolutional_bn_mean, %007_convolutional_bn_var)
  %007_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%007_convolutional_bn)
  %008_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%007_convolutional_lrelu, %008_convolutional_conv_weights)
  %008_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%008_convolutional, %008_convolutional_bn_scale, %008_convolutional_bn_bias, %008_convolutional_bn_mean, %008_convolutional_bn_var)
  %008_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%008_convolutional_bn)
  %009_shortcut = Add(%008_convolutional_lrelu, %006_convolutional_lrelu)
  %010_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%009_shortcut, %010_convolutional_conv_weights)
  %010_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%010_convolutional, %010_convolutional_bn_scale, %010_convolutional_bn_bias, %010_convolutional_bn_mean, %010_convolutional_bn_var)
  %010_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%010_convolutional_bn)
  %011_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%010_convolutional_lrelu, %011_convolutional_conv_weights)
  %011_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%011_convolutional, %011_convolutional_bn_scale, %011_convolutional_bn_bias, %011_convolutional_bn_mean, %011_convolutional_bn_var)
  %011_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%011_convolutional_bn)
  %012_shortcut = Add(%011_convolutional_lrelu, %009_shortcut)
  %013_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%012_shortcut, %013_convolutional_conv_weights)
  %013_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%013_convolutional, %013_convolutional_bn_scale, %013_convolutional_bn_bias, %013_convolutional_bn_mean, %013_convolutional_bn_var)
  %013_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%013_convolutional_bn)
  %014_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%013_convolutional_lrelu, %014_convolutional_conv_weights)
  %014_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%014_convolutional, %014_convolutional_bn_scale, %014_convolutional_bn_bias, %014_convolutional_bn_mean, %014_convolutional_bn_var)
  %014_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%014_convolutional_bn)
  %015_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%014_convolutional_lrelu, %015_convolutional_conv_weights)
  %015_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%015_convolutional, %015_convolutional_bn_scale, %015_convolutional_bn_bias, %015_convolutional_bn_mean, %015_convolutional_bn_var)
  %015_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%015_convolutional_bn)
  %016_shortcut = Add(%015_convolutional_lrelu, %013_convolutional_lrelu)
  %017_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%016_shortcut, %017_convolutional_conv_weights)
  %017_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%017_convolutional, %017_convolutional_bn_scale, %017_convolutional_bn_bias, %017_convolutional_bn_mean, %017_convolutional_bn_var)
  %017_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%017_convolutional_bn)
  %018_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%017_convolutional_lrelu, %018_convolutional_conv_weights)
  %018_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%018_convolutional, %018_convolutional_bn_scale, %018_convolutional_bn_bias, %018_convolutional_bn_mean, %018_convolutional_bn_var)
  %018_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%018_convolutional_bn)
  %019_shortcut = Add(%018_convolutional_lrelu, %016_shortcut)
  %020_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%019_shortcut, %020_convolutional_conv_weights)
  %020_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%020_convolutional, %020_convolutional_bn_scale, %020_convolutional_bn_bias, %020_convolutional_bn_mean, %020_convolutional_bn_var)
  %020_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%020_convolutional_bn)
  %021_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%020_convolutional_lrelu, %021_convolutional_conv_weights)
  %021_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%021_convolutional, %021_convolutional_bn_scale, %021_convolutional_bn_bias, %021_convolutional_bn_mean, %021_convolutional_bn_var)
  %021_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%021_convolutional_bn)
  %022_shortcut = Add(%021_convolutional_lrelu, %019_shortcut)
  %023_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%022_shortcut, %023_convolutional_conv_weights)
  %023_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%023_convolutional, %023_convolutional_bn_scale, %023_convolutional_bn_bias, %023_convolutional_bn_mean, %023_convolutional_bn_var)
  %023_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%023_convolutional_bn)
  %024_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%023_convolutional_lrelu, %024_convolutional_conv_weights)
  %024_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%024_convolutional, %024_convolutional_bn_scale, %024_convolutional_bn_bias, %024_convolutional_bn_mean, %024_convolutional_bn_var)
  %024_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%024_convolutional_bn)
  %025_shortcut = Add(%024_convolutional_lrelu, %022_shortcut)
  %026_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%025_shortcut, %026_convolutional_conv_weights)
  %026_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%026_convolutional, %026_convolutional_bn_scale, %026_convolutional_bn_bias, %026_convolutional_bn_mean, %026_convolutional_bn_var)
  %026_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%026_convolutional_bn)
  %027_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%026_convolutional_lrelu, %027_convolutional_conv_weights)
  %027_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%027_convolutional, %027_convolutional_bn_scale, %027_convolutional_bn_bias, %027_convolutional_bn_mean, %027_convolutional_bn_var)
  %027_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%027_convolutional_bn)
  %028_shortcut = Add(%027_convolutional_lrelu, %025_shortcut)
  %029_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%028_shortcut, %029_convolutional_conv_weights)
  %029_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%029_convolutional, %029_convolutional_bn_scale, %029_convolutional_bn_bias, %029_convolutional_bn_mean, %029_convolutional_bn_var)
  %029_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%029_convolutional_bn)
  %030_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%029_convolutional_lrelu, %030_convolutional_conv_weights)
  %030_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%030_convolutional, %030_convolutional_bn_scale, %030_convolutional_bn_bias, %030_convolutional_bn_mean, %030_convolutional_bn_var)
  %030_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%030_convolutional_bn)
  %031_shortcut = Add(%030_convolutional_lrelu, %028_shortcut)
  %032_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%031_shortcut, %032_convolutional_conv_weights)
  %032_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%032_convolutional, %032_convolutional_bn_scale, %032_convolutional_bn_bias, %032_convolutional_bn_mean, %032_convolutional_bn_var)
  %032_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%032_convolutional_bn)
  %033_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%032_convolutional_lrelu, %033_convolutional_conv_weights)
  %033_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%033_convolutional, %033_convolutional_bn_scale, %033_convolutional_bn_bias, %033_convolutional_bn_mean, %033_convolutional_bn_var)
  %033_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%033_convolutional_bn)
  %034_shortcut = Add(%033_convolutional_lrelu, %031_shortcut)
  %035_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%034_shortcut, %035_convolutional_conv_weights)
  %035_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%035_convolutional, %035_convolutional_bn_scale, %035_convolutional_bn_bias, %035_convolutional_bn_mean, %035_convolutional_bn_var)
  %035_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%035_convolutional_bn)
  %036_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%035_convolutional_lrelu, %036_convolutional_conv_weights)
  %036_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%036_convolutional, %036_convolutional_bn_scale, %036_convolutional_bn_bias, %036_convolutional_bn_mean, %036_convolutional_bn_var)
  %036_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%036_convolutional_bn)
  %037_shortcut = Add(%036_convolutional_lrelu, %034_shortcut)
  %038_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%037_shortcut, %038_convolutional_conv_weights)
  %038_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%038_convolutional, %038_convolutional_bn_scale, %038_convolutional_bn_bias, %038_convolutional_bn_mean, %038_convolutional_bn_var)
  %038_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%038_convolutional_bn)
  %039_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%038_convolutional_lrelu, %039_convolutional_conv_weights)
  %039_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%039_convolutional, %039_convolutional_bn_scale, %039_convolutional_bn_bias, %039_convolutional_bn_mean, %039_convolutional_bn_var)
  %039_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%039_convolutional_bn)
  %040_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%039_convolutional_lrelu, %040_convolutional_conv_weights)
  %040_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%040_convolutional, %040_convolutional_bn_scale, %040_convolutional_bn_bias, %040_convolutional_bn_mean, %040_convolutional_bn_var)
  %040_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%040_convolutional_bn)
  %041_shortcut = Add(%040_convolutional_lrelu, %038_convolutional_lrelu)
  %042_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%041_shortcut, %042_convolutional_conv_weights)
  %042_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%042_convolutional, %042_convolutional_bn_scale, %042_convolutional_bn_bias, %042_convolutional_bn_mean, %042_convolutional_bn_var)
  %042_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%042_convolutional_bn)
  %043_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%042_convolutional_lrelu, %043_convolutional_conv_weights)
  %043_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%043_convolutional, %043_convolutional_bn_scale, %043_convolutional_bn_bias, %043_convolutional_bn_mean, %043_convolutional_bn_var)
  %043_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%043_convolutional_bn)
  %044_shortcut = Add(%043_convolutional_lrelu, %041_shortcut)
  %045_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%044_shortcut, %045_convolutional_conv_weights)
  %045_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%045_convolutional, %045_convolutional_bn_scale, %045_convolutional_bn_bias, %045_convolutional_bn_mean, %045_convolutional_bn_var)
  %045_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%045_convolutional_bn)
  %046_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%045_convolutional_lrelu, %046_convolutional_conv_weights)
  %046_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%046_convolutional, %046_convolutional_bn_scale, %046_convolutional_bn_bias, %046_convolutional_bn_mean, %046_convolutional_bn_var)
  %046_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%046_convolutional_bn)
  %047_shortcut = Add(%046_convolutional_lrelu, %044_shortcut)
  %048_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%047_shortcut, %048_convolutional_conv_weights)
  %048_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%048_convolutional, %048_convolutional_bn_scale, %048_convolutional_bn_bias, %048_convolutional_bn_mean, %048_convolutional_bn_var)
  %048_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%048_convolutional_bn)
  %049_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%048_convolutional_lrelu, %049_convolutional_conv_weights)
  %049_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%049_convolutional, %049_convolutional_bn_scale, %049_convolutional_bn_bias, %049_convolutional_bn_mean, %049_convolutional_bn_var)
  %049_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%049_convolutional_bn)
  %050_shortcut = Add(%049_convolutional_lrelu, %047_shortcut)
  %051_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%050_shortcut, %051_convolutional_conv_weights)
  %051_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%051_convolutional, %051_convolutional_bn_scale, %051_convolutional_bn_bias, %051_convolutional_bn_mean, %051_convolutional_bn_var)
  %051_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%051_convolutional_bn)
  %052_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%051_convolutional_lrelu, %052_convolutional_conv_weights)
  %052_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%052_convolutional, %052_convolutional_bn_scale, %052_convolutional_bn_bias, %052_convolutional_bn_mean, %052_convolutional_bn_var)
  %052_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%052_convolutional_bn)
  %053_shortcut = Add(%052_convolutional_lrelu, %050_shortcut)
  %054_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%053_shortcut, %054_convolutional_conv_weights)
  %054_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%054_convolutional, %054_convolutional_bn_scale, %054_convolutional_bn_bias, %054_convolutional_bn_mean, %054_convolutional_bn_var)
  %054_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%054_convolutional_bn)
  %055_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%054_convolutional_lrelu, %055_convolutional_conv_weights)
  %055_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%055_convolutional, %055_convolutional_bn_scale, %055_convolutional_bn_bias, %055_convolutional_bn_mean, %055_convolutional_bn_var)
  %055_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%055_convolutional_bn)
  %056_shortcut = Add(%055_convolutional_lrelu, %053_shortcut)
  %057_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%056_shortcut, %057_convolutional_conv_weights)
  %057_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%057_convolutional, %057_convolutional_bn_scale, %057_convolutional_bn_bias, %057_convolutional_bn_mean, %057_convolutional_bn_var)
  %057_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%057_convolutional_bn)
  %058_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%057_convolutional_lrelu, %058_convolutional_conv_weights)
  %058_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%058_convolutional, %058_convolutional_bn_scale, %058_convolutional_bn_bias, %058_convolutional_bn_mean, %058_convolutional_bn_var)
  %058_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%058_convolutional_bn)
  %059_shortcut = Add(%058_convolutional_lrelu, %056_shortcut)
  %060_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%059_shortcut, %060_convolutional_conv_weights)
  %060_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%060_convolutional, %060_convolutional_bn_scale, %060_convolutional_bn_bias, %060_convolutional_bn_mean, %060_convolutional_bn_var)
  %060_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%060_convolutional_bn)
  %061_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%060_convolutional_lrelu, %061_convolutional_conv_weights)
  %061_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%061_convolutional, %061_convolutional_bn_scale, %061_convolutional_bn_bias, %061_convolutional_bn_mean, %061_convolutional_bn_var)
  %061_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%061_convolutional_bn)
  %062_shortcut = Add(%061_convolutional_lrelu, %059_shortcut)
  %063_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%062_shortcut, %063_convolutional_conv_weights)
  %063_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%063_convolutional, %063_convolutional_bn_scale, %063_convolutional_bn_bias, %063_convolutional_bn_mean, %063_convolutional_bn_var)
  %063_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%063_convolutional_bn)
  %064_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%063_convolutional_lrelu, %064_convolutional_conv_weights)
  %064_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%064_convolutional, %064_convolutional_bn_scale, %064_convolutional_bn_bias, %064_convolutional_bn_mean, %064_convolutional_bn_var)
  %064_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%064_convolutional_bn)
  %065_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%064_convolutional_lrelu, %065_convolutional_conv_weights)
  %065_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%065_convolutional, %065_convolutional_bn_scale, %065_convolutional_bn_bias, %065_convolutional_bn_mean, %065_convolutional_bn_var)
  %065_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%065_convolutional_bn)
  %066_shortcut = Add(%065_convolutional_lrelu, %063_convolutional_lrelu)
  %067_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%066_shortcut, %067_convolutional_conv_weights)
  %067_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%067_convolutional, %067_convolutional_bn_scale, %067_convolutional_bn_bias, %067_convolutional_bn_mean, %067_convolutional_bn_var)
  %067_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%067_convolutional_bn)
  %068_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%067_convolutional_lrelu, %068_convolutional_conv_weights)
  %068_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%068_convolutional, %068_convolutional_bn_scale, %068_convolutional_bn_bias, %068_convolutional_bn_mean, %068_convolutional_bn_var)
  %068_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%068_convolutional_bn)
  %069_shortcut = Add(%068_convolutional_lrelu, %066_shortcut)
  %070_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%069_shortcut, %070_convolutional_conv_weights)
  %070_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%070_convolutional, %070_convolutional_bn_scale, %070_convolutional_bn_bias, %070_convolutional_bn_mean, %070_convolutional_bn_var)
  %070_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%070_convolutional_bn)
  %071_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%070_convolutional_lrelu, %071_convolutional_conv_weights)
  %071_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%071_convolutional, %071_convolutional_bn_scale, %071_convolutional_bn_bias, %071_convolutional_bn_mean, %071_convolutional_bn_var)
  %071_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%071_convolutional_bn)
  %072_shortcut = Add(%071_convolutional_lrelu, %069_shortcut)
  %073_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%072_shortcut, %073_convolutional_conv_weights)
  %073_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%073_convolutional, %073_convolutional_bn_scale, %073_convolutional_bn_bias, %073_convolutional_bn_mean, %073_convolutional_bn_var)
  %073_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%073_convolutional_bn)
  %074_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%073_convolutional_lrelu, %074_convolutional_conv_weights)
  %074_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%074_convolutional, %074_convolutional_bn_scale, %074_convolutional_bn_bias, %074_convolutional_bn_mean, %074_convolutional_bn_var)
  %074_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%074_convolutional_bn)
  %075_shortcut = Add(%074_convolutional_lrelu, %072_shortcut)
  %076_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%075_shortcut, %076_convolutional_conv_weights)
  %076_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%076_convolutional, %076_convolutional_bn_scale, %076_convolutional_bn_bias, %076_convolutional_bn_mean, %076_convolutional_bn_var)
  %076_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%076_convolutional_bn)
  %077_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%076_convolutional_lrelu, %077_convolutional_conv_weights)
  %077_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%077_convolutional, %077_convolutional_bn_scale, %077_convolutional_bn_bias, %077_convolutional_bn_mean, %077_convolutional_bn_var)
  %077_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%077_convolutional_bn)
  %078_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%077_convolutional_lrelu, %078_convolutional_conv_weights)
  %078_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%078_convolutional, %078_convolutional_bn_scale, %078_convolutional_bn_bias, %078_convolutional_bn_mean, %078_convolutional_bn_var)
  %078_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%078_convolutional_bn)
  %079_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%078_convolutional_lrelu, %079_convolutional_conv_weights)
  %079_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%079_convolutional, %079_convolutional_bn_scale, %079_convolutional_bn_bias, %079_convolutional_bn_mean, %079_convolutional_bn_var)
  %079_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%079_convolutional_bn)
  %080_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%079_convolutional_lrelu, %080_convolutional_conv_weights)
  %080_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%080_convolutional, %080_convolutional_bn_scale, %080_convolutional_bn_bias, %080_convolutional_bn_mean, %080_convolutional_bn_var)
  %080_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%080_convolutional_bn)
  %081_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%080_convolutional_lrelu, %081_convolutional_conv_weights)
  %081_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%081_convolutional, %081_convolutional_bn_scale, %081_convolutional_bn_bias, %081_convolutional_bn_mean, %081_convolutional_bn_var)
  %081_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%081_convolutional_bn)
  %082_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%081_convolutional_lrelu, %082_convolutional_conv_weights, %082_convolutional_conv_bias)
  %085_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%080_convolutional_lrelu, %085_convolutional_conv_weights)
  %085_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%085_convolutional, %085_convolutional_bn_scale, %085_convolutional_bn_bias, %085_convolutional_bn_mean, %085_convolutional_bn_var)
  %085_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%085_convolutional_bn)
  %086_upsample = Resize[coordinate_transformation_mode = u'asymmetric', mode = u'nearest', nearest_mode = u'floor'](%085_convolutional_lrelu, %086_upsample_roi, %086_upsample_scale)
  %087_route = Concat[axis = 1](%086_upsample, %062_shortcut)
  %088_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%087_route, %088_convolutional_conv_weights)
  %088_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%088_convolutional, %088_convolutional_bn_scale, %088_convolutional_bn_bias, %088_convolutional_bn_mean, %088_convolutional_bn_var)
  %088_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%088_convolutional_bn)
  %089_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%088_convolutional_lrelu, %089_convolutional_conv_weights)
  %089_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%089_convolutional, %089_convolutional_bn_scale, %089_convolutional_bn_bias, %089_convolutional_bn_mean, %089_convolutional_bn_var)
  %089_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%089_convolutional_bn)
  %090_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%089_convolutional_lrelu, %090_convolutional_conv_weights)
  %090_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%090_convolutional, %090_convolutional_bn_scale, %090_convolutional_bn_bias, %090_convolutional_bn_mean, %090_convolutional_bn_var)
  %090_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%090_convolutional_bn)
  %091_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%090_convolutional_lrelu, %091_convolutional_conv_weights)
  %091_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%091_convolutional, %091_convolutional_bn_scale, %091_convolutional_bn_bias, %091_convolutional_bn_mean, %091_convolutional_bn_var)
  %091_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%091_convolutional_bn)
  %092_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%091_convolutional_lrelu, %092_convolutional_conv_weights)
  %092_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%092_convolutional, %092_convolutional_bn_scale, %092_convolutional_bn_bias, %092_convolutional_bn_mean, %092_convolutional_bn_var)
  %092_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%092_convolutional_bn)
  %093_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%092_convolutional_lrelu, %093_convolutional_conv_weights)
  %093_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%093_convolutional, %093_convolutional_bn_scale, %093_convolutional_bn_bias, %093_convolutional_bn_mean, %093_convolutional_bn_var)
  %093_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%093_convolutional_bn)
  %094_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%093_convolutional_lrelu, %094_convolutional_conv_weights, %094_convolutional_conv_bias)
  %097_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%092_convolutional_lrelu, %097_convolutional_conv_weights)
  %097_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%097_convolutional, %097_convolutional_bn_scale, %097_convolutional_bn_bias, %097_convolutional_bn_mean, %097_convolutional_bn_var)
  %097_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%097_convolutional_bn)
  %098_upsample = Resize[coordinate_transformation_mode = u'asymmetric', mode = u'nearest', nearest_mode = u'floor'](%097_convolutional_lrelu, %098_upsample_roi, %098_upsample_scale)
  %099_route = Concat[axis = 1](%098_upsample, %037_shortcut)
  %100_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%099_route, %100_convolutional_conv_weights)
  %100_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%100_convolutional, %100_convolutional_bn_scale, %100_convolutional_bn_bias, %100_convolutional_bn_mean, %100_convolutional_bn_var)
  %100_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%100_convolutional_bn)
  %101_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%100_convolutional_lrelu, %101_convolutional_conv_weights)
  %101_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%101_convolutional, %101_convolutional_bn_scale, %101_convolutional_bn_bias, %101_convolutional_bn_mean, %101_convolutional_bn_var)
  %101_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%101_convolutional_bn)
  %102_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%101_convolutional_lrelu, %102_convolutional_conv_weights)
  %102_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%102_convolutional, %102_convolutional_bn_scale, %102_convolutional_bn_bias, %102_convolutional_bn_mean, %102_convolutional_bn_var)
  %102_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%102_convolutional_bn)
  %103_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%102_convolutional_lrelu, %103_convolutional_conv_weights)
  %103_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%103_convolutional, %103_convolutional_bn_scale, %103_convolutional_bn_bias, %103_convolutional_bn_mean, %103_convolutional_bn_var)
  %103_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%103_convolutional_bn)
  %104_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%103_convolutional_lrelu, %104_convolutional_conv_weights)
  %104_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%104_convolutional, %104_convolutional_bn_scale, %104_convolutional_bn_bias, %104_convolutional_bn_mean, %104_convolutional_bn_var)
  %104_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%104_convolutional_bn)
  %105_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%104_convolutional_lrelu, %105_convolutional_conv_weights)
  %105_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%105_convolutional, %105_convolutional_bn_scale, %105_convolutional_bn_bias, %105_convolutional_bn_mean, %105_convolutional_bn_var)
  %105_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%105_convolutional_bn)
  %106_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%105_convolutional_lrelu, %106_convolutional_conv_weights, %106_convolutional_conv_bias)
  return %082_convolutional, %094_convolutional, %106_convolutional
}

Then I ran the next command:

python3 onnx_to_tensorrt.py

Output:

Downloading from https://github.com/pjreddie/darknet/raw/f86901f6177dfc6116360a13cc06ab680e0c86b0/data/dog.jpg, this may take a while...
100% [............................................................................] 163759 / 163759
Loading ONNX file from path yolov3.onnx...
Beginning ONNX file parsing
Completed parsing of ONNX file
Building an engine from file yolov3.onnx; this may take a while...
Completed creating Engine
[TensorRT] WARNING: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
Running inference on image dog.jpg...
[[135.04631558 219.14286279 184.3172646  324.86085324]
 [ 98.95613542 135.56522746 499.10103538 299.16216674]
 [477.88944349  81.22835035 210.86733772  86.96320435]] [0.99852329 0.99881124 0.93929238] [16  1  7]
Saved image with bounding boxes of detected objects to dog_bboxes.png.
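For reference, the three printed arrays are presumably the detection boxes, the confidence scores, and the class IDs. Assuming the standard 80-class COCO ordering, a quick sketch to decode the IDs (the label subset below is an assumption, not from the script itself):

# Sketch: decode the class IDs printed above, assuming standard COCO ordering.
coco_subset = {1: 'bicycle', 7: 'truck', 16: 'dog'}
for cls_id, score in zip([16, 1, 7], [0.9985, 0.9988, 0.9393]):
    print('%s: %.3f' % (coco_subset[cls_id], score))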

yolov3_to_onnx.py: this file must be run with Python 2.
The other file (onnx_to_tensorrt.py) can be run with any Python version.
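A minimal sketch of the full two-step flow, assuming the scripts are invoked as in this thread:

python2 yolov3_to_onnx.py      # step 1: must be Python 2; exports yolov3.onnx
python3 onnx_to_tensorrt.py    # step 2: any Python; builds the engine and runs inference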

Cheers!

You could try this encapsulation of the official TensorRT YOLO implementation. There is no need to convert the model; you just need the darknet model (.weights): https://github.com/enazoe/yolo-tensorrt

Hi NVES_R,

I want to install nvidia-docker. Which version would you suggest installing: version 1 or version 2?

Thanks !!

When I ran

python3 yolov3_to_onnx.py --model yolov3-416

I got

Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Traceback (most recent call last):
  File "yolov3_to_onnx.py", line 819, in <module>
    main()
  File "yolov3_to_onnx.py", line 807, in main
    verbose=True)
  File "yolov3_to_onnx.py", line 444, in build_onnx_graph
    params)
  File "yolov3_to_onnx.py", line 280, in load_upsample_scales
    name, TensorProto.FLOAT, shape, data)
  File "/home/mysystem/.local/lib/python3.6/site-packages/onnx/helper.py", line 173, in make_tensor
    getattr(tensor, field).extend(vals)
TypeError: 1.0 has type numpy.float32, but expected one of: int, long, float

I did install

pip install onnx==1.4.1 --user

as suggested here https://devtalk.nvidia.com/default/topic/1052153/jetson-nano/tensorrt-backend-for-onnx-on-jetson-nano/post/5347666/#5347666

This is my pycuda info:

Name: pycuda
Version: 2019.1.2
Summary: Python wrapper for Nvidia CUDA
Home-page: http://mathema.tician.de/software/pycuda
Author: Andreas Kloeckner
Author-email: inform@tiker.net
License: MIT
Location: /home/mysystem/.virtualenvs/myenv/lib/python3.6/site-packages
Requires: decorator, mako, appdirs, pytools
Required-by:

Any idea what is happening?

@santhosh.dc, I suspect it’s because you are using an older version of numpy. (I’m using numpy 1.17.4.)

Could you do a sudo pip3 install -U numpy and try again?

I do have numpy 1.17.4 too

This is my pip list

Package                 Version            
----------------------- -------------------
appdirs                 1.4.3              
apturl                  0.5.2              
asn1crypto              0.24.0             
Brlapi                  0.6.6              
certifi                 2018.1.18          
chardet                 3.0.4              
command-not-found       0.3                
cryptography            2.1.4              
cupshelpers             1.0                
cycler                  0.10.0             
decorator               4.4.1              
defer                   1.0.6              
distro-info             0.18ubuntu0.18.04.1
ffmpeg-python           0.2.0              
future                  0.18.2             
httplib2                0.9.2              
idna                    2.6                
joblib                  0.14.1             
keyring                 10.6.0             
keyrings.alt            3.0                
kiwisolver              1.1.0              
language-selector       0.1                
launchpadlib            1.10.6             
lazr.restfulclient      0.13.5             
lazr.uri                1.0.3              
louis                   3.5.0              
macaroonbakery          1.1.3              
Mako                    1.1.0              
MarkupSafe              1.1.1              
matplotlib              3.1.2              
netifaces               0.10.4             
numpy                   1.17.4             
oauth                   1.0.1              
olefile                 0.45.1             
onnx                    1.4.1              
opencv-contrib-python   4.1.2.30           
opencv-python           4.1.2.30           
pandas                  0.25.3             
pbr                     5.4.4              
pexpect                 4.2.1              
Pillow                  5.1.0              
pip                     19.3.1             
protobuf                3.0.0              
pycairo                 1.16.2             
pycrypto                2.6.1              
pycuda                  2019.1.2           
pycups                  1.9.73             
pygobject               3.26.1             
pymacaroons             0.13.0             
PyNaCl                  1.1.2              
pyparsing               2.4.5              
pyRFC3339               1.0                
python-apt              1.6.4              
python-dateutil         2.6.1              
python-debian           0.1.32             
pytools                 2019.1.1           
pytz                    2018.3             
pyxdg                   0.25               
PyYAML                  3.12               
reportlab               3.4.0              
requests                2.18.4             
requests-unixsocket     0.1.5              
scikit-learn            0.22               
scipy                   1.4.1              
screen-resolution-extra 0.0.0              
SecretStorage           2.3.1              
setuptools              42.0.2             
Shapely                 1.6.4.post2        
simplejson              3.13.2             
six                     1.13.0             
stevedore               1.31.0             
system-service          0.3                
systemd-python          234                
torch                   1.3.1              
torchvision             0.4.2              
typing                  3.7.4.1            
typing-extensions       3.7.4.1            
ubuntu-drivers-common   0.0.0              
ufw                     0.36               
unattended-upgrades     0.1                
urllib3                 1.22               
usb-creator             0.3.3              
virtualenv              16.7.9             
virtualenv-clone        0.5.3              
virtualenvwrapper       4.8.4              
wadllib                 1.3.2              
wheel                   0.33.6             
xkit                    0.0.0

I am still confused as to what exactly is going wrong here.

@santhosh.dc, the exact line that caused the exception is:

getattr(tensor, field).extend(vals)

What it does is extend a protobuf repeated field (which behaves like a Python list) with a float32 numpy array. It worked fine on all the platforms I’ve tested (x86 and Jetson), so I don’t have a clue either…

Just to confirm the problem, could you check whether the following Python code snippet runs OK on your system?

import numpy as np
a = np.array([1, 2, 3], dtype=np.float32)
b = []
b.extend(a)
print(b)  # expected output: [1.0, 2.0, 3.0]
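If that list-based check passes, a closer reproduction of the actual failing call (a sketch, assuming onnx is importable) is to extend a TensorProto’s float_data repeated field directly, which is what make_tensor() does internally:

import numpy as np
from onnx import TensorProto

t = TensorProto()
# Same pattern as onnx/helper.py: extend the repeated float field
# with a float32 numpy array.
t.float_data.extend(np.array([1., 1., 2., 2.], dtype=np.float32))
print(list(t.float_data))  # expected output: [1.0, 1.0, 2.0, 2.0]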

Yup, I tested the snippet you provided and got the expected output.

I tried printing the values being fed into line 279 of yolov3_to_onnx.py:

print('name ',name)
print('TensorProto.FLOAT ',TensorProto.FLOAT)
print('shape ',shape)
print('data ',data)
scale_init = helper.make_tensor(name, TensorProto.FLOAT, shape, data)

This is the output I got after running the file:

Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
name  086_upsample_scale
TensorProto.FLOAT  1
shape  (4,)
data  [1. 1. 2. 2.]
Traceback (most recent call last):
  File "yolov3_to_onnx.py", line 823, in <module>
    main()
  File "yolov3_to_onnx.py", line 811, in main
    verbose=True)
  File "yolov3_to_onnx.py", line 448, in build_onnx_graph
    params)
  File "yolov3_to_onnx.py", line 284, in load_upsample_scales
    name, TensorProto.FLOAT, shape, data)
  File "/home/mysystem/.local/lib/python3.6/site-packages/onnx/helper.py", line 173, in make_tensor
    getattr(tensor, field).extend(vals)
TypeError: 1.0 has type numpy.float32, but expected one of: int, long, float

Can you do basic debugging with pdb? What I would do is fire up the program under pdb:

python3 -m pdb yolov3_to_onnx.py --model yolov3-416

Use the ‘c’ (continue) command to let the program run until the exception happens.

c

When it breaks out to the pdb prompt again, check (print) the following variables:

p tensor
p name
p field
p tensor[field]
p vals

I did

python3 -m pdb yolov3_to_onnx.py --model yolov3-416

Here’s where I added the print statements in helper.py:

field = mapping.STORAGE_TENSOR_TYPE_TO_FIELD[
            mapping.TENSOR_TYPE_TO_STORAGE_TENSOR_TYPE[data_type]]
print('tensor @@@@',tensor,' !!!!')
print('name @@@@',name,' !!!!')
print('field @@@@',field,' !!!!')
print('vals @@@@',vals,' !!!!')
print('tensor[field] @@@@',tensor[field],' !!!!')

Here’s the output:

> /home/mysystem/tensorrt_demos/yolov3_onnx/yolov3_to_onnx.py(52)<module>()
-> from __future__ import print_function
(Pdb) c
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
tensor @@@@ data_type: 1
name: "001_convolutional_bn_bias"
  !!!!
name @@@@ 001_convolutional_bn_bias  !!!!
field @@@@ float_data  !!!!
vals @@@@ [-4.31688499 -0.75780761 -2.10980177  1.74026382  1.4071269  -3.09520531
 -0.38860837  0.75603795  1.98280501  1.28932226  0.652888    2.62636328
  2.30130816 -2.04826999 -3.73402262 -2.04675984  3.84553504 -1.04196978
 -0.30135924 -0.35420752 -3.53542829 -2.62854791  0.74821305  0.39179569
  2.36271548 -1.79072249  2.5973146  -0.34963462 -2.69729233 -2.68875289
  0.99702346 -0.20098142]  !!!!
Traceback (most recent call last):
  File "/usr/lib/python3.6/pdb.py", line 1667, in main
    pdb._runscript(mainpyfile)
  File "/usr/lib/python3.6/pdb.py", line 1548, in _runscript
    self.run(statement)
  File "/usr/lib/python3.6/bdb.py", line 434, in run
    exec(cmd, globals, locals)
  File "<string>", line 1, in <module>
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/yolov3_to_onnx.py", line 52, in <module>
    from __future__ import print_function
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/yolov3_to_onnx.py", line 811, in main
    verbose=True)
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/yolov3_to_onnx.py", line 443, in build_onnx_graph
    params)
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/yolov3_to_onnx.py", line 303, in load_conv_weights
    conv_params, 'bn', 'bias')
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/yolov3_to_onnx.py", line 352, in _create_param_tensors
    param_name, TensorProto.FLOAT, param_data_shape, param_data)
  File "/home/mysystem/.local/lib/python3.6/site-packages/onnx/helper.py", line 177, in make_tensor
    print('tensor[field] @@@@',tensor[field],' !!!!')
TypeError: 'TensorProto' object is not subscriptable
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> /home/mysystem/.local/lib/python3.6/site-packages/onnx/helper.py(177)make_tensor()
-> print('tensor[field] @@@@',tensor[field],' !!!!')
(Pdb) exit()
Post mortem debugger finished. The yolov3_to_onnx.py will be restarted
> /home/mysystem/tensorrt_demos/yolov3_onnx/yolov3_to_onnx.py(52)<module>()
-> from __future__ import print_function
(Pdb) exit

Don’t add print() statements as you did. We are not interested in all calls to the make_tensor() function; we only care about the call where the exception happens, i.e. the one made by load_upsample_scales() when the code is parsing an ‘upsample’ layer, not the ‘convolutional_bn_bias’ layers.

Once again,

python3 -m pdb yolov3_to_onnx.py --model yolov3-416

Use the ‘c’ (continue) command to let the program run until the exception happens.

c

When it breaks out to the pdb prompt again, check (print) the following variables:

p tensor
p name
p field
p getattr(tensor, field)
p vals
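Alternatively (a sketch; the helper.py path and line number here are taken from your traceback, so adjust them for your install), you could set a conditional breakpoint so pdb stops only on the problematic call, without editing any files:

python3 -m pdb yolov3_to_onnx.py --model yolov3-416
(Pdb) b /home/mysystem/.local/lib/python3.6/site-packages/onnx/helper.py:178, name == '086_upsample_scale'
(Pdb) c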

Here’s the exact output

> /home/mysystem/tensorrt_demos/yolov3_onnx/yolov3_to_onnx.py(52)<module>()
-> from __future__ import print_function
(Pdb) c
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Traceback (most recent call last):
  File "/usr/lib/python3.6/pdb.py", line 1667, in main
    pdb._runscript(mainpyfile)
  File "/usr/lib/python3.6/pdb.py", line 1548, in _runscript
    self.run(statement)
  File "/usr/lib/python3.6/bdb.py", line 434, in run
    exec(cmd, globals, locals)
  File "<string>", line 1, in <module>
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/yolov3_to_onnx.py", line 52, in <module>
    from __future__ import print_function
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/yolov3_to_onnx.py", line 811, in main
    verbose=True)
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/yolov3_to_onnx.py", line 448, in build_onnx_graph
    params)
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/yolov3_to_onnx.py", line 284, in load_upsample_scales
    name, TensorProto.FLOAT, shape, data)
  File "/home/mysystem/.local/lib/python3.6/site-packages/onnx/helper.py", line 178, in make_tensor
    getattr(tensor, field).extend(vals)
TypeError: 1.0 has type numpy.float32, but expected one of: int, long, float
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> /home/mysystem/.local/lib/python3.6/site-packages/onnx/helper.py(178)make_tensor()
-> getattr(tensor, field).extend(vals)
(Pdb) p tensor
data_type: 1
name: "086_upsample_scale"

(Pdb) p name
'086_upsample_scale'
(Pdb) p field
'float_data'
(Pdb) p getattr(tensor,field)
<google.protobuf.pyext._message.RepeatedScalarContainer object at 0x7f43f2dad5e0>
(Pdb) p vals
array([1., 1., 2., 2.], dtype=float32)

Could you check again whether the call trace (when the exception happens) is the same as what you posted previously?

Traceback (most recent call last):
  File "yolov3_to_onnx.py", line 819, in <module>
    main()
  File "yolov3_to_onnx.py", line 807, in main
    verbose=True)
  File "yolov3_to_onnx.py", line 444, in build_onnx_graph
    params)
  File "yolov3_to_onnx.py", line 280, in load_upsample_scales
    name, TensorProto.FLOAT, shape, data)
  File "/home/mysystem/.local/lib/python3.6/site-packages/onnx/helper.py", line 173, in make_tensor
    getattr(tensor, field).extend(vals)
TypeError: 1.0 has type numpy.float32, but expected one of: int, long, float

‘load_upsample_scales()’ should not have been called for the “001_convolutional_bn_bias” layer. You could reference the source code around these lines: https://github.com/jkjung-avt/tensorrt_demos/blob/master/yolov3_onnx/yolov3_to_onnx.py#L443

I created a new copy of the file https://github.com/jkjung-avt/tensorrt_demos/blob/master/yolov3_onnx/yolov3_to_onnx.py

Ran it again and got the same result:

> /home/mysystem/tensorrt_demos/yolov3_onnx/nyolov3_to_onnx.py(52)<module>()
-> from __future__ import print_function
(Pdb) c
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Traceback (most recent call last):
  File "/usr/lib/python3.6/pdb.py", line 1667, in main
    pdb._runscript(mainpyfile)
  File "/usr/lib/python3.6/pdb.py", line 1548, in _runscript
    self.run(statement)
  File "/usr/lib/python3.6/bdb.py", line 434, in run
    exec(cmd, globals, locals)
  File "<string>", line 1, in <module>
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/nyolov3_to_onnx.py", line 52, in <module>
    from __future__ import print_function
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/nyolov3_to_onnx.py", line 807, in main
    verbose=True)
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/nyolov3_to_onnx.py", line 444, in build_onnx_graph
    params)
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/nyolov3_to_onnx.py", line 280, in load_upsample_scales
    name, TensorProto.FLOAT, shape, data)
  File "/home/mysystem/.local/lib/python3.6/site-packages/onnx/helper.py", line 178, in make_tensor
    getattr(tensor, field).extend(vals)
TypeError: 1.0 has type numpy.float32, but expected one of: int, long, float
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> /home/mysystem/.local/lib/python3.6/site-packages/onnx/helper.py(178)make_tensor()
-> getattr(tensor, field).extend(vals)
(Pdb) p tensor
data_type: 1
name: "086_upsample_scale"

(Pdb) p name
'086_upsample_scale'
(Pdb) p field
'float_data'
(Pdb) p getattr(tensor,field)
<google.protobuf.pyext._message.RepeatedScalarContainer object at 0x7ff25aabc538>
(Pdb) p vals
array([1., 1., 2., 2.], dtype=float32)
(Pdb) exit
Post mortem debugger finished. The nyolov3_to_onnx.py will be restarted
> /home/mysystem/tensorrt_demos/yolov3_onnx/nyolov3_to_onnx.py(52)<module>()
-> from __future__ import print_function

I see. The problem is likely due to an older version of ‘protobuf’. I’m using protobuf 3.8.0.

Could you update your protobuf to a recent version and try again?

Package                 Version            
----------------------- -------------------
......
protobuf                3.0.0              
......

Reference: https://devtalk.nvidia.com/default/topic/1052153/jetson-nano/tensorrt-backend-for-onnx-on-jetson-nano/post/5359608/#5359608
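A sketch of the suggested fix, using the usual pip package name (adjust for your environment):

pip3 install --user --upgrade protobuf
python3 -c "import google.protobuf; print(google.protobuf.__version__)"

On an old protobuf such as 3.0.0, extending a repeated float field with numpy.float32 values presumably raises exactly this TypeError; a recent protobuf accepts them.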

I tried to check what was happening in the lines you highlighted by printing things out:

for layer_name in self.param_dict.keys():
    _, layer_type = layer_name.split('_', 1)
    print(layer_name.split('_', 1))
    params = self.param_dict[layer_name]
    print(params)
    print('#####################')
    if layer_type == 'convolutional':
        initializer_layer, inputs_layer = weight_loader.load_conv_weights(
            params)
        initializer.extend(initializer_layer)
        inputs.extend(inputs_layer)
    elif layer_type == 'upsample':
        initializer_layer, inputs_layer = weight_loader.load_upsample_scales(
            params)
        initializer.extend(initializer_layer)
        inputs.extend(inputs_layer)

This is the output I got

(Pdb) 
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
['001', 'convolutional']
<__main__.ConvParams object at 0x7f1f2ce3cdd8>
#####################
['002', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc14240>
#####################
['003', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc14400>
#####################
['004', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc145c0>
#####################
['006', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc14828>
#####################
['007', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc14a20>
#####################
['008', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc14ba8>
#####################
['010', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc14e10>
#####################
['011', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc14fd0>
#####################
['013', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc19278>
#####################
['014', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc19438>
#####################
['015', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc195f8>
#####################
['017', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc19860>
#####################
['018', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc19a20>
#####################
['020', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc19c88>
#####################
['021', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc19e48>
#####################
['023', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc210f0>
#####################
['024', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc212b0>
#####################
['026', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc21518>
#####################
['027', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc216d8>
#####################
['029', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc21940>
#####################
['030', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc21b00>
#####################
['032', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc21d68>
#####################
['033', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc21f28>
#####################
['035', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc271d0>
#####################
['036', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc27390>
#####################
['038', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc275f8>
#####################
['039', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc277b8>
#####################
['040', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc27978>
#####################
['042', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc27be0>
#####################
['043', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc27da0>
#####################
['045', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc2e048>
#####################
['046', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc2e208>
#####################
['048', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc2e470>
#####################
['049', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc2e630>
#####################
['051', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc2e898>
#####################
['052', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc2ea58>
#####################
['054', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc2ecc0>
#####################
['055', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc2ee80>
#####################
['057', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc36128>
#####################
['058', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc362e8>
#####################
['060', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc36550>
#####################
['061', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc36710>
#####################
['063', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc36978>
#####################
['064', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc36b38>
#####################
['065', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc36cf8>
#####################
['067', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc36f60>
#####################
['068', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc3b160>
#####################
['070', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc3b3c8>
#####################
['071', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc3b588>
#####################
['073', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc3b7f0>
#####################
['074', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc3b9b0>
#####################
['076', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc3bc18>
#####################
['077', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc3bdd8>
#####################
['078', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc3bf98>
#####################
['079', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc42198>
#####################
['080', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc42358>
#####################
['081', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc42518>
#####################
['082', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc426d8>
#####################
['085', 'convolutional']
<__main__.ConvParams object at 0x7f1f2cc427b8>
#####################
['086', 'upsample']
<__main__.UpsampleParams object at 0x7f1f2cc426a0>
#####################
Traceback (most recent call last):
  File "/usr/lib/python3.6/pdb.py", line 1667, in main
    pdb._runscript(mainpyfile)
  File "/usr/lib/python3.6/pdb.py", line 1548, in _runscript
    self.run(statement)
  File "/usr/lib/python3.6/bdb.py", line 434, in run
    exec(cmd, globals, locals)
  File "<string>", line 1, in <module>
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/nyolov3_to_onnx.py", line 52, in <module>
    from __future__ import print_function
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/nyolov3_to_onnx.py", line 811, in main
    verbose=True)
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/nyolov3_to_onnx.py", line 448, in build_onnx_graph
    params)
  File "/home/mysystem/tensorrt_demos/yolov3_onnx/nyolov3_to_onnx.py", line 280, in load_upsample_scales
    name, TensorProto.FLOAT, shape, data)
  File "/home/mysystem/.local/lib/python3.6/site-packages/onnx/helper.py", line 178, in make_tensor
    getattr(tensor, field).extend(vals)
TypeError: 1.0 has type numpy.float32, but expected one of: int, long, float
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program