TorchPoints3D PointNet2 Deploy to TensorRT with Custom Plugin

Description

I am currently trying to export a PointNet2 model from the Torch Points 3D framework to ONNX, and from there to TensorRT, where I can load it and run inference in C++. The export fails on an unsupported function, three_interpolate, which would require me to register it as a custom ONNX operator and then implement it as a custom plugin in TensorRT. Originally I attempted the simpler route of setting up Torch Points 3D directly on the Jetson Xavier, but dependency issues led me to consider this as the main option.
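
As a starting point, here is a minimal sketch of the ONNX side of what I have in mind, assuming the op is the torch.autograd.Function `ThreeInterpolate` in `torch_points_kernels/torchpoints.py` (the name shown in the traceback below) and that attaching a `symbolic` method is enough for the tracer to emit a custom-domain node; the `tpk` domain name is just a placeholder I picked:

```python
# Sketch: attach an ONNX symbolic to the Python operator so that
# torch.onnx.export emits an opaque custom-domain node instead of raising
# "Couldn't export Python operator ThreeInterpolate".
import torch_points_kernels.torchpoints as tpk  # assumed module layout


def three_interpolate_symbolic(g, features, idx, weight):
    # Emit a node in a custom domain ("tpk" is a placeholder); shape/type
    # inference for it is left to whatever consumes the ONNX file.
    return g.op("tpk::ThreeInterpolate", features, idx, weight)


# Assumes ThreeInterpolate is the autograd Function behind three_interpolate
# (per the error message) and is importable from this module.
tpk.ThreeInterpolate.symbolic = staticmethod(three_interpolate_symbolic)
```

Depending on the PyTorch version, torch.onnx.export may also need a custom_opsets entry (or may just warn) for the non-standard domain; I have not verified that part yet.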

Would it be possible to implement this (the three_interpolate related functions) as a custom TensorRT plugin?
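
For the TensorRT side, my understanding is that the ONNX parser can fall back to a plugin creator registered under the same name as the unsupported node type, so a build script along these lines should pick it up once the plugin exists. The plugin itself (presumably an IPluginV2DynamicExt wrapping the three_interpolate CUDA kernel from torch_points_kernels) would still need to be written; `libthree_interpolate_plugin.so` and `pointnet2.onnx` below are placeholder names, and I am sketching the build step in Python even though the final inference will be in C++:

```python
# Sketch of the engine-build step, assuming a plugin library that registers
# a creator named "ThreeInterpolate" has already been compiled.
import ctypes
import tensorrt as trt

ctypes.CDLL("libthree_interpolate_plugin.so")  # placeholder plugin library

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")  # also registers built-in plugins

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("pointnet2.onnx", "rb") as f:  # placeholder exported model
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB
engine = builder.build_engine(network, config)
```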

This is the original error when trying to export:

/venv/lib/python3.8/site-packages/MinkowskiEngine/__init__.py:36: UserWarning: The environment variable `OMP_NUM_THREADS` not set. MinkowskiEngine will automatically set `OMP_NUM_THREADS=16`. If you want to set `OMP_NUM_THREADS` manually, please export it on the command line before running a python script. e.g. `export OMP_NUM_THREADS=12; python your_program.py`. It is recommended to set it below 24.
  warnings.warn(
/venv/lib/python3.8/site-packages/hydra/core/utils.py:214: UserWarning: 
Using config_path to specify the config name is deprecated, specify the config name via config_name
See https://hydra.cc/docs/next/upgrades/0.11_to_1.0/config_path_changes
  warnings.warn(category=UserWarning, message=msg)
[2021-11-30 13:19:59,226][__main__][INFO] - DEVICE : cuda
[2021-11-30 13:19:59,226][torch_points3d.metrics.model_checkpoint][INFO] - Loading checkpoint from /models/pointnet2_charlesssg_GY4_wo_normals_2021-11-15_14-35-40.pt
DATASET PROPS:  {'feature_dimension': 0, 'num_classes': 5, 'class_to_segments': {'hotstab': [1], 'wetmate': [2], 'manifold': [3], 'kettlebell': [4]}}
DATA CONFIG:  {'class': 'graveyard4.graveyard4Dataset', 'task': 'segmentation', 'dataroot': 'data', 'normal': False, 'first_subsampling': 0.02, 'use_category': True, 'pre_transforms': [{'transform': 'NormalizeScale'}, {'transform': 'GridSampling3D', 'params': {'size': '${data.first_subsampling}'}}], 'train_transforms': [{'transform': 'FixedPoints', 'lparams': [32768]}, {'transform': 'RandomNoise', 'params': {'sigma': 0.01, 'clip': 0.05}}], 'test_transforms': [{'transform': 'FixedPoints', 'lparams': [32768]}], 'val_transforms': '${data.test_transforms}'}
[2021-11-30 13:19:59,436][torch_points3d.models.segmentation.pointnet2][INFO] - Using category information for the predictions with 4 categories
[2021-11-30 13:19:59,474][torch_points3d.metrics.model_checkpoint][INFO] - Available weights : ['latest', 'loss_seg', 'acc', 'macc', 'miou']
[2021-11-30 13:19:59,474][torch_points3d.metrics.model_checkpoint][INFO] - Model loaded from pointnet2_charlesssg_GY4_wo_normals_2021-11-15_14-35-40.pt:best_miou.
[2021-11-30 13:19:59,515][torch_points3d.core.schedulers.bn_schedulers][INFO] - Setting batchnorm momentum at 0.1
[2021-11-30 13:19:59,515][__main__][INFO] - PointNet2_D(
  (model): UnetSkipConnectionBlock(
    (down): PointNetMSGDown(
      (mlps): ModuleList(
        (0): MLP2D(
          (0): Conv2D(
            (0): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): LeakyReLU(negative_slope=0.01)
          )
          (1): Conv2D(
            (0): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): LeakyReLU(negative_slope=0.01)
          )
          (2): Conv2D(
            (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): LeakyReLU(negative_slope=0.01)
          )
        )
      )
    )
    (submodule): UnetSkipConnectionBlock(
      (down): PointNetMSGDown(
        (mlps): ModuleList(
          (0): MLP2D(
            (0): Conv2D(
              (0): Conv2d(131, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (2): LeakyReLU(negative_slope=0.01)
            )
            (1): Conv2D(
              (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (2): LeakyReLU(negative_slope=0.01)
            )
            (2): Conv2D(
              (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (2): LeakyReLU(negative_slope=0.01)
            )
          )
        )
      )
      (submodule): UnetSkipConnectionBlock(
        (inner): GlobalDenseBaseModule: 725248 (aggr=max, MLP2D(
          (0): Conv2D(
            (0): Conv2d(259, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): LeakyReLU(negative_slope=0.01)
          )
          (1): Conv2D(
            (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): LeakyReLU(negative_slope=0.01)
          )
          (2): Conv2D(
            (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): LeakyReLU(negative_slope=0.01)
          )
        ))
        (up): DenseFPModule: 394240 (MLP2D(
          (0): Conv2D(
            (0): Conv2d(1280, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): LeakyReLU(negative_slope=0.01)
          )
          (1): Conv2D(
            (0): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): LeakyReLU(negative_slope=0.01)
          )
        ))
      )
      (up): DenseFPModule: 131840 (MLP2D(
        (0): Conv2D(
          (0): Conv2d(384, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): LeakyReLU(negative_slope=0.01)
        )
        (1): Conv2D(
          (0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): LeakyReLU(negative_slope=0.01)
        )
      ))
    )
    (up): DenseFPModule: 49920 (MLP2D(
      (0): Conv2D(
        (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): LeakyReLU(negative_slope=0.01)
      )
      (1): Conv2D(
        (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): LeakyReLU(negative_slope=0.01)
      )
      (2): Conv2D(
        (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): LeakyReLU(negative_slope=0.01)
      )
    ))
  )
  (FC_layer): Seq(
    (0): Conv1D(
      (0): Conv1d(132, 128, kernel_size=(1,), stride=(1,), bias=False)
      (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): LeakyReLU(negative_slope=0.01)
    )
    (1): Dropout(p=0.5, inplace=False)
    (2): Conv1D(
      (0): Conv1d(128, 5, kernel_size=(1,), stride=(1,))
    )
  )
)
[2021-11-30 13:19:59,518][__main__][INFO] - Model size = 1398981
[2021-11-30 13:19:59,566][__main__][INFO] - Dataset: graveyard4Dataset 
train_pre_batch_collate_transform = None
val_pre_batch_collate_transform = None
test_pre_batch_collate_transform = None
pre_transform = Compose([
    NormalizeScale(),
    GridSampling3D(grid_size=0.02, quantize_coords=False, mode=mean),
])
test_transform = Compose([
    FixedPoints(32768, replace=True),
])
train_transform = Compose([
    FixedPoints(32768, replace=True),
    RandomNoise(sigma=0.01, clip=0.05),
])
val_transform = Compose([
    FixedPoints(32768, replace=True),
])
inference_transform = Compose([
    NormalizeScale(),
    GridSampling3D(grid_size=0.02, quantize_coords=False, mode=mean),
    FixedPoints(32768, replace=True),
])
Size of train_dataset = 4
Size of test_dataset = 4
Size of val_dataset = 4
Batch size = 16
  0%|          | 0/1 [00:00<?, ?it/s]DIST SHAPE:  torch.Size([4, 512, 3])
IDX SHAPE:  torch.Size([4, 512, 3])
DIST SHAPE:  torch.Size([4, 32768, 3])
IDX SHAPE:  torch.Size([4, 32768, 3])
graph(%pos : Float(4:98304, 32768:3, 3:1, requires_grad=0, device=cuda:0),
      %category : Long(4:32768, 32768:1, requires_grad=0, device=cuda:0),
      %FC_layer.2.0.weight : Float(5:128, 128:1, 1:1, requires_grad=1, device=cuda:0),
      %FC_layer.2.0.bias : Float(5:1, requires_grad=1, device=cuda:0),
      %402 : Float(64:3, 3:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %403 : Float(64:1, requires_grad=0, device=cuda:0),
      %405 : Float(64:64, 64:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %406 : Float(64:1, requires_grad=0, device=cuda:0),
      %408 : Float(128:64, 64:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %409 : Float(128:1, requires_grad=0, device=cuda:0),
      %411 : Float(128:131, 131:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %412 : Float(128:1, requires_grad=0, device=cuda:0),
      %414 : Float(128:128, 128:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %415 : Float(128:1, requires_grad=0, device=cuda:0),
      %417 : Float(256:128, 128:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %418 : Float(256:1, requires_grad=0, device=cuda:0),
      %420 : Float(256:259, 259:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %421 : Float(256:1, requires_grad=0, device=cuda:0),
      %423 : Float(512:256, 256:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %424 : Float(512:1, requires_grad=0, device=cuda:0),
      %426 : Float(1024:512, 512:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %427 : Float(1024:1, requires_grad=0, device=cuda:0),
      %429 : Float(256:1280, 1280:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %430 : Float(256:1, requires_grad=0, device=cuda:0),
      %432 : Float(256:256, 256:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %433 : Float(256:1, requires_grad=0, device=cuda:0),
      %435 : Float(256:384, 384:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %436 : Float(256:1, requires_grad=0, device=cuda:0),
      %438 : Float(128:256, 256:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %439 : Float(128:1, requires_grad=0, device=cuda:0),
      %441 : Float(128:128, 128:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %442 : Float(128:1, requires_grad=0, device=cuda:0),
      %444 : Float(128:128, 128:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %445 : Float(128:1, requires_grad=0, device=cuda:0),
      %447 : Float(128:128, 128:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %448 : Float(128:1, requires_grad=0, device=cuda:0),
      %450 : Float(128:132, 132:1, 1:1, requires_grad=0, device=cuda:0),
      %451 : Float(128:1, requires_grad=0, device=cuda:0),
      %452 : Long(1:1, requires_grad=0, device=cpu),
      %453 : Long(1:1, requires_grad=0, device=cpu),
      %454 : Long(1:1, requires_grad=0, device=cpu),
      %455 : Long(1:1, requires_grad=0, device=cpu),
      %456 : Long(1:1, requires_grad=0, device=cpu),
      %457 : Long(1:1, requires_grad=0, device=cpu),
      %458 : Long(1:1, requires_grad=0, device=cpu),
      %459 : Long(1:1, requires_grad=0, device=cpu),
      %460 : Long(1:1, requires_grad=0, device=cpu),
      %461 : Long(1:1, requires_grad=0, device=cpu),
      %462 : Long(1:1, requires_grad=0, device=cpu),
      %463 : Long(1:1, requires_grad=0, device=cpu),
      %464 : Long(1:1, requires_grad=0, device=cpu),
      %465 : Long(1:1, requires_grad=0, device=cpu),
      %466 : Long(1:1, requires_grad=0, device=cpu),
      %467 : Long(1:1, requires_grad=0, device=cpu),
      %468 : Long(1:1, requires_grad=0, device=cpu),
      %469 : Long(1:1, requires_grad=0, device=cpu),
      %470 : Long(1:1, requires_grad=0, device=cpu),
      %471 : Long(1:1, requires_grad=0, device=cpu),
      %472 : Long(1:1, requires_grad=0, device=cpu),
      %473 : Long(1:1, requires_grad=0, device=cpu),
      %474 : Long(1:1, requires_grad=0, device=cpu),
      %475 : Long(1:1, requires_grad=0, device=cpu),
      %476 : Long(1:1, requires_grad=0, device=cpu),
      %477 : Long(1:1, requires_grad=0, device=cpu),
      %478 : Long(1:1, requires_grad=0, device=cpu),
      %479 : Long(1:1, requires_grad=0, device=cpu),
      %480 : Long(1:1, requires_grad=0, device=cpu)):
  %107 : Float(4:98304, 32768:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Cast[to=1](%pos) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:152:0
  %108 : Long(4:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Cast[to=7](%category) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:153:0
  %109 : Int(4:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %110 : Tensor = onnx::Shape(%107)
  %111 : Tensor = onnx::Constant[value={2}]()
  %112 : Long(device=cpu) = onnx::Gather[axis=0](%110, %111) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:81:0
  %117 : Tensor = onnx::Unsqueeze[axes=[0]](%112)
  %118 : Tensor = onnx::Concat[axis=0](%452, %453, %117)
  %121 : Tensor = onnx::Unsqueeze[axes=[0]](%112)
  %122 : Tensor = onnx::Concat[axis=0](%454, %455, %121)
  %123 : Tensor = onnx::Shape(%118)
  %124 : Tensor = onnx::ConstantOfShape[value={1}](%123)
  %125 : Tensor = onnx::Expand(%109, %124)
  %126 : Int(4:1536, 512:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Tile(%125, %122) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:81:0
  %127 : Long(4:1536, 512:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Cast[to=7](%126) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:81:0
  %128 : Tensor = onnx::Constant[value= 0  1 [ CPULongType{2} ]]()
  %129 : Tensor = onnx::Constant[value={1}]()
  %130 : Tensor = onnx::Shape(%107)
  %131 : Tensor = onnx::Gather[axis=0](%130, %129)
  %132 : Tensor = onnx::OneHot[axis=1](%127, %131, %128)
  %133 : Tensor = onnx::Cast[to=1](%132)
  %134 : Tensor = onnx::Unsqueeze[axes=[2]](%107)
  %135 : Tensor = onnx::Mul(%134, %133)
  %136 : Float(4:1536, 512:3, 3:1, requires_grad=0, device=cuda:0) = onnx::ReduceSum[axes=[1], keepdims=0](%135) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:82:0
  %137 : Float(4:98304, 3:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%107) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:37:0
  %138 : Long(4:32768, 1:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %139 : Tensor = onnx::Shape(%137)
  %140 : Tensor = onnx::Constant[value={1}]()
  %141 : Long(device=cpu) = onnx::Gather[axis=0](%139, %140) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:144:0
  %145 : Tensor = onnx::Unsqueeze[axes=[0]](%141)
  %147 : Tensor = onnx::Concat[axis=0](%456, %145, %457)
  %149 : Tensor = onnx::Unsqueeze[axes=[0]](%141)
  %151 : Tensor = onnx::Concat[axis=0](%458, %149, %459)
  %152 : Tensor = onnx::Shape(%147)
  %153 : Tensor = onnx::ConstantOfShape[value={1}](%152)
  %154 : Tensor = onnx::Expand(%138, %153)
  %155 : Long(4:98304, 3:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Tile(%154, %151) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:144:0
  %156 : Tensor = onnx::Constant[value= 0  1 [ CPULongType{2} ]]()
  %157 : Tensor = onnx::Constant[value={2}]()
  %158 : Tensor = onnx::Shape(%137)
  %159 : Tensor = onnx::Gather[axis=0](%158, %157)
  %160 : Tensor = onnx::OneHot[axis=2](%155, %159, %156)
  %161 : Tensor = onnx::Cast[to=1](%160)
  %162 : Tensor = onnx::Unsqueeze[axes=[3]](%137)
  %163 : Tensor = onnx::Mul(%162, %161)
  %164 : Float(4:98304, 3:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::ReduceSum[axes=[2], keepdims=0](%163) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:145:0
  %166 : Tensor = onnx::Shape(%137)
  %167 : Tensor = onnx::Constant[value={1}]()
  %168 : Long(device=cpu) = onnx::Gather[axis=0](%166, %167) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:146:0
  %172 : Tensor = onnx::Unsqueeze[axes=[0]](%168)
  %175 : Tensor = onnx::Concat[axis=0](%460, %172, %461, %462)
  %176 : Float(4:98304, 3:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Reshape(%164, %175) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:146:0
  %177 : Float(4:1536, 3:1, 512:3, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%136) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:39:0
  %178 : Float(4:1536, 3:1, 512:3, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[3]](%177) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:39:0
  %179 : Float(4:98304, 3:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Sub(%176, %178)
  %401 : Float(4:2097152, 64:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%179, %402, %403)
  %182 : Float(4:2097152, 64:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%401) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %404 : Float(4:2097152, 64:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%182, %405, %406)
  %185 : Float(4:2097152, 64:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%404) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %407 : Float(4:4194304, 128:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%185, %408, %409)
  %188 : Float(4:4194304, 128:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%407) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %189 : Float(4:65536, 128:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::MaxPool[kernel_shape=[1, 64], pads=[0, 0, 0, 0], strides=[1, 64]](%188) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:585:0
  %190 : Float(4:65536, 128:512, 512:1, requires_grad=0, device=cuda:0) = onnx::Squeeze[axes=[3]](%189) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:74:0
  %191 : Float(4:65536, 128:512, 512:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%190) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:88:0
  %192 : Int(4:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %193 : Tensor = onnx::Shape(%136)
  %194 : Tensor = onnx::Constant[value={2}]()
  %195 : Long(device=cpu) = onnx::Gather[axis=0](%193, %194) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:81:0
  %200 : Tensor = onnx::Unsqueeze[axes=[0]](%195)
  %201 : Tensor = onnx::Concat[axis=0](%463, %464, %200)
  %204 : Tensor = onnx::Unsqueeze[axes=[0]](%195)
  %205 : Tensor = onnx::Concat[axis=0](%465, %466, %204)
  %206 : Tensor = onnx::Shape(%201)
  %207 : Tensor = onnx::ConstantOfShape[value={1}](%206)
  %208 : Tensor = onnx::Expand(%192, %207)
  %209 : Int(4:384, 128:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Tile(%208, %205) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:81:0
  %210 : Long(4:384, 128:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Cast[to=7](%209) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:81:0
  %211 : Tensor = onnx::Constant[value= 0  1 [ CPULongType{2} ]]()
  %212 : Tensor = onnx::Constant[value={1}]()
  %213 : Tensor = onnx::Shape(%136)
  %214 : Tensor = onnx::Gather[axis=0](%213, %212)
  %215 : Tensor = onnx::OneHot[axis=1](%210, %214, %211)
  %216 : Tensor = onnx::Cast[to=1](%215)
  %217 : Tensor = onnx::Unsqueeze[axes=[2]](%136)
  %218 : Tensor = onnx::Mul(%217, %216)
  %219 : Float(4:384, 128:3, 3:1, requires_grad=0, device=cuda:0) = onnx::ReduceSum[axes=[1], keepdims=0](%218) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:82:0
  %220 : Float(4:1536, 3:512, 512:1, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%136) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:37:0
  %221 : Long(4:8192, 1:8192, 8192:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %222 : Tensor = onnx::Shape(%220)
  %223 : Tensor = onnx::Constant[value={1}]()
  %224 : Long(device=cpu) = onnx::Gather[axis=0](%222, %223) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:144:0
  %228 : Tensor = onnx::Unsqueeze[axes=[0]](%224)
  %230 : Tensor = onnx::Concat[axis=0](%467, %228, %468)
  %232 : Tensor = onnx::Unsqueeze[axes=[0]](%224)
  %234 : Tensor = onnx::Concat[axis=0](%469, %232, %470)
  %235 : Tensor = onnx::Shape(%230)
  %236 : Tensor = onnx::ConstantOfShape[value={1}](%235)
  %237 : Tensor = onnx::Expand(%221, %236)
  %238 : Long(4:24576, 3:8192, 8192:1, requires_grad=0, device=cuda:0) = onnx::Tile(%237, %234) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:144:0
  %239 : Tensor = onnx::Constant[value= 0  1 [ CPULongType{2} ]]()
  %240 : Tensor = onnx::Constant[value={2}]()
  %241 : Tensor = onnx::Shape(%220)
  %242 : Tensor = onnx::Gather[axis=0](%241, %240)
  %243 : Tensor = onnx::OneHot[axis=2](%238, %242, %239)
  %244 : Tensor = onnx::Cast[to=1](%243)
  %245 : Tensor = onnx::Unsqueeze[axes=[3]](%220)
  %246 : Tensor = onnx::Mul(%245, %244)
  %247 : Float(4:24576, 3:8192, 8192:1, requires_grad=0, device=cuda:0) = onnx::ReduceSum[axes=[2], keepdims=0](%246) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:145:0
  %249 : Tensor = onnx::Shape(%220)
  %250 : Tensor = onnx::Constant[value={1}]()
  %251 : Long(device=cpu) = onnx::Gather[axis=0](%249, %250) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:146:0
  %255 : Tensor = onnx::Unsqueeze[axes=[0]](%251)
  %258 : Tensor = onnx::Concat[axis=0](%471, %255, %472, %473)
  %259 : Float(4:24576, 3:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Reshape(%247, %258) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:146:0
  %260 : Float(4:384, 3:1, 128:3, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%219) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:39:0
  %261 : Float(4:384, 3:1, 128:3, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[3]](%260) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:39:0
  %262 : Float(4:24576, 3:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Sub(%259, %261)
  %263 : Long(4:8192, 1:8192, 8192:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %264 : Tensor = onnx::Shape(%191)
  %265 : Tensor = onnx::Constant[value={1}]()
  %266 : Long(device=cpu) = onnx::Gather[axis=0](%264, %265) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:144:0
  %270 : Tensor = onnx::Unsqueeze[axes=[0]](%266)
  %272 : Tensor = onnx::Concat[axis=0](%474, %270, %475)
  %274 : Tensor = onnx::Unsqueeze[axes=[0]](%266)
  %276 : Tensor = onnx::Concat[axis=0](%476, %274, %477)
  %277 : Tensor = onnx::Shape(%272)
  %278 : Tensor = onnx::ConstantOfShape[value={1}](%277)
  %279 : Tensor = onnx::Expand(%263, %278)
  %280 : Long(4:1048576, 128:8192, 8192:1, requires_grad=0, device=cuda:0) = onnx::Tile(%279, %276) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:144:0
  %281 : Tensor = onnx::Constant[value= 0  1 [ CPULongType{2} ]]()
  %282 : Tensor = onnx::Constant[value={2}]()
  %283 : Tensor = onnx::Shape(%191)
  %284 : Tensor = onnx::Gather[axis=0](%283, %282)
  %285 : Tensor = onnx::OneHot[axis=2](%280, %284, %281)
  %286 : Tensor = onnx::Cast[to=1](%285)
  %287 : Tensor = onnx::Unsqueeze[axes=[3]](%191)
  %288 : Tensor = onnx::Mul(%287, %286)
  %289 : Float(4:1048576, 128:8192, 8192:1, requires_grad=0, device=cuda:0) = onnx::ReduceSum[axes=[2], keepdims=0](%288) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:145:0
  %291 : Tensor = onnx::Shape(%191)
  %292 : Tensor = onnx::Constant[value={1}]()
  %293 : Long(device=cpu) = onnx::Gather[axis=0](%291, %292) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:146:0
  %297 : Tensor = onnx::Unsqueeze[axes=[0]](%293)
  %300 : Tensor = onnx::Concat[axis=0](%478, %297, %479, %480)
  %301 : Float(4:1048576, 128:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Reshape(%289, %300) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:146:0
  %302 : Float(4:1073152, 131:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%262, %301) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:47:0
  %410 : Float(4:1048576, 128:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%302, %411, %412)
  %305 : Float(4:1048576, 128:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%410) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %413 : Float(4:1048576, 128:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%305, %414, %415)
  %308 : Float(4:1048576, 128:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%413) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %416 : Float(4:2097152, 256:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%308, %417, %418)
  %311 : Float(4:2097152, 256:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%416) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %312 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::MaxPool[kernel_shape=[1, 64], pads=[0, 0, 0, 0], strides=[1, 64]](%311) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:585:0
  %313 : Float(4:32768, 256:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Squeeze[axes=[3]](%312) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:74:0
  %314 : Float(4:32768, 256:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%313) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:88:0
  %315 : Float(4:384, 3:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%219) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:182:0
  %316 : Float(4:33152, 259:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%314, %315) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:184:0
  %317 : Float(4:33152, 259:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[3]](%316) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:184:0
  %419 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%317, %420, %421)
  %320 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%419) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %422 : Float(4:65536, 512:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%320, %423, %424)
  %323 : Float(4:65536, 512:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%422) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %425 : Float(4:131072, 1024:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%323, %426, %427)
  %326 : Float(4:131072, 1024:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%425) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %327 : Float(4:131072, 1024:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Squeeze[axes=[3]](%326) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:187:0
  %328 : Float(4:1024, 1024:1, requires_grad=0, device=cuda:0) = onnx::ReduceMax[axes=[-1], keepdims=0](%327) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:187:0
  %329 : Float(4:1024, 1024:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[2]](%328) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:194:0
  %330 : Tensor = onnx::Shape(%329)
  %331 : Tensor = onnx::Constant[value={0}]()
  %332 : Long(device=cpu) = onnx::Gather[axis=0](%330, %331) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:153:0
  %333 : Tensor = onnx::Shape(%329)
  %334 : Tensor = onnx::Constant[value={1}]()
  %335 : Long(device=cpu) = onnx::Gather[axis=0](%333, %334) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:153:0
  %336 : Tensor = onnx::Shape(%219)
  %337 : Tensor = onnx::Constant[value={1}]()
  %338 : Long(device=cpu) = onnx::Gather[axis=0](%336, %337) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:153:0
  %339 : Tensor = onnx::Unsqueeze[axes=[0]](%332)
  %340 : Tensor = onnx::Unsqueeze[axes=[0]](%335)
  %341 : Tensor = onnx::Unsqueeze[axes=[0]](%338)
  %342 : Tensor = onnx::Concat[axis=0](%339, %340, %341)
  %343 : Tensor = onnx::Constant[value={-1}]()
  %344 : Tensor = onnx::Reshape(%342, %343)
  %345 : Tensor = onnx::Shape(%344)
  %346 : Tensor = onnx::ConstantOfShape[value={1}](%345)
  %347 : Long(requires_grad=0, device=cpu) = onnx::Constant[value={-1}]()
  %348 : LongTensor = onnx::Mul(%346, %347)
  %349 : Tensor = onnx::Equal(%344, %348)
  %350 : Tensor = onnx::Cast[to=9](%349)
  %351 : Tensor = onnx::Where(%350, %346, %344)
  %352 : Float(4:1024, 1024:1, 128:0, requires_grad=0, device=cuda:0) = onnx::Expand(%329, %351) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:153:0
  %353 : Float(4:163840, 1280:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%352, %314) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:122:0
  %354 : Float(4:163840, 1280:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[3]](%353) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:124:0
  %428 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%354, %429, %430)
  %357 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%428) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %431 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%357, %432, %433)
  %360 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%431) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %361 : Float(4:32768, 256:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Squeeze[axes=[3]](%360) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:129:0
  %362 : Int(4:1536, 512:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %363 : Float(4:1536, 512:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %364 : Float(4:131072, 256:512, 512:1, requires_grad=0, device=cuda:0) = ^ThreeInterpolate()(%361, %362, %363) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:126:0
  %365 : Float(4:196608, 384:512, 512:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%364, %191) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:122:0
  %366 : Float(4:196608, 384:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[3]](%365) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:124:0
  %434 : Float(4:131072, 256:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%366, %435, %436)
  %369 : Float(4:131072, 256:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%434) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %437 : Float(4:65536, 128:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%369, %438, %439)
  %372 : Float(4:65536, 128:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%437) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %373 : Float(4:65536, 128:512, 512:1, requires_grad=0, device=cuda:0) = onnx::Squeeze[axes=[3]](%372) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:129:0
  %374 : Int(4:98304, 32768:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %375 : Float(4:98304, 32768:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %376 : Float(4:4194304, 128:32768, 32768:1, requires_grad=0, device=cuda:0) = ^ThreeInterpolate()(%373, %374, %375) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:126:0
  %377 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[3]](%376) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:124:0
  %440 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%377, %441, %442)
  %380 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%440) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %443 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%380, %444, %445)
  %383 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%443) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %446 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%383, %447, %448)
  %386 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%446) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %387 : Float(4:4194304, 128:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Squeeze[axes=[3]](%386) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:129:0
  %388 : Tensor = onnx::Constant[value= 0  1 [ CPULongType{2} ]]()
  %389 : Tensor = onnx::Constant[value={4}]()
  %390 : Long(4:131072, 32768:4, 4:1, requires_grad=0, device=cuda:0) = onnx::OneHot[axis=-1](%108, %389, %388) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:236:0
  %391 : Float(4:131072, 32768:4, 4:1, requires_grad=0, device=cuda:0) = onnx::Cast[to=1](%390) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:236:0
  %392 : Float(4:131072, 4:1, 32768:4, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%391) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:236:0
  %393 : Float(4:4325376, 132:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%387, %392) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:237:0
  %449 : Float(4:4194304, 128:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%393, %450, %451)
  %396 : Float(4:4194304, 128:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%449) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:983:0
  %397 : Float(4:163840, 5:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%396, %FC_layer.2.0.weight, %FC_layer.2.0.bias) # /venv/lib/python3.8/site-packages/torch/nn/modules/conv.py:258:0
  %398 : Float(4:163840, 32768:5, 5:1, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%397) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:239:0
  %399 : Tensor = onnx::Constant[value=-1  5 [ CPULongType{2} ]]()
  %output_labels : Float(131072:5, 5:1, requires_grad=0, device=cuda:0) = onnx::Reshape(%398, %399) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:239:0
  return (%output_labels)
/venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:1703: UserWarning: ONNX export unsqueeze with negative axis -1 might cause the onnx model to be incorrect. Negative axis is not supported in ONNX. Axis is converted to 3 based on input shape at export time. Passing an tensor of different rank in execution will be incorrect.
  warnings.warn("ONNX export unsqueeze with negative axis " + str(dim) +
/venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:572: UserWarning: ONNX export squeeze with negative axis -1 might cause the onnx model to be incorrect. Negative axis is not supported in ONNX. Axis is converted to 3 based on input shape at export time. Passing an tensor of different rank in execution will be incorrect.
  warnings.warn("ONNX export squeeze with negative axis " + str(squeeze_dim) +
/venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:598: UserWarning: This model contains a squeeze operation on dimension 3. If the model is intended to be used with dynamic input shapes, please use opset version 11 to export the model.
  warnings.warn("This model contains a squeeze operation on dimension " + str(squeeze_dim) + ". If the model is " +
/venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:1703: UserWarning: ONNX export unsqueeze with negative axis -1 might cause the onnx model to be incorrect. Negative axis is not supported in ONNX. Axis is converted to 2 based on input shape at export time. Passing an tensor of different rank in execution will be incorrect.
  warnings.warn("ONNX export unsqueeze with negative axis " + str(dim) +
  0%|          | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "forward_scripts/checkpoint_export2.py", line 137, in <module>
    main()
  File "/venv/lib/python3.8/site-packages/hydra/main.py", line 32, in decorated_main
    _run_hydra(
  File "/venv/lib/python3.8/site-packages/hydra/_internal/utils.py", line 346, in _run_hydra
    run_and_report(
  File "/venv/lib/python3.8/site-packages/hydra/_internal/utils.py", line 201, in run_and_report
    raise ex
  File "/venv/lib/python3.8/site-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/venv/lib/python3.8/site-packages/hydra/_internal/utils.py", line 347, in <lambda>
    lambda: hydra.run(
  File "/venv/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 107, in run
    return run_job(
  File "/venv/lib/python3.8/site-packages/hydra/core/utils.py", line 129, in run_job
    ret.return_value = task_function(task_cfg)
  File "forward_scripts/checkpoint_export2.py", line 133, in main
    run(model, dataset, device, cfg.output_path)
  File "forward_scripts/checkpoint_export2.py", line 66, in run
    torch.onnx.export(model,
  File "/venv/lib/python3.8/site-packages/torch/onnx/__init__.py", line 225, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/venv/lib/python3.8/site-packages/torch/onnx/utils.py", line 85, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/venv/lib/python3.8/site-packages/torch/onnx/utils.py", line 647, in _export
    proto, export_map = graph._export_onnx(
RuntimeError: ONNX export failed: Couldn't export Python operator ThreeInterpolate
Defined at:
/venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py(126): three_interpolate
/workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py(151): conv
/workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py(119): forward
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
/workdir/forward_scripts/../torch_points3d/models/base_architectures/unet.py(306): forward
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
/workdir/forward_scripts/../torch_points3d/models/base_architectures/unet.py(304): forward
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
/workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py(233): forward
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
/venv/lib/python3.8/site-packages/torch/jit/_trace.py(116): wrapper
/venv/lib/python3.8/site-packages/torch/jit/_trace.py(125): forward
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py(727): _call_impl
/venv/lib/python3.8/site-packages/torch/jit/_trace.py(1148): _get_trace_graph
/venv/lib/python3.8/site-packages/torch/onnx/utils.py(342): _trace_and_get_graph_from_model
/venv/lib/python3.8/site-packages/torch/onnx/utils.py(379): _create_jit_graph
/venv/lib/python3.8/site-packages/torch/onnx/utils.py(409): _model_to_graph
/venv/lib/python3.8/site-packages/torch/onnx/utils.py(632): _export
/venv/lib/python3.8/site-packages/torch/onnx/utils.py(85): export
/venv/lib/python3.8/site-packages/torch/onnx/__init__.py(225): export
forward_scripts/checkpoint_export2.py(66): run
forward_scripts/checkpoint_export2.py(133): main
/venv/lib/python3.8/site-packages/hydra/core/utils.py(129): run_job
/venv/lib/python3.8/site-packages/hydra/_internal/hydra.py(107): run
/venv/lib/python3.8/site-packages/hydra/_internal/utils.py(347): <lambda>
/venv/lib/python3.8/site-packages/hydra/_internal/utils.py(198): run_and_report
/venv/lib/python3.8/site-packages/hydra/_internal/utils.py(346): _run_hydra
/venv/lib/python3.8/site-packages/hydra/main.py(32): decorated_main
forward_scripts/checkpoint_export2.py(137): <module>
Graph we tried to export:
graph(%pos : Float(4:98304, 32768:3, 3:1, requires_grad=0, device=cuda:0),
      %category : Long(4:32768, 32768:1, requires_grad=0, device=cuda:0),
      %FC_layer.2.0.weight : Float(5:128, 128:1, 1:1, requires_grad=1, device=cuda:0),
      %FC_layer.2.0.bias : Float(5:1, requires_grad=1, device=cuda:0),
      %402 : Float(64:3, 3:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %403 : Float(64:1, requires_grad=0, device=cuda:0),
      %405 : Float(64:64, 64:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %406 : Float(64:1, requires_grad=0, device=cuda:0),
      %408 : Float(128:64, 64:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %409 : Float(128:1, requires_grad=0, device=cuda:0),
      %411 : Float(128:131, 131:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %412 : Float(128:1, requires_grad=0, device=cuda:0),
      %414 : Float(128:128, 128:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %415 : Float(128:1, requires_grad=0, device=cuda:0),
      %417 : Float(256:128, 128:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %418 : Float(256:1, requires_grad=0, device=cuda:0),
      %420 : Float(256:259, 259:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %421 : Float(256:1, requires_grad=0, device=cuda:0),
      %423 : Float(512:256, 256:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %424 : Float(512:1, requires_grad=0, device=cuda:0),
      %426 : Float(1024:512, 512:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %427 : Float(1024:1, requires_grad=0, device=cuda:0),
      %429 : Float(256:1280, 1280:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %430 : Float(256:1, requires_grad=0, device=cuda:0),
      %432 : Float(256:256, 256:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %433 : Float(256:1, requires_grad=0, device=cuda:0),
      %435 : Float(256:384, 384:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %436 : Float(256:1, requires_grad=0, device=cuda:0),
      %438 : Float(128:256, 256:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %439 : Float(128:1, requires_grad=0, device=cuda:0),
      %441 : Float(128:128, 128:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %442 : Float(128:1, requires_grad=0, device=cuda:0),
      %444 : Float(128:128, 128:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %445 : Float(128:1, requires_grad=0, device=cuda:0),
      %447 : Float(128:128, 128:1, 1:1, 1:1, requires_grad=0, device=cuda:0),
      %448 : Float(128:1, requires_grad=0, device=cuda:0),
      %450 : Float(128:132, 132:1, 1:1, requires_grad=0, device=cuda:0),
      %451 : Float(128:1, requires_grad=0, device=cuda:0),
      %452 : Long(1:1, requires_grad=0, device=cpu),
      %453 : Long(1:1, requires_grad=0, device=cpu),
      %454 : Long(1:1, requires_grad=0, device=cpu),
      %455 : Long(1:1, requires_grad=0, device=cpu),
      %456 : Long(1:1, requires_grad=0, device=cpu),
      %457 : Long(1:1, requires_grad=0, device=cpu),
      %458 : Long(1:1, requires_grad=0, device=cpu),
      %459 : Long(1:1, requires_grad=0, device=cpu),
      %460 : Long(1:1, requires_grad=0, device=cpu),
      %461 : Long(1:1, requires_grad=0, device=cpu),
      %462 : Long(1:1, requires_grad=0, device=cpu),
      %463 : Long(1:1, requires_grad=0, device=cpu),
      %464 : Long(1:1, requires_grad=0, device=cpu),
      %465 : Long(1:1, requires_grad=0, device=cpu),
      %466 : Long(1:1, requires_grad=0, device=cpu),
      %467 : Long(1:1, requires_grad=0, device=cpu),
      %468 : Long(1:1, requires_grad=0, device=cpu),
      %469 : Long(1:1, requires_grad=0, device=cpu),
      %470 : Long(1:1, requires_grad=0, device=cpu),
      %471 : Long(1:1, requires_grad=0, device=cpu),
      %472 : Long(1:1, requires_grad=0, device=cpu),
      %473 : Long(1:1, requires_grad=0, device=cpu),
      %474 : Long(1:1, requires_grad=0, device=cpu),
      %475 : Long(1:1, requires_grad=0, device=cpu),
      %476 : Long(1:1, requires_grad=0, device=cpu),
      %477 : Long(1:1, requires_grad=0, device=cpu),
      %478 : Long(1:1, requires_grad=0, device=cpu),
      %479 : Long(1:1, requires_grad=0, device=cpu),
      %480 : Long(1:1, requires_grad=0, device=cpu)):
  %107 : Float(4:98304, 32768:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Cast[to=1](%pos) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:152:0
  %108 : Long(4:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Cast[to=7](%category) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:153:0
  %109 : Int(4:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %110 : Tensor = onnx::Shape(%107)
  %111 : Tensor = onnx::Constant[value={2}]()
  %112 : Long(device=cpu) = onnx::Gather[axis=0](%110, %111) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:81:0
  %117 : Tensor = onnx::Unsqueeze[axes=[0]](%112)
  %118 : Tensor = onnx::Concat[axis=0](%452, %453, %117)
  %121 : Tensor = onnx::Unsqueeze[axes=[0]](%112)
  %122 : Tensor = onnx::Concat[axis=0](%454, %455, %121)
  %123 : Tensor = onnx::Shape(%118)
  %124 : Tensor = onnx::ConstantOfShape[value={1}](%123)
  %125 : Tensor = onnx::Expand(%109, %124)
  %126 : Int(4:1536, 512:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Tile(%125, %122) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:81:0
  %127 : Long(4:1536, 512:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Cast[to=7](%126) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:81:0
  %128 : Tensor = onnx::Constant[value= 0  1 [ CPULongType{2} ]]()
  %129 : Tensor = onnx::Constant[value={1}]()
  %130 : Tensor = onnx::Shape(%107)
  %131 : Tensor = onnx::Gather[axis=0](%130, %129)
  %132 : Tensor = onnx::OneHot[axis=1](%127, %131, %128)
  %133 : Tensor = onnx::Cast[to=1](%132)
  %134 : Tensor = onnx::Unsqueeze[axes=[2]](%107)
  %135 : Tensor = onnx::Mul(%134, %133)
  %136 : Float(4:1536, 512:3, 3:1, requires_grad=0, device=cuda:0) = onnx::ReduceSum[axes=[1], keepdims=0](%135) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:82:0
  %137 : Float(4:98304, 3:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%107) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:37:0
  %138 : Long(4:32768, 1:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %139 : Tensor = onnx::Shape(%137)
  %140 : Tensor = onnx::Constant[value={1}]()
  %141 : Long(device=cpu) = onnx::Gather[axis=0](%139, %140) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:144:0
  %145 : Tensor = onnx::Unsqueeze[axes=[0]](%141)
  %147 : Tensor = onnx::Concat[axis=0](%456, %145, %457)
  %149 : Tensor = onnx::Unsqueeze[axes=[0]](%141)
  %151 : Tensor = onnx::Concat[axis=0](%458, %149, %459)
  %152 : Tensor = onnx::Shape(%147)
  %153 : Tensor = onnx::ConstantOfShape[value={1}](%152)
  %154 : Tensor = onnx::Expand(%138, %153)
  %155 : Long(4:98304, 3:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Tile(%154, %151) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:144:0
  %156 : Tensor = onnx::Constant[value= 0  1 [ CPULongType{2} ]]()
  %157 : Tensor = onnx::Constant[value={2}]()
  %158 : Tensor = onnx::Shape(%137)
  %159 : Tensor = onnx::Gather[axis=0](%158, %157)
  %160 : Tensor = onnx::OneHot[axis=2](%155, %159, %156)
  %161 : Tensor = onnx::Cast[to=1](%160)
  %162 : Tensor = onnx::Unsqueeze[axes=[3]](%137)
  %163 : Tensor = onnx::Mul(%162, %161)
  %164 : Float(4:98304, 3:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::ReduceSum[axes=[2], keepdims=0](%163) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:145:0
  %166 : Tensor = onnx::Shape(%137)
  %167 : Tensor = onnx::Constant[value={1}]()
  %168 : Long(device=cpu) = onnx::Gather[axis=0](%166, %167) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:146:0
  %172 : Tensor = onnx::Unsqueeze[axes=[0]](%168)
  %175 : Tensor = onnx::Concat[axis=0](%460, %172, %461, %462)
  %176 : Float(4:98304, 3:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Reshape(%164, %175) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:146:0
  %177 : Float(4:1536, 3:1, 512:3, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%136) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:39:0
  %178 : Float(4:1536, 3:1, 512:3, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[3]](%177) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:39:0
  %179 : Float(4:98304, 3:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Sub(%176, %178)
  %401 : Float(4:2097152, 64:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%179, %402, %403)
  %182 : Float(4:2097152, 64:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%401) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %404 : Float(4:2097152, 64:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%182, %405, %406)
  %185 : Float(4:2097152, 64:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%404) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %407 : Float(4:4194304, 128:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%185, %408, %409)
  %188 : Float(4:4194304, 128:32768, 512:64, 64:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%407) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %189 : Float(4:65536, 128:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::MaxPool[kernel_shape=[1, 64], pads=[0, 0, 0, 0], strides=[1, 64]](%188) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:585:0
  %190 : Float(4:65536, 128:512, 512:1, requires_grad=0, device=cuda:0) = onnx::Squeeze[axes=[3]](%189) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:74:0
  %191 : Float(4:65536, 128:512, 512:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%190) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:88:0
  %192 : Int(4:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %193 : Tensor = onnx::Shape(%136)
  %194 : Tensor = onnx::Constant[value={2}]()
  %195 : Long(device=cpu) = onnx::Gather[axis=0](%193, %194) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:81:0
  %200 : Tensor = onnx::Unsqueeze[axes=[0]](%195)
  %201 : Tensor = onnx::Concat[axis=0](%463, %464, %200)
  %204 : Tensor = onnx::Unsqueeze[axes=[0]](%195)
  %205 : Tensor = onnx::Concat[axis=0](%465, %466, %204)
  %206 : Tensor = onnx::Shape(%201)
  %207 : Tensor = onnx::ConstantOfShape[value={1}](%206)
  %208 : Tensor = onnx::Expand(%192, %207)
  %209 : Int(4:384, 128:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Tile(%208, %205) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:81:0
  %210 : Long(4:384, 128:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Cast[to=7](%209) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:81:0
  %211 : Tensor = onnx::Constant[value= 0  1 [ CPULongType{2} ]]()
  %212 : Tensor = onnx::Constant[value={1}]()
  %213 : Tensor = onnx::Shape(%136)
  %214 : Tensor = onnx::Gather[axis=0](%213, %212)
  %215 : Tensor = onnx::OneHot[axis=1](%210, %214, %211)
  %216 : Tensor = onnx::Cast[to=1](%215)
  %217 : Tensor = onnx::Unsqueeze[axes=[2]](%136)
  %218 : Tensor = onnx::Mul(%217, %216)
  %219 : Float(4:384, 128:3, 3:1, requires_grad=0, device=cuda:0) = onnx::ReduceSum[axes=[1], keepdims=0](%218) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:82:0
  %220 : Float(4:1536, 3:512, 512:1, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%136) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:37:0
  %221 : Long(4:8192, 1:8192, 8192:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %222 : Tensor = onnx::Shape(%220)
  %223 : Tensor = onnx::Constant[value={1}]()
  %224 : Long(device=cpu) = onnx::Gather[axis=0](%222, %223) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:144:0
  %228 : Tensor = onnx::Unsqueeze[axes=[0]](%224)
  %230 : Tensor = onnx::Concat[axis=0](%467, %228, %468)
  %232 : Tensor = onnx::Unsqueeze[axes=[0]](%224)
  %234 : Tensor = onnx::Concat[axis=0](%469, %232, %470)
  %235 : Tensor = onnx::Shape(%230)
  %236 : Tensor = onnx::ConstantOfShape[value={1}](%235)
  %237 : Tensor = onnx::Expand(%221, %236)
  %238 : Long(4:24576, 3:8192, 8192:1, requires_grad=0, device=cuda:0) = onnx::Tile(%237, %234) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:144:0
  %239 : Tensor = onnx::Constant[value= 0  1 [ CPULongType{2} ]]()
  %240 : Tensor = onnx::Constant[value={2}]()
  %241 : Tensor = onnx::Shape(%220)
  %242 : Tensor = onnx::Gather[axis=0](%241, %240)
  %243 : Tensor = onnx::OneHot[axis=2](%238, %242, %239)
  %244 : Tensor = onnx::Cast[to=1](%243)
  %245 : Tensor = onnx::Unsqueeze[axes=[3]](%220)
  %246 : Tensor = onnx::Mul(%245, %244)
  %247 : Float(4:24576, 3:8192, 8192:1, requires_grad=0, device=cuda:0) = onnx::ReduceSum[axes=[2], keepdims=0](%246) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:145:0
  %249 : Tensor = onnx::Shape(%220)
  %250 : Tensor = onnx::Constant[value={1}]()
  %251 : Long(device=cpu) = onnx::Gather[axis=0](%249, %250) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:146:0
  %255 : Tensor = onnx::Unsqueeze[axes=[0]](%251)
  %258 : Tensor = onnx::Concat[axis=0](%471, %255, %472, %473)
  %259 : Float(4:24576, 3:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Reshape(%247, %258) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:146:0
  %260 : Float(4:384, 3:1, 128:3, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%219) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:39:0
  %261 : Float(4:384, 3:1, 128:3, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[3]](%260) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:39:0
  %262 : Float(4:24576, 3:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Sub(%259, %261)
  %263 : Long(4:8192, 1:8192, 8192:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %264 : Tensor = onnx::Shape(%191)
  %265 : Tensor = onnx::Constant[value={1}]()
  %266 : Long(device=cpu) = onnx::Gather[axis=0](%264, %265) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:144:0
  %270 : Tensor = onnx::Unsqueeze[axes=[0]](%266)
  %272 : Tensor = onnx::Concat[axis=0](%474, %270, %475)
  %274 : Tensor = onnx::Unsqueeze[axes=[0]](%266)
  %276 : Tensor = onnx::Concat[axis=0](%476, %274, %477)
  %277 : Tensor = onnx::Shape(%272)
  %278 : Tensor = onnx::ConstantOfShape[value={1}](%277)
  %279 : Tensor = onnx::Expand(%263, %278)
  %280 : Long(4:1048576, 128:8192, 8192:1, requires_grad=0, device=cuda:0) = onnx::Tile(%279, %276) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:144:0
  %281 : Tensor = onnx::Constant[value= 0  1 [ CPULongType{2} ]]()
  %282 : Tensor = onnx::Constant[value={2}]()
  %283 : Tensor = onnx::Shape(%191)
  %284 : Tensor = onnx::Gather[axis=0](%283, %282)
  %285 : Tensor = onnx::OneHot[axis=2](%280, %284, %281)
  %286 : Tensor = onnx::Cast[to=1](%285)
  %287 : Tensor = onnx::Unsqueeze[axes=[3]](%191)
  %288 : Tensor = onnx::Mul(%287, %286)
  %289 : Float(4:1048576, 128:8192, 8192:1, requires_grad=0, device=cuda:0) = onnx::ReduceSum[axes=[2], keepdims=0](%288) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:145:0
  %291 : Tensor = onnx::Shape(%191)
  %292 : Tensor = onnx::Constant[value={1}]()
  %293 : Long(device=cpu) = onnx::Gather[axis=0](%291, %292) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:146:0
  %297 : Tensor = onnx::Unsqueeze[axes=[0]](%293)
  %300 : Tensor = onnx::Concat[axis=0](%478, %297, %479, %480)
  %301 : Float(4:1048576, 128:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Reshape(%289, %300) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:146:0
  %302 : Float(4:1073152, 131:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%262, %301) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:47:0
  %410 : Float(4:1048576, 128:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%302, %411, %412)
  %305 : Float(4:1048576, 128:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%410) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %413 : Float(4:1048576, 128:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%305, %414, %415)
  %308 : Float(4:1048576, 128:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%413) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %416 : Float(4:2097152, 256:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%308, %417, %418)
  %311 : Float(4:2097152, 256:8192, 128:64, 64:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%416) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %312 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::MaxPool[kernel_shape=[1, 64], pads=[0, 0, 0, 0], strides=[1, 64]](%311) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:585:0
  %313 : Float(4:32768, 256:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Squeeze[axes=[3]](%312) # /workdir/forward_scripts/../torch_points3d/modules/pointnet2/dense.py:74:0
  %314 : Float(4:32768, 256:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%313) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:88:0
  %315 : Float(4:384, 3:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%219) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:182:0
  %316 : Float(4:33152, 259:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%314, %315) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:184:0
  %317 : Float(4:33152, 259:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[3]](%316) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:184:0
  %419 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%317, %420, %421)
  %320 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%419) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %422 : Float(4:65536, 512:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%320, %423, %424)
  %323 : Float(4:65536, 512:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%422) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %425 : Float(4:131072, 1024:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%323, %426, %427)
  %326 : Float(4:131072, 1024:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%425) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %327 : Float(4:131072, 1024:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Squeeze[axes=[3]](%326) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:187:0
  %328 : Float(4:1024, 1024:1, requires_grad=0, device=cuda:0) = onnx::ReduceMax[axes=[-1], keepdims=0](%327) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:187:0
  %329 : Float(4:1024, 1024:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[2]](%328) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:194:0
  %330 : Tensor = onnx::Shape(%329)
  %331 : Tensor = onnx::Constant[value={0}]()
  %332 : Long(device=cpu) = onnx::Gather[axis=0](%330, %331) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:153:0
  %333 : Tensor = onnx::Shape(%329)
  %334 : Tensor = onnx::Constant[value={1}]()
  %335 : Long(device=cpu) = onnx::Gather[axis=0](%333, %334) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:153:0
  %336 : Tensor = onnx::Shape(%219)
  %337 : Tensor = onnx::Constant[value={1}]()
  %338 : Long(device=cpu) = onnx::Gather[axis=0](%336, %337) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:153:0
  %339 : Tensor = onnx::Unsqueeze[axes=[0]](%332)
  %340 : Tensor = onnx::Unsqueeze[axes=[0]](%335)
  %341 : Tensor = onnx::Unsqueeze[axes=[0]](%338)
  %342 : Tensor = onnx::Concat[axis=0](%339, %340, %341)
  %343 : Tensor = onnx::Constant[value={-1}]()
  %344 : Tensor = onnx::Reshape(%342, %343)
  %345 : Tensor = onnx::Shape(%344)
  %346 : Tensor = onnx::ConstantOfShape[value={1}](%345)
  %347 : Long(requires_grad=0, device=cpu) = onnx::Constant[value={-1}]()
  %348 : LongTensor = onnx::Mul(%346, %347)
  %349 : Tensor = onnx::Equal(%344, %348)
  %350 : Tensor = onnx::Cast[to=9](%349)
  %351 : Tensor = onnx::Where(%350, %346, %344)
  %352 : Float(4:1024, 1024:1, 128:0, requires_grad=0, device=cuda:0) = onnx::Expand(%329, %351) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:153:0
  %353 : Float(4:163840, 1280:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%352, %314) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:122:0
  %354 : Float(4:163840, 1280:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[3]](%353) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:124:0
  %428 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%354, %429, %430)
  %357 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%428) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %431 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%357, %432, %433)
  %360 : Float(4:32768, 256:128, 128:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%431) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %361 : Float(4:32768, 256:128, 128:1, requires_grad=0, device=cuda:0) = onnx::Squeeze[axes=[3]](%360) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:129:0
  %362 : Int(4:1536, 512:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %363 : Float(4:1536, 512:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %364 : Float(4:131072, 256:512, 512:1, requires_grad=0, device=cuda:0) = ^ThreeInterpolate()(%361, %362, %363) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:126:0
  %365 : Float(4:196608, 384:512, 512:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%364, %191) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:122:0
  %366 : Float(4:196608, 384:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[3]](%365) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:124:0
  %434 : Float(4:131072, 256:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%366, %435, %436)
  %369 : Float(4:131072, 256:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%434) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %437 : Float(4:65536, 128:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%369, %438, %439)
  %372 : Float(4:65536, 128:512, 512:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%437) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %373 : Float(4:65536, 128:512, 512:1, requires_grad=0, device=cuda:0) = onnx::Squeeze[axes=[3]](%372) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:129:0
  %374 : Int(4:98304, 32768:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %375 : Float(4:98304, 32768:3, 3:1, requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
  %376 : Float(4:4194304, 128:32768, 32768:1, requires_grad=0, device=cuda:0) = ^ThreeInterpolate()(%373, %374, %375) # /venv/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py:126:0
  %377 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Unsqueeze[axes=[3]](%376) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:124:0
  %440 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%377, %441, %442)
  %380 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%440) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %443 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%380, %444, %445)
  %383 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%443) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %446 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%383, %447, %448)
  %386 : Float(4:4194304, 128:32768, 32768:1, 1:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%446) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:1309:0
  %387 : Float(4:4194304, 128:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Squeeze[axes=[3]](%386) # /workdir/forward_scripts/../torch_points3d/core/base_conv/dense.py:129:0
  %388 : Tensor = onnx::Constant[value= 0  1 [ CPULongType{2} ]]()
  %389 : Tensor = onnx::Constant[value={4}]()
  %390 : Long(4:131072, 32768:4, 4:1, requires_grad=0, device=cuda:0) = onnx::OneHot[axis=-1](%108, %389, %388) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:236:0
  %391 : Float(4:131072, 32768:4, 4:1, requires_grad=0, device=cuda:0) = onnx::Cast[to=1](%390) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:236:0
  %392 : Float(4:131072, 4:1, 32768:4, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%391) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:236:0
  %393 : Float(4:4325376, 132:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Concat[axis=1](%387, %392) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:237:0
  %449 : Float(4:4194304, 128:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%393, %450, %451)
  %396 : Float(4:4194304, 128:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::LeakyRelu[alpha=0.01](%449) # /venv/lib/python3.8/site-packages/torch/nn/functional.py:983:0
  %397 : Float(4:163840, 5:32768, 32768:1, requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%396, %FC_layer.2.0.weight, %FC_layer.2.0.bias) # /venv/lib/python3.8/site-packages/torch/nn/modules/conv.py:258:0
  %398 : Float(4:163840, 32768:5, 5:1, requires_grad=0, device=cuda:0) = onnx::Transpose[perm=[0, 2, 1]](%397) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:239:0
  %399 : Tensor = onnx::Constant[value=-1  5 [ CPULongType{2} ]]()
  %output_labels : Float(131072:5, 5:1, requires_grad=0, device=cuda:0) = onnx::Reshape(%398, %399) # /workdir/forward_scripts/../torch_points3d/models/segmentation/pointnet2.py:239:0
  return (%output_labels)
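
For reference, the two ^ThreeInterpolate() nodes near the end of the graph are the unsupported three_interpolate call turned into a custom operator. A minimal sketch of how an op like this can be given an ONNX symbolic at export time (the class below and the "tp::ThreeInterpolate" name are purely illustrative, not the exact torch_points_kernels code):

import torch

class ThreeInterpolateFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, features, idx, weight):
        # placeholder forward so the sketch traces end-to-end;
        # the real op calls the three_interpolate CUDA kernel here
        B, C, _ = features.shape
        return features.new_zeros(B, C, idx.shape[1])

    @staticmethod
    def symbolic(g, features, idx, weight):
        # emits a single custom node during torch.onnx.export, which the
        # TensorRT plugin can later be matched against by name
        return g.op("tp::ThreeInterpolate", features, idx, weight)

# usage during export: out = ThreeInterpolateFn.apply(features, idx, weight)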

Environment

For Training/Export:
Python – 3.8
PyTorch – 1.7.0
Docker container on an Ubuntu system (from this Dockerfile: torch-points3d/Dockerfile.gpu at master · nicolas-chaulet/torch-points3d · GitHub, which is essentially the working environment of the Torch Points 3D developers)

Other Steps:
ONNX to TensorRT Serialization - Jetson Xavier
Inference Baseline - Jetson Xavier (ubuntu arm64/aarch64)
Model Trained - PointNet++ from the torch points 3d framework (GitHub - nicolas-chaulet/torch-points3d: Pytorch framework for doing deep learning on point clouds.)

Thanks,

Mark

Hi,
The link below might help with your query. Kindly check it for all supported 3D layers:

Thanks!

Thanks for the links! Looking into that information, it seems I would need to prepare a custom plugin regardless, and after talking to other NVIDIA representatives on the phone, this looks like the better approach compared to setting up the Torch Points 3D environment on the Jetson.

I will be updating this forum as I work through setting up the custom plugin, and for now I will be following the steps mentioned in: Comparison of the trtexec tool with plugins option and putting plugins in lib
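
As I understand it, that boils down to compiling the plugin into a shared library and pointing trtexec at it when building the engine, along the lines of (the paths and library name below are placeholders, not my actual setup):

trtexec --onnx=pointnet2_custom-plugin-three-interpolate_GY4_opset9_2021-12-09.onnx --plugins=/path/to/libthree_interpolate_plugin.so --best --verbose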


After working through the plan mentioned above I ran into another interesting problem. Once I had implemented the ThreeInterpolate function and got it to compile, I tried trtexec. After setting up the library path correctly the plugin was found, but I stumbled on another layer that is not supported either: OneHot. Since it is a more recognizable layer I tried to find an implementation for it, and there is one here: GitHub - hobrasoft/tensorrt-onehot: Implements OneHot plugin for Nvidia TensorRT
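
For my own reference while looking at that plugin, this is what the ONNX OneHot nodes in the graph above compute: given indices, a depth, and a values pair [off, on], a new one-hot dimension of size depth is inserted at the given axis. A tiny NumPy reference (assuming in-range, non-negative indices):

import numpy as np

def onehot(indices, depth, values, axis=-1):
    # values holds [off_value, on_value], matching the ONNX OneHot "values" input
    off_value, on_value = values
    out = np.full(list(indices.shape) + [int(depth)], off_value, dtype=np.float32)
    np.put_along_axis(out, indices[..., None], on_value, axis=-1)
    # ONNX inserts the new depth dimension at position `axis` of the output
    return np.moveaxis(out, -1, axis)

# e.g. the OneHot[axis=1] node applied to a (4, 128, 3) index tensor with depth N
# yields a (4, N, 128, 3) tensor of off/on values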

After setting up that OneHot plugin I tried trtexec again and kept getting an error about there being no op importer for it. I tried a few more things on the TensorRT side thinking the problem was there, only to realize it was ONNX-parser related (which made sense later, considering the source file of the error). As of the latest version of onnx/onnx-tensorrt there should be a function that registers unknown operators straight to a TensorRT plugin:

But sadly, since I am limited to TensorRT 7.1.3, the version I have to work with does not have that fallback function implemented:

At this point I find myself wondering: could I use one TensorRT version (new enough to have that fallback function) to generate a serialized engine, and then a different one to load it?

I am guessing the answer is no, but I am running out of options.
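
For context, my rough understanding of what that fallback does in the newer parser is to look an unknown op type up in the TensorRT plugin registry and insert the matching plugin. A minimal Python sketch of just the lookup part (not the actual parser code):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# make sure built-in (and any preloaded custom) plugin creators are registered
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

def find_plugin_creator(op_type, version="1"):
    # scan the global plugin registry for a creator matching the ONNX op type
    for creator in trt.get_plugin_registry().plugin_creator_list:
        if creator.name == op_type and creator.plugin_version == version:
            return creator
    return None

print(find_plugin_creator("ThreeInterpolate"))
print(find_plugin_creator("OneHot"))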

This is the trtexec log:

&&&& RUNNING TensorRT.trtexec # trtexec --onnx=pointnet2_custom-plugin-three-interpolate_GY4_opset9_2021-12-09.onnx --best --verbose
[12/10/2021-16:42:28] [I] === Model Options ===
[12/10/2021-16:42:28] [I] Format: ONNX
[12/10/2021-16:42:28] [I] Model: pointnet2_custom-plugin-three-interpolate_GY4_opset9_2021-12-09.onnx
[12/10/2021-16:42:28] [I] Output:
[12/10/2021-16:42:28] [I] === Build Options ===
[12/10/2021-16:42:28] [I] Max batch: 1
[12/10/2021-16:42:28] [I] Workspace: 16 MB
[12/10/2021-16:42:28] [I] minTiming: 1
[12/10/2021-16:42:28] [I] avgTiming: 8
[12/10/2021-16:42:28] [I] Precision: FP32+FP16+INT8
[12/10/2021-16:42:28] [I] Calibration: Dynamic
[12/10/2021-16:42:28] [I] Safe mode: Disabled
[12/10/2021-16:42:28] [I] Save engine: 
[12/10/2021-16:42:28] [I] Load engine: 
[12/10/2021-16:42:28] [I] Builder Cache: Enabled
[12/10/2021-16:42:28] [I] NVTX verbosity: 0
[12/10/2021-16:42:28] [I] Inputs format: fp32:CHW
[12/10/2021-16:42:28] [I] Outputs format: fp32:CHW
[12/10/2021-16:42:28] [I] Input build shapes: model
[12/10/2021-16:42:28] [I] Input calibration shapes: model
[12/10/2021-16:42:28] [I] === System Options ===
[12/10/2021-16:42:28] [I] Device: 0
[12/10/2021-16:42:28] [I] DLACore: 
[12/10/2021-16:42:28] [I] Plugins:
[12/10/2021-16:42:28] [I] === Inference Options ===
[12/10/2021-16:42:28] [I] Batch: 1
[12/10/2021-16:42:28] [I] Input inference shapes: model
[12/10/2021-16:42:28] [I] Iterations: 10
[12/10/2021-16:42:28] [I] Duration: 3s (+ 200ms warm up)
[12/10/2021-16:42:28] [I] Sleep time: 0ms
[12/10/2021-16:42:28] [I] Streams: 1
[12/10/2021-16:42:28] [I] ExposeDMA: Disabled
[12/10/2021-16:42:28] [I] Spin-wait: Disabled
[12/10/2021-16:42:28] [I] Multithreading: Disabled
[12/10/2021-16:42:28] [I] CUDA Graph: Disabled
[12/10/2021-16:42:28] [I] Skip inference: Disabled
[12/10/2021-16:42:28] [I] Inputs:
[12/10/2021-16:42:28] [I] === Reporting Options ===
[12/10/2021-16:42:28] [I] Verbose: Enabled
[12/10/2021-16:42:28] [I] Averages: 10 inferences
[12/10/2021-16:42:28] [I] Percentile: 99
[12/10/2021-16:42:28] [I] Dump output: Disabled
[12/10/2021-16:42:28] [I] Profile: Disabled
[12/10/2021-16:42:28] [I] Export timing to JSON file: 
[12/10/2021-16:42:28] [I] Export output to JSON file: 
[12/10/2021-16:42:28] [I] Export profile to JSON file: 
[12/10/2021-16:42:28] [I] 
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::NMS_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::Reorg_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::Region_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::PriorBox_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::Normalize_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::RPROI_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::FlattenConcat_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::CropAndResize version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::Proposal version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::GenerateDetection_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::MultilevelProposeROI_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::MultilevelCropAndResize_TRT version 1
[12/10/2021-16:42:28] [V] [TRT] Registered plugin creator - ::CoordConvAC version 1
[12/10/2021-16:42:28] [E] [TRT] Could not register plugin creator -  ::ThreeInterpolate version 1
[12/10/2021-16:42:28] [E] [TRT] Could not register plugin creator -  ::OneHot version 1
----------------------------------------------------------------
Input filename:   pointnet2_custom-plugin-three-interpolate_GY4_opset9_2021-12-09.onnx
ONNX IR version:  0.0.6
Opset version:    9
Producer name:    pytorch
Producer version: 1.7
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::GridAnchor_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::NMS_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::Reorg_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::Region_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::PriorBox_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::Normalize_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::RPROI_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::BatchedNMS_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::FlattenConcat_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::CropAndResize version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::DetectionLayer_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::Proposal version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::ProposalLayer_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::PyramidROIAlign_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::ResizeNearest_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::SpecialSlice_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::InstanceNormalization_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::GenerateDetection_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::MultilevelProposeROI_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::MultilevelCropAndResize_TRT version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::CoordConvAC version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::ThreeInterpolate version 1
[12/10/2021-16:42:29] [V] [TRT] Registered plugin creator - ONNXTRT_NAMESPACE::OneHot version 1
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:203: Adding network input: pos with dtype: float32, dimensions: (4, 32768, 3)
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: pos for ONNX tensor: pos
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:203: Adding network input: category with dtype: int32, dimensions: (4, 32768)
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: category for ONNX tensor: category
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 402
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 403
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 405
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 406
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 408
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 409
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 411
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 412
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 414
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 415
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 417
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 418
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 420
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 421
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 423
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 424
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 426
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 427
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 429
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 430
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 432
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 433
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 435
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 436
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 438
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 439
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 441
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 442
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 444
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 445
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 447
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 448
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 450
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 451
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 452
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 453
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 454
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 455
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 456
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 457
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 458
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 459
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 460
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 461
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 462
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 463
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 464
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 465
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 466
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 467
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 468
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 469
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 470
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 471
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 472
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 473
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 474
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 475
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 476
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 477
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 478
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 479
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: 480
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: FC_layer.2.0.bias
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:90: Importing initializer: FC_layer.2.0.weight
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Cast_0 [Cast]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: pos
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Cast_0 [Cast] inputs: [pos -> (4, 32768, 3)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/builtin_op_importers.cpp:314: Casting to type: float32
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Cast_0 for ONNX node: Cast_0
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 107 for ONNX tensor: 107
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Cast_0 [Cast] outputs: [107 -> (4, 32768, 3)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Cast_1 [Cast]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: category
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Cast_1 [Cast] inputs: [category -> (4, 32768)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/builtin_op_importers.cpp:314: Casting to type: int32
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Cast_1 for ONNX node: Cast_1
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 108 for ONNX tensor: 108
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Cast_1 [Cast] outputs: [108 -> (4, 32768)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Constant_2 [Constant]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Constant_2 [Constant] inputs: 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Constant_2 [Constant] outputs: [109 -> (4, 512, 1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Shape_3 [Shape]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 107
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Shape_3 [Shape] inputs: [107 -> (4, 32768, 3)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Shape_3 for ONNX node: Shape_3
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 110 for ONNX tensor: 110
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Shape_3 [Shape] outputs: [110 -> (3)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Constant_4 [Constant]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Constant_4 [Constant] inputs: 
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Constant_4 [Constant] outputs: [111 -> ()], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Gather_5 [Gather]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 110
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 111
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Gather_5 [Gather] inputs: [110 -> (3)], [111 -> ()], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/builtin_op_importers.cpp:941: Using Gather axis: 0
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Gather_5 for ONNX node: Gather_5
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 112 for ONNX tensor: 112
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Gather_5 [Gather] outputs: [112 -> ()], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Unsqueeze_6 [Unsqueeze]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 112
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Unsqueeze_6 [Unsqueeze] inputs: [112 -> ()], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:1429: Original shape: (), unsqueezing to: (1)
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Unsqueeze_6 for ONNX node: Unsqueeze_6
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 117 for ONNX tensor: 117
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Unsqueeze_6 [Unsqueeze] outputs: [117 -> (1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Concat_7 [Concat]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 452
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 453
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 117
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Concat_7 [Concat] inputs: [452 -> (1)], [453 -> (1)], [117 -> (1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Concat_7 for ONNX node: Concat_7
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 118 for ONNX tensor: 118
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Concat_7 [Concat] outputs: [118 -> (3)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Unsqueeze_8 [Unsqueeze]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 112
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Unsqueeze_8 [Unsqueeze] inputs: [112 -> ()], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:1429: Original shape: (), unsqueezing to: (1)
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Unsqueeze_8 for ONNX node: Unsqueeze_8
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 121 for ONNX tensor: 121
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Unsqueeze_8 [Unsqueeze] outputs: [121 -> (1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Concat_9 [Concat]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 454
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 455
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 121
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Concat_9 [Concat] inputs: [454 -> (1)], [455 -> (1)], [121 -> (1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Concat_9 for ONNX node: Concat_9
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 122 for ONNX tensor: 122
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Concat_9 [Concat] outputs: [122 -> (3)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Shape_10 [Shape]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 118
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Shape_10 [Shape] inputs: [118 -> (3)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Shape_10 for ONNX node: Shape_10
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 123 for ONNX tensor: 123
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Shape_10 [Shape] outputs: [123 -> (1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: ConstantOfShape_11 [ConstantOfShape]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 123
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: ConstantOfShape_11 [ConstantOfShape] inputs: [123 -> (1)], 
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: ConstantOfShape_11 for ONNX node: ConstantOfShape_11
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 124 for ONNX tensor: 124
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: ConstantOfShape_11 [ConstantOfShape] outputs: [124 -> (-1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Expand_12 [Expand]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 109
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 124
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Expand_12 [Expand] inputs: [109 -> (4, 512, 1)], [124 -> (-1)], 
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Expand_12 for ONNX node: Expand_12
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 125 for ONNX tensor: 125
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Expand_12 [Expand] outputs: [125 -> (-1, -1, -1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Tile_13 [Tile]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 125
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 122
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Tile_13 [Tile] inputs: [125 -> (-1, -1, -1)], [122 -> (3)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Tile_13 for ONNX node: Tile_13
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 126 for ONNX tensor: 126
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Tile_13 [Tile] outputs: [126 -> (-1, -1, -1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Cast_14 [Cast]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 126
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Cast_14 [Cast] inputs: [126 -> (-1, -1, -1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/builtin_op_importers.cpp:314: Casting to type: int32
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Cast_14 for ONNX node: Cast_14
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 127 for ONNX tensor: 127
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Cast_14 [Cast] outputs: [127 -> (-1, -1, -1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Constant_15 [Constant]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Constant_15 [Constant] inputs: 
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Constant_15 [Constant] outputs: [128 -> (2)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Constant_16 [Constant]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Constant_16 [Constant] inputs: 
[12/10/2021-16:42:29] [W] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Constant_16 [Constant] outputs: [129 -> (1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Shape_17 [Shape]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 107
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Shape_17 [Shape] inputs: [107 -> (4, 32768, 3)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Shape_17 for ONNX node: Shape_17
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 130 for ONNX tensor: 130
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Shape_17 [Shape] outputs: [130 -> (3)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: Gather_18 [Gather]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 130
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 129
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: Gather_18 [Gather] inputs: [130 -> (3)], [129 -> (1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/builtin_op_importers.cpp:941: Using Gather axis: 0
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:122: Registering layer: Gather_18 for ONNX node: Gather_18
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ImporterContext.hpp:97: Registering tensor: 131 for ONNX tensor: 131
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:180: Gather_18 [Gather] outputs: [131 -> (1)], 
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:107: Parsing node: OneHot_19 [OneHot]
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 127
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 131
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:123: Searching for input: 128
[12/10/2021-16:42:29] [V] [TRT] /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:129: OneHot_19 [OneHot] inputs: [127 -> (-1, -1, -1)], [131 -> (1)], [128 -> (2)], 
While parsing node number 19 [OneHot -> "132"]:
--- Begin node ---
input: "127"
input: "131"
input: "128"
output: "132"
name: "OneHot_19"
op_type: "OneHot"
attribute {
  name: "axis"
  i: 1
  type: INT
}

--- End node ---
ERROR: /xavier_ssd/tensorrt/TensorRT/parsers/onnx/ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: OneHot
[12/10/2021-16:42:29] [E] Failed to parse onnx file
[12/10/2021-16:42:29] [E] Parsing model failed
[12/10/2021-16:42:29] [E] Engine creation failed
[12/10/2021-16:42:29] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec --onnx=pointnet2_custom-plugin-three-interpolate_GY4_opset9_2021-12-09.onnx --best --verbose

Thanks,

Mark

Hi Mark,

The OneHot operator is currently not supported in TensorRT. You may need to implement a custom plugin.

Please check the following doc for supported operators.

Thank you.

Hi @spolisetty,

As I stated in my message, I had already implemented OneHot as a custom operator using one found on GitHub, so that was not the main issue mentioned. The bigger issue is that the TensorRT version I am using does not implement the “fallback function” mechanism, meaning that if there is no existing ONNX parser importer available, trtexec fails even though I do have a custom operator available and identified. I thought maybe I could perform the serialization with a newer version that supports fallback functions and then load the model in the version I intend to use, but I would expect that to fail because things are serialized/deserialized differently across versions… could it be possible though?

Thanks,

Mark

Hi,

I have managed to get past the previous roadblock with the ONNX parser importer by defining built-in importers for my custom ops, following the code for the other TRT operators at the end of the builtin_op_importers.cpp file for version 7.1.3 (main reference being the TRT_PluginV2 importer @ line 3584); a rough sketch of that approach is included after the error log below. The importers now reach the point where both custom plugin layers are registered, and the ONNX parsing step passes successfully. In the validation step I now find myself with this error:

[01/07/2022-11:34:22] [E] [TRT] Where_187: condition tensor must have boolean type
[01/07/2022-11:34:22] [E] [TRT] Layer Where_187 failed validation
[01/07/2022-11:34:22] [E] [TRT] Network validation failed.
[01/07/2022-11:34:22] [E] Engine creation failed
[01/07/2022-11:34:22] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec --onnx=pointnet2_custom-plugin-three-interpolate_GY4_opset9_2021-12-09.onnx --best --verbose --plugins=libnvinfer_plugin.so --plugins=/xavier_ssd/tensorrt/TensorRT/build/out/libnvonnxparser.so --plugins=libnvcaffeparser.so --explicitBatch
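For reference, a rough sketch of what such a built-in importer can look like (illustrative only, not the exact code from this project: the op/plugin name "ThreeInterpolate" and plugin version "1" are assumptions, and the helper macros follow the patterns used by the surrounding importers in builtin_op_importers.cpp):

// Sketch of a built-in ONNX importer that maps an unsupported op onto an
// already-registered TensorRT plugin (registered via REGISTER_TENSORRT_PLUGIN).
DEFINE_BUILTIN_OP_IMPORTER(ThreeInterpolate)
{
    // Convert every ONNX input into an ITensor usable by the TensorRT network.
    std::vector<nvinfer1::ITensor*> tensors;
    for (auto& input : inputs)
    {
        tensors.push_back(&convertToTensor(input, ctx));
    }

    // Look up the plugin creator by name/version in the global plugin registry.
    auto* creator = getPluginRegistry()->getPluginCreator("ThreeInterpolate", "1");
    ASSERT(creator != nullptr, ErrorCode::kUNSUPPORTED_NODE);

    // No plugin fields in this sketch; node attributes could be forwarded here.
    nvinfer1::PluginFieldCollection fc{0, nullptr};
    nvinfer1::IPluginV2* plugin = creator->createPlugin(node.name().c_str(), &fc);
    ASSERT(plugin != nullptr, ErrorCode::kUNSUPPORTED_NODE);

    // Add the plugin layer with the converted inputs and return all of its outputs.
    auto* layer = ctx->network()->addPluginV2(
        tensors.data(), static_cast<int>(tensors.size()), *plugin);
    RETURN_ALL_OUTPUTS(layer);
}

In the 7.1 code the DEFINE_BUILTIN_OP_IMPORTER macro should also take care of registering the importer for that op name, so no separate registration call should be needed.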

I verified that the condition tensor is of boolean type, but this error might be about the actual values inside the tensor not being boolean. I find it odd that this is happening, considering that the layers right before it are not the new custom ones and have no problems being validated (meaning the data should be in the right format by the time it gets here).

I tried tracking the error through the source code to see where it is raised but did not manage to find its origin. I can follow the flow of the code up to the point where the ONNX parsing ends and the conversion to an engine begins in these files, but can't get any further than that:

This would be the complete output of the trtexec tool at this time (there are some prints for debugging purposes that can be identified because they don’t have the TRT log pattern):

trtexec_w_custom_plugins_2022-01-07.txt (295.9 KB)

Thanks,

Mark

Hi,

Since the built-in Where layer implementation was giving issues, I tried creating another plugin for the Where function (as suggested by another NVIDIA representative) in an attempt to be more flexible with the data needed. I ended up getting a different error in the same place as the previous one:

[01/11/2022-10:42:01] [E] [TRT] Layer: Where_187's output can not be used as shape tensor.
[01/11/2022-10:42:01] [E] [TRT] Network validation failed.
[01/11/2022-10:42:01] [E] Engine creation failed
[01/11/2022-10:42:01] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec --onnx=pointnet2_custom-plugin-three-interpolate_GY4_opset9_2021-12-09.onnx --best --verbose --plugins=libnvinfer_plugin.so --plugins=/xavier_ssd/tensorrt/TensorRT/build/out/libnvonnxparser.so --plugins=libnvcaffeparser.so --explicitBatch

It seems as though the tensors passed between those layers have certain properties that cannot easily be worked around in the middle of the process… the only way I managed to fully get past the error was to pass one of the inputs through as the output (since it is something that can be converted to a ShapeTensor and did not go through any function that would require the condition tensor to be boolean). For now it's more of a workaround to allow progress on the following parts, but it seems like something that's deeply embedded and hard to modify manually…

What should this output tensor contain, besides the corresponding shape and data type, to pass this step successfully?

Thanks,

Mark

Hi,

Sorry for the delay in addressing this issue.
Could you please let us know and give more details if you're still facing the above issue or if any of your queries are unanswered.

Thank you.

Hi @Mark.RiveraMelendez,

Any updates on your progress? Were you able to port the pointnet2 to TensorRT?

Thank you in advance.

Cheers,

Hi @schenn,

Due to many of these issues, the deployment was continued in C++ TensorFlow. I can't share the full details, but as a general note, custom layer models are better deployed through serialization by rewriting them in a way you know they can be deployed. Frameworks can be too tricky to modify in order to achieve this type of deployment.

Good luck!

@Mark.RiveraMelendez
Hi Mark,

Can you please explain what you mean by “custom layer models are better deployed through serialization”?

Thanks!!

@Mark.RiveraMelendez I also faced issues getting a custom plugin imported with older versions of TensorRT. My solution at the time was to export the supported pieces of the model to ONNX and then connect the unsupported layers through the TensorRT network builder.

For example, I would export an ONNX model that has an output which is the input to that layer, and an input that is the output from that layer. Then I would parse the ONNX file, find those inputs/outputs, and attach those tensors to the custom plugin.

e.g. ONNX: input -> A -> fake_output, fake_input -> B -> output
Then through the C++ API I would turn it into:
input -> A -> custom_plugin -> B -> output
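
A minimal sketch of that splicing step might look like this (illustrative only: fake_output / fake_input are the placeholder tensor names from the example above, and the custom plugin instance is assumed to already be created):

#include <NvInfer.h>
#include <string>

// After parsing the ONNX file (which exposes fake_output as a graph output and
// fake_input as a graph input), look the placeholder tensors up by name and
// splice the custom plugin in between sub-graphs A and B.
void spliceCustomPlugin(nvinfer1::INetworkDefinition* network, nvinfer1::IPluginV2& plugin)
{
    nvinfer1::ITensor* fakeOutput = nullptr; // output of sub-graph A
    nvinfer1::ITensor* fakeInput  = nullptr; // input of sub-graph B

    for (int i = 0; i < network->getNbOutputs(); ++i)
        if (std::string(network->getOutput(i)->getName()) == "fake_output")
            fakeOutput = network->getOutput(i);

    for (int i = 0; i < network->getNbInputs(); ++i)
        if (std::string(network->getInput(i)->getName()) == "fake_input")
            fakeInput = network->getInput(i);

    if (fakeOutput == nullptr || fakeInput == nullptr)
        return; // placeholders not found, nothing to splice

    // A's output feeds the custom plugin...
    nvinfer1::ITensor* pluginInputs[] = {fakeOutput};
    nvinfer1::IPluginV2Layer* pluginLayer = network->addPluginV2(pluginInputs, 1, plugin);

    // ...and every layer that consumed fake_input is rewired to consume the
    // plugin's output instead.
    for (int l = 0; l < network->getNbLayers(); ++l)
    {
        nvinfer1::ILayer* layer = network->getLayer(l);
        for (int i = 0; i < layer->getNbInputs(); ++i)
            if (layer->getInput(i) == fakeInput)
                layer->setInput(i, *pluginLayer->getOutput(0));
    }

    // fake_output is no longer a real network output; the now-unused fake_input
    // binding may still need handling depending on the TensorRT version.
    network->unmarkOutput(*fakeOutput);
}

The engine can then be built from the modified network as usual.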

I can give more precise details if it’s helpful