It works fine, but when I try to convert it to a TensorRT engine with

tao-converter tao_custom_128.etlt -k perception -e trt.engine -p points,1x25000x4,1x25000x4,1x204800x4 -p num_points,1,1,1

it fails with:
[ERROR] ModelImporter.cpp:743: --- End node ---
[ERROR] ModelImporter.cpp:746: ERROR: builtin_op_importers.cpp:5374 In function importFallbackPluginImporter:
[8] Assertion failed: plugin && "Could not create plugin"
Assertion failed: plugin && "Could not create plugin"
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[INFO] Detected input dimensions from the model: (-1, 25000, 4)
[INFO] Detected input dimensions from the model: (-1)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 25000, 4) for input: points
[INFO] Using optimization profile opt shape: (1, 25000, 4) for input: points
[INFO] Using optimization profile max shape: (1, 204800, 4) for input: points
[INFO] Using optimization profile min shape: (1) for input: num_points
[INFO] Using optimization profile opt shape: (1) for input: num_points
[INFO] Using optimization profile max shape: (1) for input: num_points
[ERROR] 4: [network.cpp::validate::2738] Error Code 4: Internal Error (Network must have at least one output)
Does anyone know how to fix this?
For reference, I'm using the nvidia-docker image for the TAO Toolkit.
[INFO] [MemUsageChange] Init CUDA: CPU +12, GPU +0, now: CPU 24, GPU 78 (MiB)
[INFO] [MemUsageChange] Init builder kernel library: CPU +264, GPU +74, now: CPU 340, GPU 152 (MiB)
[INFO] ----------------------------------------------------------------
[INFO] Input filename: /tmp/file0mo25a
[INFO] ONNX IR version: 0.0.8
[INFO] Opset version: 11
[INFO] Producer name: pytorch
[INFO] Producer version: 1.13.0
[INFO] Domain:
[INFO] Model version: 0
[INFO] Doc string:
[INFO] ----------------------------------------------------------------
[WARNING] onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[INFO] No importer registered for op: VoxelGeneratorPlugin. Attempting to import as plugin.
[INFO] Searching for plugin: VoxelGeneratorPlugin, plugin_version: 1, plugin_namespace:
[INFO] Successfully created plugin: VoxelGeneratorPlugin
[INFO] No importer registered for op: PillarScatterPlugin. Attempting to import as plugin.
[INFO] Searching for plugin: PillarScatterPlugin, plugin_version: 1, plugin_namespace:
[INFO] Successfully created plugin: PillarScatterPlugin
[INFO] No importer registered for op: DecodeBbox3DPlugin. Attempting to import as plugin.
[INFO] Searching for plugin: DecodeBbox3DPlugin, plugin_version: 1, plugin_namespace:
[INTERNAL_ERROR] Validation failed: static_cast<size_t>(num_classes_) * 2 * 4 == anchors_.size()
/opt/trt_oss_src/TensorRT/plugin/decodeBbox3DPlugin/decodeBbox3D.cpp:79
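The failing validation at decodeBbox3D.cpp:79 is the real clue, not the encoding key: the plugin requires the flattened anchor array to hold exactly `num_classes * 2 * 4` values, i.e. it hard-codes two anchor rotations per class. A minimal sketch of that check (my own reconstruction for illustration; the meaning of the 4 values per anchor is an assumption, presumably size plus rotation):

```python
def anchors_are_valid(num_classes: int, anchors: list) -> bool:
    """Mirror of the plugin's check at decodeBbox3D.cpp:79:
    2 rotations per class, 4 values per (class, rotation) pair."""
    return num_classes * 2 * 4 == len(anchors)

# One class with two rotations -> 8 values: the check passes.
two_rot = [3.9, 1.6, 1.56, 0.0,      # rotation 0
           3.9, 1.6, 1.56, 1.5708]   # rotation pi/2
print(anchors_are_valid(1, two_rot))   # True

# One class with a single rotation -> only 4 values: the check
# fails, which surfaces as "Could not create plugin" below.
one_rot = [3.9, 1.6, 1.56, 0.0]
print(anchors_are_valid(1, one_rot))   # False
```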
[ERROR] ModelImporter.cpp:743: --- End node ---
[ERROR] ModelImporter.cpp:746: ERROR: builtin_op_importers.cpp:5374 In function importFallbackPluginImporter:
[8] Assertion failed: plugin && "Could not create plugin"
Assertion failed: plugin && "Could not create plugin"
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[INFO] Detected input dimensions from the model: (-1, 25000, 4)
[INFO] Detected input dimensions from the model: (-1)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 25000, 4) for input: points
[INFO] Using optimization profile opt shape: (1, 25000, 4) for input: points
[INFO] Using optimization profile max shape: (1, 25000, 4) for input: points
[INFO] Using optimization profile min shape: (1) for input: num_points
[INFO] Using optimization profile opt shape: (1) for input: num_points
[INFO] Using optimization profile max shape: (1) for input: num_points
[ERROR] 4: [network.cpp::validate::2738] Error Code 4: Internal Error (Network must have at least one output)
[ERROR] Unable to create engine
Segmentation fault (core dumped)
For clarity, I've added the entire output of the command above.
Is there a Docker image available with the TensorRT version you suggested, or do I have to compile everything myself?
Strangely, if I use the weights trained on the KITTI dataset from the website together with the default configuration, the conversion works, but with my weights (trained on a custom dataset) it doesn't.
Anyway, I've also tried the conversion with the tao-converter binary you linked; it fails with both the latest and the oldest version, and the error is still the same.
So, I've found the problem! In my setup I used a single value for anchor_rotations, since my dataset contains no rotated boxes, which produces a one-element array, but the converter only works if exactly two values are specified.
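For anyone hitting the same assertion: the workaround is to keep two entries in anchor_rotations even when the second rotation is never used by your data. The fragment below is a hypothetical sketch of the relevant section of an OpenPCDet-style TAO PointPillars spec file (class name, sizes, and exact field nesting are placeholders, adjust them to your own spec); the key point is the two-element anchor_rotations list:

```yaml
anchor_generator_config:
  - class_name: 'Car'                  # example class; use your own
    anchor_sizes: [[3.9, 1.6, 1.56]]   # example size [l, w, h]
    # Two rotation values are required by DecodeBbox3DPlugin,
    # even if your dataset contains no rotated boxes.
    anchor_rotations: [0, 1.57]
```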
I don't know who is responsible for the development of this framework, but for a future release I'd strongly suggest allowing a configuration with no rotation, even if it is a rare case. It also forced me to redo the training, which is not ideal.