Not long ago, I asked a question about running Detectron2 on TensorRT, although that working environment was a Windows system.
- Link (Second part) : About Detectron2 on TensorRT
I have now reproduced the issue on my Jetson TX2 device. Detectron2 is quite popular nowadays and represents one of the SOTA techniques. I hope this issue gets some attention, because I believe many people want to use Detectron2 with TensorRT on Jetson devices as well.
To reproduce my issue:
Below is what I have tried. The successful part covers only the backbone + FPN.
- Git clone Detectron2 and install it with setup.py (I installed this version. Here)
- Go to test_detect.py (the document is here) and put it in the (detectron2/tools) folder
- Open a command line/terminal in the (detectron2/tools) folder, and type
Please check line 172 and line 173 below:
dummy_convert(cfg, only_backbone = True)  # only backbone + FPN
dummy_convert(cfg, only_backbone = False) # all
With only_backbone = True, the conversion succeeds, but it covers only the backbone + FPN.
With only_backbone = False, meaning the whole model is included, it goes wrong.
Here is my entire output:
Traceback (most recent call last):
  File "test_detect.py", line 173, in <module>
    dummy_convert(cfg, only_backbone = False) # all
  File "test_detect.py", line 152, in dummy_convert
    export_params=True
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/__init__.py", line 143, in export
    strip_doc_string, dynamic_axes, keep_initializers_as_inputs)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 66, in export
    dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 382, in _export
    fixed_batch_size=fixed_batch_size)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 249, in _model_to_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args, training)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 206, in _trace_and_get_graph_from_model
    trace, torch_out, inputs_states = torch.jit.get_trace_graph(model, args, _force_outplace=True, _return_inputs_states=True)
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/__init__.py", line 275, in get_trace_graph
    return LegacyTracedModule(f, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/__init__.py", line 355, in forward
    out_vars, _ = _flatten(out)
RuntimeError: Only tuples, lists and Variables supported as JIT inputs/outputs. Dictionaries and strings are also accepted but their usage is not recommended. But got unsupported type Instances
We know that to use a model with TensorRT, we have to export an ONNX model first and then convert the ONNX model to a TensorRT engine. However, many functions in Detectron2 are written as Python classes, so we cannot export the model to ONNX because of this Python-class issue.
- PyTorch :
- Detectron2 :
- CUDA :
- Python :
- JetPack version :
- cuDNN version :
- cmake version :
- onnxruntime-gpu-tensorrt :