Hello,
Is it possible to convert a Detectron2 model to an ONNX model, and then the ONNX model to a TensorRT engine?
detectron2: A PyTorch-based modular object detection library
Hey, I want to build detectron2 on an NVIDIA Jetson Nano. The Jetson is running on Linux for Tegra, so I was wondering if I can just build detectron2 the same way as on every other Linux system? Has anyone done this and can share their experience with me?
I don't want to use the detectron2 models in TensorRT or something, I just want plain detectron2 to run on the Jetson.
Thanks!
Thank you.
Hi,
It is possible.
You can convert the Detectron2 model into ONNX with the following exporter:
https://detectron2.readthedocs.io/tutorials/deployment.html#caffe2-deployment
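For reference, here is a minimal sketch of that export path based on the tutorial above. The config file, weights, and sample image path are placeholders, and the exact API may differ across detectron2 versions:

import onnx
import torch
from detectron2 import model_zoo
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.data.detection_utils import read_image
from detectron2.export import Caffe2Tracer
from detectron2.modeling import build_model

# Build and load a standard model (placeholder config/weights).
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

# Trace with one real sample input; random inputs with no detections
# typically lead to wrong traces.
img = read_image("sample.jpg", format="BGR")  # placeholder image path
inputs = [{"image": torch.as_tensor(img.transpose(2, 0, 1).copy())}]

tracer = Caffe2Tracer(cfg, model, inputs)
onnx.save(tracer.export_onnx(), "model.onnx")

Note that the exported graph may still contain Caffe2-style custom ops that TensorRT cannot parse directly, which is why the plugin caveat below applies.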
Then you can create a TensorRT engine with our trtexec app directly:
/usr/src/tensorrt/bin/trtexec --onnx=[file] --saveEngine=[file]
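Depending on your TensorRT version, trtexec also accepts options such as --fp16 and --workspace (size in MB on this version); check /usr/src/tensorrt/bin/trtexec --help for what your build supports. For example:
/usr/src/tensorrt/bin/trtexec --onnx=[file] --saveEngine=[file] --workspace=1024 --fp16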
But please note that some of the required layers have not been added to onnx2trt or TensorRT yet.
So you will need to implement them as plugin layers on your own.
We are working on enabling this, but it is not finished yet.
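If you do have such a plugin compiled as a shared library, a minimal sketch for registering it with the TensorRT Python API before building an engine might look like this (libmy_plugins.so is a hypothetical name):

import ctypes
import tensorrt as trt

# Load the custom plugin library so its plugin creators self-register
# (hypothetical library name).
ctypes.CDLL("libmy_plugins.so", mode=ctypes.RTLD_GLOBAL)

# Register TensorRT's built-in plugin creators as well.
TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

If your trtexec build supports the --plugins option, it can also load such a library directly, so the engine-building command above still works with custom layers.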
Thanks.
Hello,
AastaLLL:
But please note that some of the required layers have not been added to onnx2trt or TensorRT yet.
So you will need to implement them as plugin layers on your own.
We are working on enabling this, but it is not finished yet.
Has there been any progress on this since that reply?
URL: https://github.com/ultralytics/yolov3 (YOLOv3 in PyTorch > ONNX > CoreML > TFLite)
I have converted a model trained with YOLOv3 (implemented in PyTorch) to ONNX, and I want to use it with TensorRT.
I tried building a TensorRT engine from it, but I got the following error.
=============== error ==================
jetson7@jetson7-desktop:~$ /usr/src/tensorrt/bin/trtexec --onnx=/home/jetson7/Downloads/best.onnx --saveEngine=best_trt
&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=/home/jetson7/Downloads/best.onnx --saveEngine=best_trt
[01/05/2021-11:25:41] [I] === Model Options ===
[01/05/2021-11:25:41] [I] Format: ONNX
[01/05/2021-11:25:41] [I] Model: /home/jetson7/Downloads/best.onnx
[01/05/2021-11:25:41] [I] Output:
[01/05/2021-11:25:41] [I] === Build Options ===
[01/05/2021-11:25:41] [I] Max batch: 1
[01/05/2021-11:25:41] [I] Workspace: 16 MB
[01/05/2021-11:25:41] [I] minTiming: 1
[01/05/2021-11:25:41] [I] avgTiming: 8
[01/05/2021-11:25:41] [I] Precision: FP32
[01/05/2021-11:25:41] [I] Calibration:
[01/05/2021-11:25:41] [I] Safe mode: Disabled
[01/05/2021-11:25:41] [I] Save engine: best_trt
[01/05/2021-11:25:41] [I] Load engine:
[01/05/2021-11:25:41] [I] Builder Cache: Enabled
[01/05/2021-11:25:41] [I] NVTX verbosity: 0
[01/05/2021-11:25:41] [I] Inputs format: fp32:CHW
[01/05/2021-11:25:41] [I] Outputs format: fp32:CHW
[01/05/2021-11:25:41] [I] Input build shapes: model
[01/05/2021-11:25:41] [I] Input calibration shapes: model
[01/05/2021-11:25:41] [I] === System Options ===
[01/05/2021-11:25:41] [I] Device: 0
[01/05/2021-11:25:41] [I] DLACore:
[01/05/2021-11:25:41] [I] Plugins:
[01/05/2021-11:25:41] [I] === Inference Options ===
[01/05/2021-11:25:41] [I] Batch: 1
[01/05/2021-11:25:41] [I] Input inference shapes: model
[01/05/2021-11:25:41] [I] Iterations: 10
[01/05/2021-11:25:41] [I] Duration: 3s (+ 200ms warm up)
[01/05/2021-11:25:41] [I] Sleep time: 0ms
[01/05/2021-11:25:41] [I] Streams: 1
[01/05/2021-11:25:41] [I] ExposeDMA: Disabled
[01/05/2021-11:25:41] [I] Spin-wait: Disabled
[01/05/2021-11:25:41] [I] Multithreading: Disabled
[01/05/2021-11:25:41] [I] CUDA Graph: Disabled
[01/05/2021-11:25:41] [I] Skip inference: Disabled
[01/05/2021-11:25:41] [I] Inputs:
[01/05/2021-11:25:41] [I] === Reporting Options ===
[01/05/2021-11:25:41] [I] Verbose: Disabled
[01/05/2021-11:25:41] [I] Averages: 10 inferences
[01/05/2021-11:25:41] [I] Percentile: 99
[01/05/2021-11:25:41] [I] Dump output: Disabled
[01/05/2021-11:25:41] [I] Profile: Disabled
[01/05/2021-11:25:41] [I] Export timing to JSON file:
[01/05/2021-11:25:41] [I] Export output to JSON file:
[01/05/2021-11:25:41] [I] Export profile to JSON file:
[01/05/2021-11:25:41] [I]
Input filename: /home/jetson7/Downloads/best.onnx
ONNX IR version: 0.0.6
Opset version: 12
Producer name: pytorch
Producer version: 1.7
Domain:
Model version: 0
Doc string:
[01/05/2021-11:25:49] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/05/2021-11:27:07] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[01/05/2021-11:30:33] [I] [TRT] Detected 1 inputs and 3 output network tensors.
[01/05/2021-11:31:03] [E] [TRT] FAILED_ALLOCATION: std::bad_alloc
[01/05/2021-11:31:03] [E] Engine serialization failed
[01/05/2021-11:31:03] [E] Saving engine to file failed
[01/05/2021-11:31:03] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=/home/jetson7/Downloads/best.onnx --saveEngine=best_trt
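For what it's worth, the FAILED_ALLOCATION: std::bad_alloc above is raised while serializing the built engine, which usually means the process ran out of host memory on a memory-constrained Jetson. A commonly suggested workaround, not confirmed in this thread, is to add a swap file before rebuilding:

# Create and enable a 4 GB swap file (standard Linux commands).
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile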
Thank you.
Hi,
Sorry to keep you waiting.
Due to limited resources, this task is still ongoing and not finished yet.
We will keep you updated, and we are really sorry for the inconvenience.
Thanks.