Can we convert Detectron2 to TensorRT engine?

Hi there!

I was wondering if it is possible to convert a Detectron2 model to a TensorRT engine?

From this 2020 discussion, what I gathered is that a solution was still being worked on - About the GPU-Accelerated Libraries category.

Is there any new information on this?

Thank you,
Amoolya

Hi,

Several newer TensorRT versions have been released since that 2020 discussion.

Do you have the Detectron2 model in ONNX format?
If yes, it’s recommended to give it a try.

$ /usr/src/tensorrt/bin/trtexec --onnx=[model]
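
For example, something like the following should also let you save the serialized engine and try FP16 (the file names here are just placeholders):

$ /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16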

Thanks.

Hi, sorry for the delayed response.

So, I first used detectron2.export (Caffe2Tracer().export_onnx()) to convert the Detectron2 model to ONNX. I then ran the command you recommended to convert the ONNX model to a TensorRT engine, but I ran into the following error.
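
For reference, here is roughly the export code I used (a minimal sketch; the config file, weights, and image paths are placeholders from my setup):

import cv2
import onnx
import torch
from detectron2.config import get_cfg
from detectron2.export import Caffe2Tracer
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer

# Build the model from a config and load trained weights (paths are placeholders)
cfg = get_cfg()
cfg.merge_from_file("config.yaml")
cfg.MODEL.WEIGHTS = "model_final.pth"
model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

# Caffe2Tracer expects one sample input in Detectron2's inference format
image = cv2.imread("sample.jpg")
height, width = image.shape[:2]
image_tensor = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))
inputs = [{"image": image_tensor, "height": height, "width": width}]

# Trace and export to ONNX (this path emits Caffe2-style ops such as AliasWithName)
tracer = Caffe2Tracer(cfg, model, inputs)
onnx_model = tracer.export_onnx()
onnx.save(onnx_model, "deploy.onnx")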

[04/25/2022-11:29:58] [I] === Model Options ===
[04/25/2022-11:29:58] [I] Format: ONNX
[04/25/2022-11:29:58] [I] Model: deploy.onnx
[04/25/2022-11:29:58] [I] Output:
[04/25/2022-11:29:58] [I] === Build Options ===
[04/25/2022-11:29:58] [I] Max batch: explicit batch
[04/25/2022-11:29:58] [I] Workspace: 16 MiB
[04/25/2022-11:29:58] [I] minTiming: 1
[04/25/2022-11:29:58] [I] avgTiming: 8
[04/25/2022-11:29:58] [I] Precision: FP32
[04/25/2022-11:29:58] [I] Calibration:
[04/25/2022-11:29:58] [I] Refit: Disabled
[04/25/2022-11:29:58] [I] Sparsity: Disabled
[04/25/2022-11:29:58] [I] Safe mode: Disabled
[04/25/2022-11:29:58] [I] DirectIO mode: Disabled
[04/25/2022-11:29:58] [I] Restricted mode: Disabled
[04/25/2022-11:29:58] [I] Save engine:
[04/25/2022-11:29:58] [I] Load engine:
[04/25/2022-11:29:58] [I] Profiling verbosity: 0
[04/25/2022-11:29:58] [I] Tactic sources: Using default tactic sources
[04/25/2022-11:29:58] [I] timingCacheMode: local
[04/25/2022-11:29:58] [I] timingCacheFile:
[04/25/2022-11:29:58] [I] Input(s)s format: fp32:CHW
[04/25/2022-11:29:58] [I] Output(s)s format: fp32:CHW
[04/25/2022-11:29:58] [I] Input build shapes: model
[04/25/2022-11:29:58] [I] Input calibration shapes: model
[04/25/2022-11:29:58] [I] === System Options ===
[04/25/2022-11:29:58] [I] Device: 0
[04/25/2022-11:29:58] [I] DLACore:
[04/25/2022-11:29:58] [I] Plugins:
[04/25/2022-11:29:58] [I] === Inference Options ===
[04/25/2022-11:29:58] [I] Batch: Explicit
[04/25/2022-11:29:58] [I] Input inference shapes: model
[04/25/2022-11:29:58] [I] Iterations: 10
[04/25/2022-11:29:58] [I] Duration: 3s (+ 200ms warm up)
[04/25/2022-11:29:58] [I] Sleep time: 0ms
[04/25/2022-11:29:58] [I] Idle time: 0ms
[04/25/2022-11:29:58] [I] Streams: 1
[04/25/2022-11:29:58] [I] ExposeDMA: Disabled
[04/25/2022-11:29:58] [I] Data transfers: Enabled
[04/25/2022-11:29:58] [I] Spin-wait: Disabled
[04/25/2022-11:29:58] [I] Multithreading: Disabled
[04/25/2022-11:29:58] [I] CUDA Graph: Disabled
[04/25/2022-11:29:58] [I] Separate profiling: Disabled
[04/25/2022-11:29:58] [I] Time Deserialize: Disabled
[04/25/2022-11:29:58] [I] Time Refit: Disabled
[04/25/2022-11:29:58] [I] Skip inference: Disabled
[04/25/2022-11:29:58] [I] Inputs:
[04/25/2022-11:29:58] [I] === Reporting Options ===
[04/25/2022-11:29:58] [I] Verbose: Disabled
[04/25/2022-11:29:58] [I] Averages: 10 inferences
[04/25/2022-11:29:58] [I] Percentile: 99
[04/25/2022-11:29:58] [I] Dump refittable layers:Disabled
[04/25/2022-11:29:58] [I] Dump output: Disabled
[04/25/2022-11:29:58] [I] Profile: Disabled
[04/25/2022-11:29:58] [I] Export timing to JSON file:
[04/25/2022-11:29:58] [I] Export output to JSON file:
[04/25/2022-11:29:58] [I] Export profile to JSON file:
[04/25/2022-11:29:58] [I]
[04/25/2022-11:29:58] [I] === Device Information ===
[04/25/2022-11:29:58] [I] Selected Device: NVIDIA Tegra X1
[04/25/2022-11:29:58] [I] Compute Capability: 5.3
[04/25/2022-11:29:58] [I] SMs: 1
[04/25/2022-11:29:58] [I] Compute Clock Rate: 0.9216 GHz
[04/25/2022-11:29:58] [I] Device Global Memory: 3956 MiB
[04/25/2022-11:29:58] [I] Shared Memory per SM: 64 KiB
[04/25/2022-11:29:58] [I] Memory Bus Width: 64 bits (ECC disabled)
[04/25/2022-11:29:58] [I] Memory Clock Rate: 0.01275 GHz
[04/25/2022-11:29:58] [I]
[04/25/2022-11:29:58] [I] TensorRT version: 8.2.1
[04/25/2022-11:30:00] [I] [TRT] [MemUsageChange] Init CUDA: CPU +229, GPU +0, now: CPU 248, GPU 3423 (MiB)
[04/25/2022-11:30:01] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 248 MiB, GPU 3456 MiB
[04/25/2022-11:30:01] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 278 MiB, GPU 3486 MiB
[04/25/2022-11:30:01] [I] Start parsing network model
[04/25/2022-11:30:01] [I] [TRT] ----------------------------------------------------------------
[04/25/2022-11:30:01] [I] [TRT] Input filename: deploy.onnx
[04/25/2022-11:30:01] [I] [TRT] ONNX IR version: 0.0.6
[04/25/2022-11:30:01] [I] [TRT] Opset version: 9
[04/25/2022-11:30:01] [I] [TRT] Producer name: pytorch
[04/25/2022-11:30:01] [I] [TRT] Producer version: 1.8
[04/25/2022-11:30:01] [I] [TRT] Domain:
[04/25/2022-11:30:01] [I] [TRT] Model version: 0
[04/25/2022-11:30:01] [I] [TRT] Doc string:
[04/25/2022-11:30:01] [I] [TRT] ----------------------------------------------------------------
[04/25/2022-11:30:04] [I] [TRT] No importer registered for op: AliasWithName. Attempting to import as plugin.
[04/25/2022-11:30:04] [I] [TRT] Searching for plugin: AliasWithName, plugin_version: 1, plugin_namespace:
[04/25/2022-11:30:04] [E] [TRT] ModelImporter.cpp:773: While parsing node number 0 [AliasWithName -> "315"]:
[04/25/2022-11:30:04] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[04/25/2022-11:30:04] [E] [TRT] ModelImporter.cpp:775: input: "data"
output: "315"
op_type: "AliasWithName"
attribute {
  name: "name"
  s: "data"
  type: STRING
}
attribute {
  name: "is_backward"
  i: 0
  type: INT
}
domain: "org.pytorch._caffe2"

[04/25/2022-11:30:04] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[04/25/2022-11:30:04] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4870 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[04/25/2022-11:30:05] [E] Failed to parse onnx file
[04/25/2022-11:30:05] [I] Finish parsing network model
[04/25/2022-11:30:05] [E] Parsing model failed
[04/25/2022-11:30:05] [E] Failed to create engine from model.
[04/25/2022-11:30:05] [E] Engine set up failed

Any advice on how to solve this problem would be greatly appreciated.

Thanks!

I see that someone else has had the same problem -

There, you recommended modifying the TensorRT FasterRCNN (Caffe-based) sample. (However, the GitHub link you provided is currently not working.)

I also have a few other questions -

  1. Are there any other ways to run an ONNX model with the TensorRT engine?
  2. What other model formats does TensorRT support?

Do we have any updates on this, please?

Thank you!

@AastaLLL, Hey, do we have any updates on this?

Hi,

Sorry for the late update.

Based on the error log, there is a ‘string’ data type used in the ONNX model.
Unfortunately, TensorRT doesn’t support the string data type yet.

You can find the detailed support matrix below:

Is the string data type necessary for your model?
Or can you replace it with a number or a one-hot representation?
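
If you are not sure where the string comes from, a quick check (a minimal sketch using the onnx Python package; "deploy.onnx" is the file name from your log) is to scan the graph for non-standard ops and string-typed attributes:

import onnx

model = onnx.load("deploy.onnx")
for node in model.graph.node:
    # Caffe2-specific ops (e.g. AliasWithName) live in the org.pytorch._caffe2 domain
    # and have no TensorRT importer or plugin by default
    if node.domain and node.domain != "ai.onnx":
        print("non-standard op:", node.op_type, "domain:", node.domain)
    # Flag STRING-typed attributes, which TensorRT cannot handle
    for attr in node.attribute:
        if attr.type == onnx.AttributeProto.STRING:
            print(node.op_type, "has string attribute:", attr.name)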

Thanks.
