My device is a Jetson, and I want to install TRT inside a Docker container.
I downloaded TRT 8.2.3 and installed it from the deb packages, but after installing, when I run `/usr/src/tensorrt/bin/trtexec` it reports 8.2.0.3.
However, when I check the TRT Python API, it reports 8.2.3.0. Is this a mistake in `/usr/src/tensorrt/bin/trtexec` (8.2.3.0 vs. 8.2.0.3)?
I need TRT 8.2.3.0 in `/usr/src/tensorrt/bin/trtexec`. What should I do?
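For reference, this is how I compared the two reported versions (a minimal sketch; it assumes the python3-libnvinfer bindings are installed, and that trtexec prints its build version in its startup banner, which it does in the releases I have used):

```bash
# Version reported by the TensorRT Python bindings
python3 -c "import tensorrt; print(tensorrt.__version__)"

# Version built into the trtexec binary: the startup banner looks like
# "&&&& RUNNING TensorRT.trtexec [TensorRT vXXXX] ..."
/usr/src/tensorrt/bin/trtexec --help 2>&1 | head -n 2
```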
When installing, I followed the guide (Installation Guide :: NVIDIA Deep Learning TensorRT Documentation), but in step 4 there is only Release.gpg, so I ran `dpkg -i *deb`, because when I run `apt update && apt install tensorrt` it installs TRT 8.6.
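To keep apt from jumping to 8.6, one option may be to check which versions the configured repositories actually offer and install a specific one (a sketch; the version string below is a placeholder, copy the exact string from the policy output):

```bash
# Show every tensorrt version apt can see and which repository provides it
apt-cache policy tensorrt

# Install an explicit version instead of the newest one
# (<version-from-policy-output> is a placeholder; use the string printed above)
sudo apt-get install tensorrt=<version-from-policy-output>
```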
I referred to this link (TensorRT Container Release Notes) and downloaded the PC container. This container contains TRT 8.2.3.0.
After checking, I was surprised that the Python TRT version is 8.2.3.0 while the TRT version in `/usr/src/tensorrt/bin/trtexec` is 8.2.0.3.
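One way to narrow down where the 8.2.0.3 binary comes from is to list the installed TensorRT Debian packages inside the container; if one of the libnvinfer* packages (I believe libnvinfer-bin is the one that ships `/usr/src/tensorrt/bin/trtexec`, but please verify) is at a different version than the rest, that would explain the mismatch:

```bash
# List all installed TensorRT-related packages and their versions
dpkg -l | grep -E "tensorrt|nvinfer"

# Show which package owns the trtexec binary
dpkg -S /usr/src/tensorrt/bin/trtexec
```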
This is the image from the PC.
Please check this for me on both the PC and the Jetson. Thanks.
Hi,
We recommend you check the supported features from the link below.
These support matrices provide a look into the supported platforms, features, and hardware capabilities of the NVIDIA TensorRT 8.6.1 APIs, parsers, and layers.
You can refer to the link below for the full list of supported operators.
For unsupported operators, you need to create a custom plugin to support the operation (see the quick check after the excerpt below).
# Supported ONNX Operators
TensorRT 8.6 supports operators up to Opset 17. Latest information of ONNX operators can be found [here](https://github.com/onnx/onnx/blob/master/docs/Operators.md)
TensorRT supports the following ONNX data types: DOUBLE, FLOAT32, FLOAT16, INT8, and BOOL
> Note: There is limited support for INT32, INT64, and DOUBLE types. TensorRT will attempt to cast down INT64 to INT32 and DOUBLE down to FLOAT, clamping values to `+-INT_MAX` or `+-FLT_MAX` if necessary.
See below for the support matrix of ONNX operators in ONNX-TensorRT.
## Operator Support Matrix
| Operator | Supported | Supported Types | Restrictions |
|---------------------------|------------|-----------------|------------------------------------------------------------------------------------------------------------------------|
| Abs | Y | FP32, FP16, INT32 |
| Acos | Y | FP32, FP16 |
| Acosh | Y | FP32, FP16 |
| Add | Y | FP32, FP16, INT32 |
*(This file has been truncated; see the original for the full table.)*
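As a quick practical check, you can also point trtexec at your ONNX file; unsupported operators show up as ONNX parser errors in the log (`model.onnx` below is a placeholder path):

```bash
# Try to parse and build the model; unsupported operators are reported as
# ONNX parser errors in the log (model.onnx is a placeholder path)
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --verbose
```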
Thanks!
@junshengy
Could you help explain the mismatch between the TRT versions? Thanks.
Hi,
We recommend that you use the latest version.
Also, please check “TensorRT version” in trtexec logs.
Thank you.
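For example, one way to pull that field out of a trtexec run (`model.onnx` is a placeholder path; in our experience the line is printed near the top of the log even if the build later fails):

```bash
# Pull the "TensorRT version" line out of a trtexec run
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx 2>&1 | grep -i "TensorRT version"
```

Note also, as a possible (unconfirmed) source of the confusion: the trtexec startup banner encodes the version as a single number. For TensorRT 8.x, NV_TENSORRT_VERSION = major*1000 + minor*100 + patch, so 8.2.3 appears as `[TensorRT v8203]`, which is easy to misread as 8.2.0.3.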
@spolisetty
> Also, please check “TensorRT version” in trtexec logs.
Yes, when I checked the “TensorRT version” in the logs, it was correct. That is what confuses me.
system closed this topic on September 6, 2023, 8:41am.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.