Please provide the following info (check/uncheck the boxes after creating this topic):
DRIVE OS Linux 5.2.6
[x] DRIVE OS Linux 5.2.6 and DriveWorks 4.0
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
Target Operating System
[x] NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
SDK Manager Version
Host Machine Version
[x] native Ubuntu 18.04
While using the TensorRT optimisation tool, I am getting an error.
I converted a Keras TensorFlow model to an ONNX model and am trying to optimise the ONNX model.
The error and the ONNX model are attached.
Could you share the ONNX model and the script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:
1) Validate your model with the snippet below (the standard ONNX checker; the filename is a placeholder for your model path):
import onnx
filename = "yourONNXmodel"  # placeholder: path to your .onnx file
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid
2) Try running your model with trtexec command.
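A typical trtexec invocation might look like the sketch below; model.onnx is a placeholder for your model file, and the verbose output is redirected to a file so it can be attached to the thread:

# Run the ONNX model through trtexec and capture the verbose log to a file
trtexec --onnx=model.onnx --verbose > trtexec_verbose.log 2>&1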
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
How do I share the trtexec --verbose log?
I am attaching a screenshot of the error.
Still facing the same issue.
Attaching the log file for debugging.
Could you help me proceed further?
log.txt (3.5 KB)
When using runtime dimensions, you must create at least one optimization profile at build time. Please refer to the link below:
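With trtexec, an optimization profile can be supplied directly on the command line via the --minShapes/--optShapes/--maxShapes flags. In the sketch below, "input" and the 1x3x224x224 dimensions are placeholders for your model's actual input name and shapes:

# Build with a profile covering batch sizes 1..16 (placeholder input name and dims)
trtexec --onnx=model.onnx \
        --minShapes=input:1x3x224x224 \
        --optShapes=input:8x3x224x224 \
        --maxShapes=input:16x3x224x224 \
        --verbose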
Supported data format in TRT:
@AakankshaS for information.
Could you help me create optimization profiles? If possible, please share an example model together with an optimization profile.
Please refer to the following link for a better understanding:
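For reference, creating an optimization profile with the TensorRT Python API looks roughly like the sketch below. "model.onnx", the input name "input", and the shape values are placeholders; adjust them to your model's actual input name and dimensions:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# ONNX parsing requires an explicit-batch network
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:  # placeholder path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

config = builder.create_builder_config()
profile = builder.create_optimization_profile()
# min / opt / max shapes for the dynamic input; placeholders shown here
profile.set_shape("input", (1, 3, 224, 224), (8, 3, 224, 224), (16, 3, 224, 224))
config.add_optimization_profile(profile)

engine = builder.build_engine(network, config)

The min/opt/max triple tells the builder the full range of runtime shapes the engine must accept, and which shape to tune kernels for (opt).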