NX tensorrt

Python 3.6.9 (default, Mar 15 2022, 13:55:28)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import tensorrt
>>> tensorrt.__version__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'tensorrt' has no attribute '__version__'
How to fix this?
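The dunder underscores in `__version__` are easily eaten by forum Markdown, and some builds genuinely ship without the attribute. A defensive check (a minimal sketch; `get_version` and its fallback strings are my own names, not part of TensorRT) avoids the AttributeError entirely:

```python
import importlib


def get_version(module_name):
    """Return a module's __version__ string, or a placeholder if unavailable."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return "not installed"
    # Some builds ship without __version__, so fall back
    # to a placeholder instead of raising AttributeError.
    return getattr(mod, "__version__", "unknown")


print(get_version("tensorrt"))
```

On a working JetPack install this should print the TensorRT version string; "unknown" means the module imported but lacks the attribute, which usually points at a broken or shadowed install.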

I saw you opened the TensorRT-related topic below, so is this no longer an issue?
Pruning .onnx and convert to .engine - Jetson & Embedded Systems / Jetson Xavier NX - NVIDIA Developer Forums

When I convert .onnx to .engine on Jetson NX, the process is as below. It said:

"Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
terminate called after throwing an instance of 'std::out_of_range'
what(): Attribute not found: axes"

Please let me know how to solve this.

/usr/src/tensorrt/bin/trtexec --onnx=best.onnx
&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=best.onnx
[05/05/2022-15:50:18] [I] === Model Options ===
[05/05/2022-15:50:18] [I] Format: ONNX
[05/05/2022-15:50:18] [I] Model: best.onnx
[05/05/2022-15:50:18] [I] Output:
[05/05/2022-15:50:18] [I] === Build Options ===
[05/05/2022-15:50:18] [I] Max batch: 1
[05/05/2022-15:50:18] [I] Workspace: 16 MB
[05/05/2022-15:50:18] [I] minTiming: 1
[05/05/2022-15:50:18] [I] avgTiming: 8
[05/05/2022-15:50:18] [I] Precision: FP32
[05/05/2022-15:50:18] [I] Calibration:
[05/05/2022-15:50:18] [I] Safe mode: Disabled
[05/05/2022-15:50:18] [I] Save engine:
[05/05/2022-15:50:18] [I] Load engine:
[05/05/2022-15:50:18] [I] Builder Cache: Enabled
[05/05/2022-15:50:18] [I] NVTX verbosity: 0
[05/05/2022-15:50:18] [I] Inputs format: fp32:CHW
[05/05/2022-15:50:18] [I] Outputs format: fp32:CHW
[05/05/2022-15:50:18] [I] Input build shapes: model
[05/05/2022-15:50:18] [I] Input calibration shapes: model
[05/05/2022-15:50:18] [I] === System Options ===
[05/05/2022-15:50:18] [I] Device: 0
[05/05/2022-15:50:18] [I] DLACore:
[05/05/2022-15:50:18] [I] Plugins:
[05/05/2022-15:50:18] [I] === Inference Options ===
[05/05/2022-15:50:18] [I] Batch: 1
[05/05/2022-15:50:18] [I] Input inference shapes: model
[05/05/2022-15:50:18] [I] Iterations: 10
[05/05/2022-15:50:18] [I] Duration: 3s (+ 200ms warm up)
[05/05/2022-15:50:18] [I] Sleep time: 0ms
[05/05/2022-15:50:18] [I] Streams: 1
[05/05/2022-15:50:18] [I] ExposeDMA: Disabled
[05/05/2022-15:50:18] [I] Spin-wait: Disabled
[05/05/2022-15:50:18] [I] Multithreading: Disabled
[05/05/2022-15:50:18] [I] CUDA Graph: Disabled
[05/05/2022-15:50:18] [I] Skip inference: Disabled
[05/05/2022-15:50:18] [I] Inputs:
[05/05/2022-15:50:18] [I] === Reporting Options ===
[05/05/2022-15:50:18] [I] Verbose: Disabled
[05/05/2022-15:50:18] [I] Averages: 10 inferences
[05/05/2022-15:50:18] [I] Percentile: 99
[05/05/2022-15:50:18] [I] Dump output: Disabled
[05/05/2022-15:50:18] [I] Profile: Disabled
[05/05/2022-15:50:18] [I] Export timing to JSON file:
[05/05/2022-15:50:18] [I] Export output to JSON file:
[05/05/2022-15:50:18] [I] Export profile to JSON file:
[05/05/2022-15:50:18] [I]
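The "Attribute not found: axes" abort typically means the ONNX parser expected `axes` as a node attribute, while models exported at opset 13 or later pass `axes` to Squeeze/Unsqueeze as an input instead. A quick way to check what was exported (a sketch; `report_opset` is my own helper, and it assumes the `onnx` package is installed):

```python
import os


def report_opset(path):
    """Print the default-domain opset version of an ONNX model, if readable."""
    if not os.path.exists(path):
        print(f"{path}: file not found")
        return None
    try:
        import onnx  # pip3 install onnx
    except ImportError:
        print("onnx package not installed; run: pip3 install onnx")
        return None
    model = onnx.load(path)
    # The default ("" or "ai.onnx") domain carries the core operator set.
    for imp in model.opset_import:
        if imp.domain in ("", "ai.onnx"):
            print(f"{path}: opset {imp.version}")
            return imp.version
    return None


report_opset("best.onnx")
```

If this reports opset 13 or higher, commonly suggested workarounds are re-exporting at a lower opset (e.g. `torch.onnx.export(..., opset_version=11)`) or simplifying the model with `python3 -m onnxsim best.onnx best_sim.onnx` before running trtexec.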

For converting .onnx to .engine, please continue updating at Pruning .onnx and convert to .engine - Jetson & Embedded Systems / Jetson Xavier NX - NVIDIA Developer Forums

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.