About trtexec

Hello,

When I executed the following command using trtexec, the run passed, as shown below.
jetson7@jetson7-desktop:/usr/src/tensorrt/bin$ ./trtexec --onnx=/home/jetson7/Downloads/best.onnx
&&&& RUNNING TensorRT.trtexec # ./trtexec --onnx=/home/jetson7/Downloads/best.onnx
[01/05/2021-16:52:03] [I] === Model Options ===
[01/05/2021-16:52:03] [I] Format: ONNX
[01/05/2021-16:52:03] [I] Model: /home/jetson7/Downloads/best.onnx
[01/05/2021-16:52:03] [I] Output:
[01/05/2021-16:52:03] [I] === Build Options ===
[01/05/2021-16:52:03] [I] Max batch: 1
[01/05/2021-16:52:03] [I] Workspace: 16 MB
[01/05/2021-16:52:03] [I] minTiming: 1
[01/05/2021-16:52:03] [I] avgTiming: 8
[01/05/2021-16:52:03] [I] Precision: FP32
[01/05/2021-16:52:03] [I] Calibration:
[01/05/2021-16:52:03] [I] Safe mode: Disabled
[01/05/2021-16:52:03] [I] Save engine:
[01/05/2021-16:52:03] [I] Load engine:
[01/05/2021-16:52:03] [I] Builder Cache: Enabled
[01/05/2021-16:52:03] [I] NVTX verbosity: 0
[01/05/2021-16:52:03] [I] Inputs format: fp32:CHW
[01/05/2021-16:52:03] [I] Outputs format: fp32:CHW
[01/05/2021-16:52:03] [I] Input build shapes: model
[01/05/2021-16:52:03] [I] Input calibration shapes: model
[01/05/2021-16:52:03] [I] === System Options ===
[01/05/2021-16:52:03] [I] Device: 0
[01/05/2021-16:52:03] [I] DLACore:
[01/05/2021-16:52:03] [I] Plugins:
[01/05/2021-16:52:03] [I] === Inference Options ===
[01/05/2021-16:52:03] [I] Batch: 1
[01/05/2021-16:52:03] [I] Input inference shapes: model
[01/05/2021-16:52:03] [I] Iterations: 10
[01/05/2021-16:52:03] [I] Duration: 3s (+ 200ms warm up)
[01/05/2021-16:52:03] [I] Sleep time: 0ms
[01/05/2021-16:52:03] [I] Streams: 1
[01/05/2021-16:52:03] [I] ExposeDMA: Disabled
[01/05/2021-16:52:03] [I] Spin-wait: Disabled
[01/05/2021-16:52:03] [I] Multithreading: Disabled
[01/05/2021-16:52:03] [I] CUDA Graph: Disabled
[01/05/2021-16:52:03] [I] Skip inference: Disabled
[01/05/2021-16:52:03] [I] Inputs:
[01/05/2021-16:52:03] [I] === Reporting Options ===
[01/05/2021-16:52:03] [I] Verbose: Disabled
[01/05/2021-16:52:03] [I] Averages: 10 inferences
[01/05/2021-16:52:03] [I] Percentile: 99
[01/05/2021-16:52:03] [I] Dump output: Disabled
[01/05/2021-16:52:03] [I] Profile: Disabled
[01/05/2021-16:52:03] [I] Export timing to JSON file:
[01/05/2021-16:52:03] [I] Export output to JSON file:
[01/05/2021-16:52:03] [I] Export profile to JSON file:
[01/05/2021-16:52:03] [I]

Input filename: /home/jetson7/Downloads/best.onnx
ONNX IR version: 0.0.6
Opset version: 12
Producer name: pytorch
Producer version: 1.7
Domain:
Model version: 0
Doc string:

[01/05/2021-16:52:10] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/05/2021-16:53:31] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[01/05/2021-16:57:01] [I] [TRT] Detected 1 inputs and 3 output network tensors.
[01/05/2021-16:57:24] [I] Starting inference threads
[01/05/2021-16:57:35] [I] Warmup completed 0 queries over 200 ms
[01/05/2021-16:57:35] [I] Timing trace has 10 queries over 9.65112 s
[01/05/2021-16:57:35] [I] Trace averages of 10 runs:
[01/05/2021-16:57:35] [I] Average on 10 runs - GPU latency: 927.64 ms - Host latency: 930.108 ms (end to end 930.188 ms, enqueue 215.502 ms)
[01/05/2021-16:57:35] [I] Host Latency
[01/05/2021-16:57:35] [I] min: 707.114 ms (end to end 707.119 ms)
[01/05/2021-16:57:35] [I] max: 2812.72 ms (end to end 2812.74 ms)
[01/05/2021-16:57:35] [I] mean: 930.108 ms (end to end 930.188 ms)
[01/05/2021-16:57:35] [I] median: 721.261 ms (end to end 721.272 ms)
[01/05/2021-16:57:35] [I] percentile: 2812.72 ms at 99% (end to end 2812.74 ms at 99%)
[01/05/2021-16:57:35] [I] throughput: 1.03615 qps
[01/05/2021-16:57:35] [I] walltime: 9.65112 s
[01/05/2021-16:57:35] [I] Enqueue Time
[01/05/2021-16:57:35] [I] min: 5.23096 ms
[01/05/2021-16:57:35] [I] max: 2065.17 ms
[01/05/2021-16:57:35] [I] median: 9.43896 ms
[01/05/2021-16:57:35] [I] GPU Compute
[01/05/2021-16:57:35] [I] min: 705.535 ms
[01/05/2021-16:57:35] [I] max: 2806.93 ms
[01/05/2021-16:57:35] [I] mean: 927.64 ms
[01/05/2021-16:57:35] [I] median: 718.552 ms
[01/05/2021-16:57:35] [I] percentile: 2806.93 ms at 99%
[01/05/2021-16:57:35] [I] total compute time: 9.2764 s
&&&& PASSED TensorRT.trtexec # ./trtexec --onnx=/home/jetson7/Downloads/best.onnx

However, when I ran the following command to save the engine, it failed.

What’s wrong?
How do I fix it?

jetson7@jetson7-desktop:/usr/src/tensorrt/bin$ ./trtexec --onnx=/home/jetson7/Downloads/best.onnx --saveEngine=best
&&&& RUNNING TensorRT.trtexec # ./trtexec --onnx=/home/jetson7/Downloads/best.onnx --saveEngine=best
[01/05/2021-16:46:27] [I] === Model Options ===
[01/05/2021-16:46:27] [I] Format: ONNX
[01/05/2021-16:46:27] [I] Model: /home/jetson7/Downloads/best.onnx
[01/05/2021-16:46:27] [I] Output:
[01/05/2021-16:46:27] [I] === Build Options ===
[01/05/2021-16:46:27] [I] Max batch: 1
[01/05/2021-16:46:27] [I] Workspace: 16 MB
[01/05/2021-16:46:27] [I] minTiming: 1
[01/05/2021-16:46:27] [I] avgTiming: 8
[01/05/2021-16:46:27] [I] Precision: FP32
[01/05/2021-16:46:27] [I] Calibration:
[01/05/2021-16:46:27] [I] Safe mode: Disabled
[01/05/2021-16:46:27] [I] Save engine: best
[01/05/2021-16:46:27] [I] Load engine:
[01/05/2021-16:46:27] [I] Builder Cache: Enabled
[01/05/2021-16:46:27] [I] NVTX verbosity: 0
[01/05/2021-16:46:27] [I] Inputs format: fp32:CHW
[01/05/2021-16:46:27] [I] Outputs format: fp32:CHW
[01/05/2021-16:46:27] [I] Input build shapes: model
[01/05/2021-16:46:27] [I] Input calibration shapes: model
[01/05/2021-16:46:27] [I] === System Options ===
[01/05/2021-16:46:27] [I] Device: 0
[01/05/2021-16:46:27] [I] DLACore:
[01/05/2021-16:46:27] [I] Plugins:
[01/05/2021-16:46:27] [I] === Inference Options ===
[01/05/2021-16:46:27] [I] Batch: 1
[01/05/2021-16:46:27] [I] Input inference shapes: model
[01/05/2021-16:46:27] [I] Iterations: 10
[01/05/2021-16:46:27] [I] Duration: 3s (+ 200ms warm up)
[01/05/2021-16:46:27] [I] Sleep time: 0ms
[01/05/2021-16:46:27] [I] Streams: 1
[01/05/2021-16:46:27] [I] ExposeDMA: Disabled
[01/05/2021-16:46:27] [I] Spin-wait: Disabled
[01/05/2021-16:46:27] [I] Multithreading: Disabled
[01/05/2021-16:46:27] [I] CUDA Graph: Disabled
[01/05/2021-16:46:27] [I] Skip inference: Disabled
[01/05/2021-16:46:27] [I] Inputs:
[01/05/2021-16:46:27] [I] === Reporting Options ===
[01/05/2021-16:46:27] [I] Verbose: Disabled
[01/05/2021-16:46:27] [I] Averages: 10 inferences
[01/05/2021-16:46:27] [I] Percentile: 99
[01/05/2021-16:46:27] [I] Dump output: Disabled
[01/05/2021-16:46:27] [I] Profile: Disabled
[01/05/2021-16:46:27] [I] Export timing to JSON file:
[01/05/2021-16:46:27] [I] Export output to JSON file:
[01/05/2021-16:46:27] [I] Export profile to JSON file:
[01/05/2021-16:46:27] [I]

Input filename: /home/jetson7/Downloads/best.onnx
ONNX IR version: 0.0.6
Opset version: 12
Producer name: pytorch
Producer version: 1.7
Domain:
Model version: 0
Doc string:

[01/05/2021-16:46:34] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/05/2021-16:47:54] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[01/05/2021-16:51:32] [I] [TRT] Detected 1 inputs and 3 output network tensors.
[01/05/2021-16:51:39] [E] Cannot open engine file: best
[01/05/2021-16:51:39] [E] Saving engine to file failed
[01/05/2021-16:51:39] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=/home/jetson7/Downloads/best.onnx --saveEngine=best

Thank you.

Hi,

The error indicates that trtexec could not write the engine file.
This is because trtexec does not have write permission in the current folder, /usr/src/tensorrt/bin.

Please save the engine to a folder your user can write to, and try again.
For example:

$  ./trtexec --onnx=/home/jetson7/Downloads/best.onnx --saveEngine=/home/jetson7/best
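If you want to confirm the cause first, you can check whether the target directory is writable before running trtexec. A minimal sketch (OUT_DIR and ENGINE_PATH are example names, not your exact setup):

```shell
# Check write permission on the output directory before pointing
# --saveEngine at it. Adjust OUT_DIR to wherever you want the engine.
OUT_DIR="$HOME"
ENGINE_PATH="$OUT_DIR/best.engine"

if [ -w "$OUT_DIR" ]; then
    echo "writable: engine can be saved to $ENGINE_PATH"
else
    echo "not writable: $OUT_DIR -- choose another directory" >&2
fi
```

If the directory is not writable, either pick a path under your home directory (as in the command above) or run trtexec with sufficient privileges.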

Thanks.
