Error converting a model with trtexec in the DeepStream container: Cuda failure: forward compatibility was attempted on non supported HW — Aborted (core dumped)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
RTX 2060
• DeepStream Version
Deepstream Container 6.2

• JetPack Version (valid for Jetson only)
• TensorRT Version
8.2
• NVIDIA GPU Driver Version (valid for GPU only)
In container:

nvidia-smi
Wed Jun 28 09:56:50 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.182.03   Driver Version: 470.182.03   CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:05:00.0  On |                  N/A |
|  0%   53C    P5    20W / 160W |   1607MiB /  5932MiB |     19%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

trtexec --onnx=bird_1810_sf.onnx  --saveEngine=bird_1810_sf.trt --workspace=1024 --best
&&&& RUNNING TensorRT.trtexec [TensorRT v8502] # trtexec --onnx=bird_1810_sf.onnx --saveEngine=bird_1810_sf.trt --workspace=1024 --best
[06/28/2023-09:55:40] [W] --workspace flag has been deprecated by --memPoolSize flag.
[06/28/2023-09:55:40] [I] === Model Options ===
[06/28/2023-09:55:40] [I] Format: ONNX
[06/28/2023-09:55:40] [I] Model: bird_1810_sf.onnx
[06/28/2023-09:55:40] [I] Output:
[06/28/2023-09:55:40] [I] === Build Options ===
[06/28/2023-09:55:40] [I] Max batch: explicit batch
[06/28/2023-09:55:40] [I] Memory Pools: workspace: 1024 MiB, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[06/28/2023-09:55:40] [I] minTiming: 1
[06/28/2023-09:55:40] [I] avgTiming: 8
[06/28/2023-09:55:40] [I] Precision: FP32+FP16+INT8
[06/28/2023-09:55:40] [I] LayerPrecisions: 
[06/28/2023-09:55:40] [I] Calibration: Dynamic
[06/28/2023-09:55:40] [I] Refit: Disabled
[06/28/2023-09:55:40] [I] Sparsity: Disabled
[06/28/2023-09:55:40] [I] Safe mode: Disabled
[06/28/2023-09:55:40] [I] DirectIO mode: Disabled
[06/28/2023-09:55:40] [I] Restricted mode: Disabled
[06/28/2023-09:55:40] [I] Build only: Disabled
[06/28/2023-09:55:40] [I] Save engine: bird_1810_sf.trt
[06/28/2023-09:55:40] [I] Load engine: 
[06/28/2023-09:55:40] [I] Profiling verbosity: 0
[06/28/2023-09:55:40] [I] Tactic sources: Using default tactic sources
[06/28/2023-09:55:40] [I] timingCacheMode: local
[06/28/2023-09:55:40] [I] timingCacheFile: 
[06/28/2023-09:55:40] [I] Heuristic: Disabled
[06/28/2023-09:55:40] [I] Preview Features: Use default preview flags.
[06/28/2023-09:55:40] [I] Input(s)s format: fp32:CHW
[06/28/2023-09:55:40] [I] Output(s)s format: fp32:CHW
[06/28/2023-09:55:40] [I] Input build shapes: model
[06/28/2023-09:55:40] [I] Input calibration shapes: model
[06/28/2023-09:55:40] [I] === System Options ===
[06/28/2023-09:55:40] [I] Device: 0
[06/28/2023-09:55:40] [I] DLACore: 
[06/28/2023-09:55:40] [I] Plugins:
[06/28/2023-09:55:40] [I] === Inference Options ===
[06/28/2023-09:55:40] [I] Batch: Explicit
[06/28/2023-09:55:40] [I] Input inference shapes: model
[06/28/2023-09:55:40] [I] Iterations: 10
[06/28/2023-09:55:40] [I] Duration: 3s (+ 200ms warm up)
[06/28/2023-09:55:40] [I] Sleep time: 0ms
[06/28/2023-09:55:40] [I] Idle time: 0ms
[06/28/2023-09:55:40] [I] Streams: 1
[06/28/2023-09:55:40] [I] ExposeDMA: Disabled
[06/28/2023-09:55:40] [I] Data transfers: Enabled
[06/28/2023-09:55:40] [I] Spin-wait: Disabled
[06/28/2023-09:55:40] [I] Multithreading: Disabled
[06/28/2023-09:55:40] [I] CUDA Graph: Disabled
[06/28/2023-09:55:40] [I] Separate profiling: Disabled
[06/28/2023-09:55:40] [I] Time Deserialize: Disabled
[06/28/2023-09:55:40] [I] Time Refit: Disabled
[06/28/2023-09:55:40] [I] NVTX verbosity: 0
[06/28/2023-09:55:40] [I] Persistent Cache Ratio: 0
[06/28/2023-09:55:40] [I] Inputs:
[06/28/2023-09:55:40] [I] === Reporting Options ===
[06/28/2023-09:55:40] [I] Verbose: Disabled
[06/28/2023-09:55:40] [I] Averages: 10 inferences
[06/28/2023-09:55:40] [I] Percentiles: 90,95,99
[06/28/2023-09:55:40] [I] Dump refittable layers:Disabled
[06/28/2023-09:55:40] [I] Dump output: Disabled
[06/28/2023-09:55:40] [I] Profile: Disabled
[06/28/2023-09:55:40] [I] Export timing to JSON file: 
[06/28/2023-09:55:40] [I] Export output to JSON file: 
[06/28/2023-09:55:40] [I] Export profile to JSON file: 
[06/28/2023-09:55:40] [I] 
Cuda failure: forward compatibility was attempted on non supported HW
Aborted (core dumped)
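The "forward compatibility was attempted on non supported HW" failure points at a driver/CUDA mismatch: the DS 6.2 container carries a CUDA 11.8 user-mode stack, while the host runs driver 470, and CUDA's forward-compatibility mode (which could bridge that gap) is only supported on datacenter GPUs, not on a GeForce RTX 2060. A minimal sketch of the pre-flight check, assuming the 520.61.05 minimum Linux driver for CUDA 11.8 from the CUDA release notes:

```shell
#!/bin/sh
# Sketch: verify the host driver meets the minimum required by the
# container's CUDA toolkit. On GeForce GPUs (like the RTX 2060 here),
# CUDA forward compatibility is not available, so the host driver itself
# must be new enough. 520.61.05 is the minimum Linux driver for CUDA 11.8.
min_required="520.61.05"
host_driver="470.182.03"   # from the nvidia-smi output above; query live with:
                           # nvidia-smi --query-gpu=driver_version --format=csv,noheader

# driver_ok NEW MIN -> success if NEW >= MIN (version-aware comparison)
driver_ok() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

if driver_ok "$host_driver" "$min_required"; then
    echo "host driver $host_driver is new enough for the container's CUDA"
else
    echo "host driver $host_driver is too old: update the host driver"
fi
```

With driver 470.182.03 this takes the "too old" branch, which matches the fix eventually applied in this thread: updating the GPU driver on the host.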

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

The TensorRT version in DS 6.2 should be TRT 8.5.2.2; please refer to this table. Did you change any software version?

Sorry, my mistake about the TRT version. But that was not the cause. When I updated the GPU driver version on the host, I got the right result. Thanks!
