I ran the unit test to verify my pyAerial installation, but it failed with the following error:
[01/07/2025-07:25:48] [I] TensorRT version: 10.6.0
[01/07/2025-07:25:48] [I] Loading standard plugins
[01/07/2025-07:25:48] [I] [TRT] [MemUsageChange] Init CUDA: CPU +1, GPU +0, now: CPU 22, GPU 309 (MiB)
[01/07/2025-07:25:49] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +0, GPU +0, now: CPU 179, GPU 309 (MiB)
[01/07/2025-07:25:49] [I] Start parsing network model.
[01/07/2025-07:25:49] [I] [TRT] ----------------------------------------------------------------
[01/07/2025-07:25:49] [I] [TRT] Input filename: /opt/nvidia/cuBB/pyaerial/models/llrnet.onnx
[01/07/2025-07:25:49] [I] [TRT] ONNX IR version: 0.0.8
[01/07/2025-07:25:49] [I] [TRT] Opset version: 15
[01/07/2025-07:25:49] [I] [TRT] Producer name: tf2onnx
[01/07/2025-07:25:49] [I] [TRT] Producer version: 1.16.1 15c810
[01/07/2025-07:25:49] [I] [TRT] Domain:
[01/07/2025-07:25:49] [I] [TRT] Model version: 0
[01/07/2025-07:25:49] [I] [TRT] Doc string:
[01/07/2025-07:25:49] [I] [TRT] ----------------------------------------------------------------
[01/07/2025-07:25:49] [I] Finished parsing network model. Parse time: 0.00130364
[01/07/2025-07:25:49] [I] Set shape of input tensor input for optimization profile 0 to: MIN=1x2 OPT=12345x2 MAX=42588x2
[01/07/2025-07:25:49] [E] Error[9]: IBuilder::buildSerializedNetwork: Error Code 9: API Usage Error (Target GPU SM 70 is not supported by this TensorRT release.)
[01/07/2025-07:25:49] [E] Engine could not be created from network
[01/07/2025-07:25:49] [E] Building engine failed
[01/07/2025-07:25:49] [E] Failed to create engine from model or file.
[01/07/2025-07:25:49] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v100600] [b26] # trtexec --onnx=/opt/nvidia/cuBB/pyaerial/models/llrnet.onnx --saveEngine=/home/aerial/models/llrnet.trt --skipInference --minShapes=input:1x2 --optShapes=input:12345x2 --maxShapes=input:42588x2 --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw
I found in the TensorRT support matrix that NVIDIA TensorRT 10.6.0 supports compute capability SM 7.5 or higher.
However, my server uses a GV100GL [Tesla V100 PCIe 32GB], which is SM 7.0.
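As a sanity check, here is a sketch of how the GPU's compute capability can be compared against TensorRT 10.x's stated minimum of SM 7.5. It assumes `nvidia-smi` is on the PATH and that the driver is recent enough to support the `compute_cap` query field; if the query fails, it falls back to the V100's SM 7.0 reported in the log above.

```shell
# Sketch: compare the installed GPU's SM compute capability with TensorRT 10.x's minimum (7.5).
# Assumes nvidia-smi is available and the driver supports the compute_cap query field.
min_cap=7.5
gpu_cap=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader 2>/dev/null | head -n1)
gpu_cap=${gpu_cap:-7.0}  # fall back to the V100's SM 7.0 from the log if the query fails
if awk -v c="$gpu_cap" -v m="$min_cap" 'BEGIN { exit !(c + 0 < m + 0) }'; then
    echo "SM $gpu_cap is below the TensorRT 10.x minimum of SM $min_cap"
fi
```

On my machine this confirms what the trtexec error already states: the device capability is below what this TensorRT release will build engines for.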
What should I do to resolve this issue?
Should I downgrade Aerial CUDA-Accelerated RAN to version 24.2 or 24.1?