trtexec --onnx=folded.onnx
&&&& RUNNING TensorRT.trtexec [TensorRT v8000] # trtexec --onnx=folded.onnx
[05/28/2021-10:00:07] [I] === Model Options ===
[05/28/2021-10:00:07] [I] Format: ONNX
[05/28/2021-10:00:07] [I] Model: folded.onnx
[05/28/2021-10:00:07] [I] Output:
[05/28/2021-10:00:07] [I] === Build Options ===
[05/28/2021-10:00:07] [I] Max batch: explicit
[05/28/2021-10:00:07] [I] Workspace: 16 MiB
[05/28/2021-10:00:07] [I] minTiming: 1
[05/28/2021-10:00:07] [I] avgTiming: 8
[05/28/2021-10:00:07] [I] Precision: FP32
[05/28/2021-10:00:07] [I] Calibration:
[05/28/2021-10:00:07] [I] Refit: Disabled
[05/28/2021-10:00:07] [I] Sparsity: Disabled
[05/28/2021-10:00:07] [I] Safe mode: Disabled
[05/28/2021-10:00:07] [I] Enable serialization: Disabled
[05/28/2021-10:00:07] [I] Save engine:
[05/28/2021-10:00:07] [I] Load engine:
[05/28/2021-10:00:07] [I] NVTX verbosity: 0
[05/28/2021-10:00:07] [I] Tactic sources: Using default tactic sources
[05/28/2021-10:00:07] [I] timingCacheMode: local
[05/28/2021-10:00:07] [I] timingCacheFile:
[05/28/2021-10:00:07] [I] Input(s)s format: fp32:CHW
[05/28/2021-10:00:07] [I] Output(s)s format: fp32:CHW
[05/28/2021-10:00:07] [I] Input build shapes: model
[05/28/2021-10:00:07] [I] Input calibration shapes: model
[05/28/2021-10:00:07] [I] === System Options ===
[05/28/2021-10:00:07] [I] Device: 0
[05/28/2021-10:00:07] [I] DLACore:
[05/28/2021-10:00:07] [I] Plugins:
[05/28/2021-10:00:07] [I] === Inference Options ===
[05/28/2021-10:00:07] [I] Batch: Explicit
[05/28/2021-10:00:07] [I] Input inference shapes: model
[05/28/2021-10:00:07] [I] Iterations: 10
[05/28/2021-10:00:07] [I] Duration: 3s (+ 200ms warm up)
[05/28/2021-10:00:07] [I] Sleep time: 0ms
[05/28/2021-10:00:07] [I] Streams: 1
[05/28/2021-10:00:07] [I] ExposeDMA: Disabled
[05/28/2021-10:00:07] [I] Data transfers: Enabled
[05/28/2021-10:00:07] [I] Spin-wait: Disabled
[05/28/2021-10:00:07] [I] Multithreading: Disabled
[05/28/2021-10:00:07] [I] CUDA Graph: Disabled
[05/28/2021-10:00:07] [I] Separate profiling: Disabled
[05/28/2021-10:00:07] [I] Time Deserialize: Disabled
[05/28/2021-10:00:07] [I] Time Refit: Disabled
[05/28/2021-10:00:07] [I] Skip inference: Disabled
[05/28/2021-10:00:07] [I] Inputs:
[05/28/2021-10:00:07] [I] === Reporting Options ===
[05/28/2021-10:00:07] [I] Verbose: Disabled
[05/28/2021-10:00:07] [I] Averages: 10 inferences
[05/28/2021-10:00:07] [I] Percentile: 99
[05/28/2021-10:00:07] [I] Dump refittable layers:Disabled
[05/28/2021-10:00:07] [I] Dump output: Disabled
[05/28/2021-10:00:07] [I] Profile: Disabled
[05/28/2021-10:00:07] [I] Export timing to JSON file:
[05/28/2021-10:00:07] [I] Export output to JSON file:
[05/28/2021-10:00:07] [I] Export profile to JSON file:
[05/28/2021-10:00:07] [I]
[05/28/2021-10:00:07] [I] === Device Information ===
[05/28/2021-10:00:07] [I] Selected Device: GeForce RTX 2070 Super
[05/28/2021-10:00:07] [I] Compute Capability: 7.5
[05/28/2021-10:00:07] [I] SMs: 40
[05/28/2021-10:00:07] [I] Compute Clock Rate: 1.38 GHz
[05/28/2021-10:00:07] [I] Device Global Memory: 7973 MiB
[05/28/2021-10:00:07] [I] Shared Memory per SM: 64 KiB
[05/28/2021-10:00:07] [I] Memory Bus Width: 256 bits (ECC disabled)
[05/28/2021-10:00:07] [I] Memory Clock Rate: 7.001 GHz
[05/28/2021-10:00:07] [I]
[05/28/2021-10:00:07] [I] TensorRT version: 8000
[05/28/2021-10:00:07] [I] [TRT] [MemUsageChange] Init CUDA: CPU +267, GPU +0, now: CPU 272, GPU 661 (MiB)
[05/28/2021-10:00:07] [I] [TRT] ----------------------------------------------------------------
[05/28/2021-10:00:07] [I] [TRT] Input filename: folded.onnx
[05/28/2021-10:00:07] [I] [TRT] ONNX IR version: 0.0.7
[05/28/2021-10:00:07] [I] [TRT] Opset version: 11
[05/28/2021-10:00:07] [I] [TRT] Producer name:
[05/28/2021-10:00:07] [I] [TRT] Producer version:
[05/28/2021-10:00:07] [I] [TRT] Domain:
[05/28/2021-10:00:07] [I] [TRT] Model version: 0
[05/28/2021-10:00:07] [I] [TRT] Doc string:
[05/28/2021-10:00:07] [I] [TRT] ----------------------------------------------------------------
Unsupported ONNX data type: UINT8 (2)
[05/28/2021-10:00:07] [E] [TRT] ModelImporter.cpp:744: ERROR: input_tensor:248 In function importInput:
[8] Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype) && "Failed to convert ONNX date type to TensorRT data type."
[05/28/2021-10:00:07] [E] Failed to parse onnx file
[05/28/2021-10:00:07] [E] Parsing model failed
[05/28/2021-10:00:07] [E] Engine creation failed
[05/28/2021-10:00:07] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8000] # trtexec --onnx=folded.onnx
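The failure is explained by the last few log lines: the graph's input tensor is UINT8, which the TensorRT ONNX parser cannot import. Below is a minimal sketch of one way to work around this with ONNX GraphSurgeon. It assumes the graph's inputs can simply be retyped to FP32 (for example, when the first node is already a Cast from uint8 to float), and the output file name folded_fp32.onnx is only illustrative, so treat it as a starting point rather than a drop-in fix.

import numpy as np
import onnx
import onnx_graphsurgeon as gs

# Load the constant-folded model that trtexec rejected.
graph = gs.import_onnx(onnx.load("folded.onnx"))

# Retype every graph input from UINT8 to FP32 so the TensorRT parser can import it.
# Assumption: downstream nodes tolerate float32 inputs (e.g. a leading Cast node);
# if they do not, the nodes consuming the input have to be adjusted as well.
for graph_input in graph.inputs:
    graph_input.dtype = np.float32

# Tidy up and write out the patched model under a new (example) name.
graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "folded_fp32.onnx")

After patching the input type, rerunning trtexec --onnx=folded_fp32.onnx should get past the importInput assertion, provided the rest of the graph is supported.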