I tried to run the inference benchmark for UNet from "Deep Learning Inference Benchmarking Instructions" and am getting an error. The output I got is below:
sudo ./trtexec --uff=~/output_graph.uff --uffInput=input_1,1,512,512 --output=conv2d_19/Sigmoid --fp16
&&&& RUNNING TensorRT.trtexec # ./trtexec --uff=~/output_graph.uff --uffInput=input_1,1,512,512 --output=conv2d_19/Sigmoid --fp16
[04/22/2021-18:46:05] [I] === Model Options ===
[04/22/2021-18:46:05] [I] Format: UFF
[04/22/2021-18:46:05] [I] Model: ~/output_graph.uff
[04/22/2021-18:46:05] [I] Uff Inputs Layout: NCHW
[04/22/2021-18:46:05] [I] Input: input_1,1,512,512
[04/22/2021-18:46:05] [I] Output: conv2d_19/Sigmoid
[04/22/2021-18:46:05] [I] === Build Options ===
[04/22/2021-18:46:05] [I] Max batch: 1
[04/22/2021-18:46:05] [I] Workspace: 16 MB
[04/22/2021-18:46:05] [I] minTiming: 1
[04/22/2021-18:46:05] [I] avgTiming: 8
[04/22/2021-18:46:05] [I] Precision: FP32+FP16
[04/22/2021-18:46:05] [I] Calibration:
[04/22/2021-18:46:05] [I] Safe mode: Disabled
[04/22/2021-18:46:05] [I] Save engine:
[04/22/2021-18:46:05] [I] Load engine:
[04/22/2021-18:46:05] [I] Builder Cache: Enabled
[04/22/2021-18:46:05] [I] NVTX verbosity: 0
[04/22/2021-18:46:05] [I] Inputs format: fp32:CHW
[04/22/2021-18:46:05] [I] Outputs format: fp32:CHW
[04/22/2021-18:46:05] [I] Input build shapes: model
[04/22/2021-18:46:05] [I] Input calibration shapes: model
[04/22/2021-18:46:05] [I] === System Options ===
[04/22/2021-18:46:05] [I] Device: 0
[04/22/2021-18:46:05] [I] DLACore:
[04/22/2021-18:46:05] [I] Plugins:
[04/22/2021-18:46:05] [I] === Inference Options ===
[04/22/2021-18:46:05] [I] Batch: 1
[04/22/2021-18:46:05] [I] Input inference shapes: model
[04/22/2021-18:46:05] [I] Iterations: 10
[04/22/2021-18:46:05] [I] Duration: 3s (+ 200ms warm up)
[04/22/2021-18:46:05] [I] Sleep time: 0ms
[04/22/2021-18:46:05] [I] Streams: 1
[04/22/2021-18:46:05] [I] ExposeDMA: Disabled
[04/22/2021-18:46:05] [I] Spin-wait: Disabled
[04/22/2021-18:46:05] [I] Multithreading: Disabled
[04/22/2021-18:46:05] [I] CUDA Graph: Disabled
[04/22/2021-18:46:05] [I] Skip inference: Disabled
[04/22/2021-18:46:05] [I] Inputs:
[04/22/2021-18:46:05] [I] === Reporting Options ===
[04/22/2021-18:46:05] [I] Verbose: Disabled
[04/22/2021-18:46:05] [I] Averages: 10 inferences
[04/22/2021-18:46:05] [I] Percentile: 99
[04/22/2021-18:46:05] [I] Dump output: Disabled
[04/22/2021-18:46:05] [I] Profile: Disabled
[04/22/2021-18:46:05] [I] Export timing to JSON file:
[04/22/2021-18:46:05] [I] Export output to JSON file:
[04/22/2021-18:46:05] [I] Export profile to JSON file:
[04/22/2021-18:46:05] [I]
[04/22/2021-18:46:07] [E] [TRT] UffParser: Could not open ~/output_graph.uff
[04/22/2021-18:46:07] [E] Failed to parse uff file
[04/22/2021-18:46:07] [E] Parsing model failed
[04/22/2021-18:46:07] [E] Engine creation failed
[04/22/2021-18:46:07] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --uff=~/output_graph.uff --uffInput=input_1,1,512,512 --output=conv2d_19/Sigmoid --fp16
"
’
’
Can anybody help me figure this out? TIA.
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks
Hi,
[E] [TRT] UffParser: Could not open ~/output_graph.uff
Based on the log, the UFF parser received the literal path ~/output_graph.uff. The shell only tilde-expands ~ at the start of a word (or in a true variable assignment), not after the = of an option like --uff=~/..., so trtexec looked for a file literally named "~/output_graph.uff" and could not open it. Could you check that the uff file exists and pass its absolute path (or $HOME/output_graph.uff) instead?
Thanks.
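The tilde-expansion behavior described above can be verified with a quick shell check; this is just a sketch reusing the path from the original command, no trtexec needed:

```shell
#!/bin/sh
# Why trtexec saw a literal "~": the shell tilde-expands "~" at the
# start of a word, but NOT after the "=" of an ordinary option word,
# because "--uff" is not a valid variable name. Compare:
unexpanded=$(printf '%s' --uff=~/output_graph.uff)  # tilde kept as-is
expanded=$(printf '%s' ~/output_graph.uff)          # tilde becomes $HOME
echo "$unexpanded"
echo "$expanded"

# Fix for the original command: let the shell expand $HOME (or type
# the absolute path) so trtexec gets a real filesystem path, e.g.:
#   sudo ./trtexec --uff=$HOME/output_graph.uff \
#        --uffInput=input_1,1,512,512 \
#        --output=conv2d_19/Sigmoid --fp16
```

Note that $HOME is expanded by your own shell before sudo runs, so it still points at your home directory, not root's.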