Failure to run prepare_ds_trtis_model_repo.sh

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) RTX2080TI
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.0.0+cuda10.2
• NVIDIA GPU Driver Version (valid for GPU only) 440.33.01

The following error occurs when running ./prepare_ds_trtis_model_repo.sh.
How can I solve this problem?

&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --calib=models/Secondary_VehicleTypes/cal_trt.bin --deploy=models/Secondary_VehicleTypes/resnet18.prototxt --model=models/Secondary_VehicleTypes/resnet18.caffemodel --maxBatch=16 --saveEngine=trtis_model_repo/Secondary_VehicleTypes/1/resnet18.caffemodel_b16_gpu0_int8.engine --buildOnly --output=predictions/Softmax --int8
[06/03/2020-15:06:58] [I] === Model Options ===
[06/03/2020-15:06:58] [I] Format: Caffe
[06/03/2020-15:06:58] [I] Model: models/Secondary_VehicleTypes/resnet18.caffemodel
[06/03/2020-15:06:58] [I] Prototxt: models/Secondary_VehicleTypes/resnet18.prototxt
[06/03/2020-15:06:58] [I] Output: predictions/Softmax
[06/03/2020-15:06:58] [I] === Build Options ===
[06/03/2020-15:06:58] [I] Max batch: 16
[06/03/2020-15:06:58] [I] Workspace: 16 MB
[06/03/2020-15:06:58] [I] minTiming: 1
[06/03/2020-15:06:58] [I] avgTiming: 8
[06/03/2020-15:06:58] [I] Precision: INT8
[06/03/2020-15:06:58] [I] Calibration: models/Secondary_VehicleTypes/cal_trt.bin
[06/03/2020-15:06:58] [I] Safe mode: Disabled
[06/03/2020-15:06:58] [I] Save engine: trtis_model_repo/Secondary_VehicleTypes/1/resnet18.caffemodel_b16_gpu0_int8.engine
[06/03/2020-15:06:58] [I] Load engine:
[06/03/2020-15:06:58] [I] Inputs format: fp32:CHW
[06/03/2020-15:06:58] [I] Outputs format: fp32:CHW
[06/03/2020-15:06:58] [I] Input build shapes: model
[06/03/2020-15:06:58] [I] === System Options ===
[06/03/2020-15:06:58] [I] Device: 0
[06/03/2020-15:06:58] [I] DLACore:
[06/03/2020-15:06:58] [I] Plugins:
[06/03/2020-15:06:58] [I] === Inference Options ===
[06/03/2020-15:06:58] [I] Batch: 1
[06/03/2020-15:06:58] [I] Iterations: 10
[06/03/2020-15:06:58] [I] Duration: 3s (+ 200ms warm up)
[06/03/2020-15:06:58] [I] Sleep time: 0ms
[06/03/2020-15:06:58] [I] Streams: 1
[06/03/2020-15:06:58] [I] ExposeDMA: Disabled
[06/03/2020-15:06:58] [I] Spin-wait: Disabled
[06/03/2020-15:06:58] [I] Multithreading: Disabled
[06/03/2020-15:06:58] [I] CUDA Graph: Disabled
[06/03/2020-15:06:58] [I] Skip inference: Enabled
[06/03/2020-15:06:58] [I] Input inference shapes: model
[06/03/2020-15:06:58] [I] Inputs:
[06/03/2020-15:06:58] [I] === Reporting Options ===
[06/03/2020-15:06:58] [I] Verbose: Disabled
[06/03/2020-15:06:58] [I] Averages: 10 inferences
[06/03/2020-15:06:58] [I] Percentile: 99
[06/03/2020-15:06:58] [I] Dump output: Disabled
[06/03/2020-15:06:58] [I] Profile: Disabled
[06/03/2020-15:06:58] [I] Export timing to JSON file:
[06/03/2020-15:06:58] [I] Export output to JSON file:
[06/03/2020-15:06:58] [I] Export profile to JSON file:
[06/03/2020-15:06:58] [I]
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
./prepare_ds_trtis_model_repo.sh: line 69: 29236 Aborted (core dumped) /usr/src/tensorrt/bin/trtexec --calib=models/Secondary_VehicleTypes/cal_trt.bin --deploy=models/Secondary_VehicleTypes/resnet18.prototxt --model=models/Secondary_VehicleTypes/resnet18.caffemodel --maxBatch=16 --saveEngine=trtis_model_repo/Secondary_VehicleTypes/1/resnet18.caffemodel_b16_gpu0_int8.engine --buildOnly --output=predictions/Softmax --int8

Hi,
Can you lower the batch size to see what happens?
This looks like an out-of-memory issue.
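For example, you could rerun the failing trtexec command from the log with a smaller --maxBatch (and optionally a larger --workspace) to check whether the std::bad_alloc is memory related. This is only a sketch: the paths are copied from the log above, and --maxBatch=4 / --workspace=1024 are illustrative test values, not the values the script uses.

# Test build with a reduced batch size; paths copied from the failing command above.
# --maxBatch=4 and --workspace=1024 (MB) are test values only.
/usr/src/tensorrt/bin/trtexec \
    --calib=models/Secondary_VehicleTypes/cal_trt.bin \
    --deploy=models/Secondary_VehicleTypes/resnet18.prototxt \
    --model=models/Secondary_VehicleTypes/resnet18.caffemodel \
    --maxBatch=4 \
    --workspace=1024 \
    --saveEngine=trtis_model_repo/Secondary_VehicleTypes/1/resnet18.caffemodel_b4_gpu0_int8.engine \
    --buildOnly \
    --output=predictions/Softmax \
    --int8

If this builds successfully, the corresponding trtexec call in prepare_ds_trtis_model_repo.sh (line 69, per the error message) could be adjusted the same way.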

I'm closing this topic since there has been no update from you for some time, so I'm assuming the issue was resolved.
If you still need support, please open a new topic. Thanks.