TensorRT C++ optimization profile

Hi @spolisetty !
I resolved the previous issue and have a follow-up question. I am having trouble with the preprocessImage function: when I execute the application, it just freezes and memory allocation keeps growing. I found that it happens at this step:

// Wrap each channel plane of the input buffer as a GpuMat header (no copy).
for (size_t i = 0; i < channels; ++i)
{
    chw.emplace_back(cv::cuda::GpuMat(input_size, CV_32FC1, gpu_input + i * input_width * input_height));
}
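
For reference, here is a minimal sketch of how this wrapping usually sits inside a preprocessing routine. It assumes gpu_input is a float* device buffer holding channels * input_width * input_height elements (e.g. allocated with cudaMalloc) and a 3-channel input image; the parameter names follow the snippet above, and everything else is an assumption about the surrounding code:

#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>
#include <opencv2/cudawarping.hpp>
#include <vector>

void preprocessImage(const cv::Mat& frame, float* gpu_input,
                     size_t channels, int input_width, int input_height)
{
    // Upload the frame and do the resize/convert on the GPU.
    cv::cuda::GpuMat gpu_frame;
    gpu_frame.upload(frame);

    const cv::Size input_size(input_width, input_height);
    cv::cuda::GpuMat resized;
    cv::cuda::resize(gpu_frame, resized, input_size, 0, 0, cv::INTER_NEAREST);

    cv::cuda::GpuMat flt_image;
    resized.convertTo(flt_image, CV_32FC3, 1.f / 255.f);

    // Wrap each channel plane of the TensorRT input buffer as a GpuMat
    // header; no device memory is allocated by these constructors.
    std::vector<cv::cuda::GpuMat> chw;
    for (size_t i = 0; i < channels; ++i)
        chw.emplace_back(input_size, CV_32FC1,
                         gpu_input + i * input_width * input_height);

    // split scatters the interleaved HWC image into the planar CHW buffer.
    cv::cuda::split(flt_image, chw);
}

One thing worth checking for the freeze and memory growth: if channels is derived from a binding dimension that is -1 (a dynamic axis under an optimization profile), the conversion to size_t turns it into a huge value, so the loop keeps growing chw until memory is exhausted.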

You can find the script here; I sent the additional files to you via DM earlier.
trt_sample.cpp (8.6 KB)

Can you help with it?

@v.stadnichuk,

This looks like an error related to image preprocessing; please make sure the loop is not running infinitely.
Also, this looks out of scope for TensorRT.

Thank you.

Hi @spolisetty !
I resolved the last issue, but I have another one. The script now fails in postprocessing, at this step:

std::vector<float> cpu_output(getSizeByDim(dims) * batch_size);
(line 146)

I get this error:

terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc

Can you help with this?
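
For what it's worth, a std::bad_alloc on that line usually means getSizeByDim(dims) * batch_size came out absurdly large. With an optimization profile, engine->getBindingDimensions() reports -1 for dynamic axes, and multiplying -1 into an unsigned size yields a huge allocation. Here is a hedged sketch of a guarded helper; the name getSizeByDim comes from the snippet above, and the check itself is an assumption about the setup:

#include <cassert>
#include <cstddef>
#include <NvInfer.h>

// Product of all dimensions; asserts that no axis is still the -1
// placeholder used for dynamic shapes.
static size_t getSizeByDim(const nvinfer1::Dims& dims)
{
    size_t size = 1;
    for (int i = 0; i < dims.nbDims; ++i)
    {
        assert(dims.d[i] > 0 && "dynamic dim: query the context, not the engine");
        size *= static_cast<size_t>(dims.d[i]);
    }
    return size;
}

If the assertion fires, the usual fix is to take dims from context->getBindingDimensions(i) after the input shape has been set, rather than from engine->getBindingDimensions(i).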

@spolisetty
The error above does look like it is within the scope of TensorRT, since it concerns optimization and memory.

terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc

Do you have any clue about it? Thanks.
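
For context, with an optimization profile the usual order is: select the profile, set the concrete input shape on the execution context, and only then size host/device buffers from the context's binding dimensions. A minimal sketch follows; the single input at binding index 0 and the NCHW shape are assumptions:

#include <NvInfer.h>

// Sketch: make the dynamic input shape concrete before sizing buffers.
void setInputShape(nvinfer1::IExecutionContext* context,
                   int batch_size, int channels, int height, int width)
{
    context->setOptimizationProfile(0);  // profile 0; an async variant exists in newer TensorRT
    context->setBindingDimensions(
        0, nvinfer1::Dims4(batch_size, channels, height, width));
    // From here on, context->getBindingDimensions(i) returns concrete
    // shapes for every binding, safe to use for allocation.
}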

Could you please share an issue repro: the new inference script and the test image you're using. If possible, also share verbose error logs for better debugging.
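
In case it helps with the verbose logs: TensorRT only emits kVERBOSE messages if the ILogger passed to the runtime does not filter them out. A minimal sketch of such a logger (the class name is made up):

#include <NvInfer.h>
#include <iostream>

// Logger that forwards every message, including Severity::kVERBOSE.
class VerboseLogger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        std::cerr << msg << std::endl;
    }
};

Pass an instance of it to createInferRuntime() (and to the builder, if you rebuild the engine) to capture the detailed log.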

Thank you.

Hi @spolisetty !
I sent it to you via DM.
Thank you!