Copying input image to model input

Description

I am trying to run a model using the TensorRT C++ API; its input dimensions are [1, 3, 416, 416].
I tried copying the input image in NCHW format. The code I used for the copy is given below:

float* hostDataBuffer = static_cast<float*>(buffers.getHostBuffer(mParams.inputTensorNames[0]));
// Note: image_rgb must hold 32-bit float pixels (e.g. CV_32FC3);
// casting raw 8-bit data to float* reads garbage.
float* tempholder = reinterpret_cast<float*>(image_rgb.data);
for (int32_t i = 0, volImg = 3 * 416 * 416; i < 1; ++i)
{
    for (int32_t c = 0; c < 3; ++c)
    {
        // Interleaved HWC -> planar CHW; the channel order here must
        // match what the network was trained with (RGB vs. BGR)
        for (int32_t j = 0, volChl = 416 * 416; j < volChl; ++j)
        {
            hostDataBuffer[i * volImg + c * volChl + j] = tempholder[j * 3 + c];
        }
    }
}

But the results I am getting are entirely different from those of the Python inference code.
Can anyone suggest the right way to copy the data to the model input?
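
For reference, here is my understanding of the full preprocessing that the copy above assumes, as a minimal sketch using OpenCV (the helper name and the [0, 1] normalization are illustrative and would need to match whatever the Python code actually does):

#include <opencv2/opencv.hpp>

// Illustrative helper: resize -> BGR->RGB -> scale to [0, 1] -> HWC->CHW.
// Adjust (or remove) each step to mirror the Python preprocessing exactly.
void preprocessToNCHW(const cv::Mat& bgr, float* hostDataBuffer)
{
    cv::Mat resized, rgb, floats;
    cv::resize(bgr, resized, cv::Size(416, 416));
    cv::cvtColor(resized, rgb, cv::COLOR_BGR2RGB); // skip if the net expects BGR
    rgb.convertTo(floats, CV_32FC3, 1.0 / 255.0);  // uint8 -> float in [0, 1]

    const int volChl = 416 * 416;
    const float* src = reinterpret_cast<const float*>(floats.data);
    for (int c = 0; c < 3; ++c)
        for (int j = 0; j < volChl; ++j)
            hostDataBuffer[c * volChl + j] = src[j * 3 + c];
}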

Environment

TensorRT Version: 7.2.2
GPU Type: Nvidia MX130
Nvidia Driver Version: 456.81
CUDA Version: 11.1
CUDNN Version: 8.1.1
Operating System + Version: Windows 10
Python Version (if applicable): 3.7
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi @varunvv89,

We request you to check the developer guide on how to perform inference.
Hope this link will help you:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#perform_inference_c
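
For quick reference, the flow described in that section looks roughly like the sketch below (a sketch only, not a complete program; inputDev, outputDev, inputBytes, outputBytes, and hostOutput are placeholders for your own device/host buffers):

nvinfer1::IExecutionContext* context = engine->createExecutionContext();

cudaStream_t stream;
cudaStreamCreate(&stream);

// Host -> device copy of the preprocessed input
cudaMemcpyAsync(inputDev, hostDataBuffer, inputBytes, cudaMemcpyHostToDevice, stream);

// Run inference; bindings[] holds device pointers in binding-index order
void* bindings[] = {inputDev, outputDev};
context->enqueueV2(bindings, stream, nullptr);

// Device -> host copy of the results
cudaMemcpyAsync(hostOutput, outputDev, outputBytes, cudaMemcpyDeviceToHost, stream);
cudaStreamSynchronize(stream);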

Thank you.