TensorRT C++ API Sample Application: Single Input and Multiple Outputs (Custom ONNX Model)

Dear Team,

I have a custom model with a single input and multiple outputs.

I want to run it using the C++ API sample application.

I am looking for help with handling this custom model when the input comes from binary files.

Please find the snippet:

samplesCommon::OnnxSampleParams initializeSampleParams(const samplesCommon::Args& args)
{
	samplesCommon::OnnxSampleParams params;
	if (args.dataDirs.empty()) //!< Use default directories if user hasn't provided directory paths
	{
		params.dataDirs.push_back("/workspace/TensorRT/samples/model/bev_maps/");
		params.dataDirs.push_back("/workspace/TensorRT/samples/model/bev_maps/");
	}
	else //!< Use the data directory provided by the user
	{
		params.dataDirs = args.dataDirs;
	}
	params.onnxFileName = "model.onnx";
	params.inputTensorNames.push_back("input");
	params.outputTensorNames.push_back("out1");
	params.outputTensorNames.push_back("out2");
	params.outputTensorNames.push_back("out3");
	params.outputTensorNames.push_back("out4");
	params.outputTensorNames.push_back("out5");
	params.dlaCore = args.useDLACore;
	params.int8 = args.runInInt8;
	params.fp16 = args.runInFp16;

	return params;
}

I need your help in understanding the error faced:

Could not find 3.pgm in data directories:
/workspace/TensorRT/samples/sampleOnnxSFA3dNet/bev_maps/
/workspace/TensorRT/samples/sampleOnnxSFA3dNet/bev_maps/
&&&& FAILED

What is a PGM file?

My input data is a point cloud (processed into BEV maps) stored in a binary file, which I have to pass to the model.

Please help me with the function/API that can pass the input binary file directly to the model.
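Since the MNIST sample's PGM-reading step does not apply here, one option is to replace it with a raw-binary loader. Below is a minimal sketch, not the sample's actual code: the helper names `loadBinaryInput` and `fillInputBuffer` are hypothetical, and it assumes the file stores float32 values whose count matches the input tensor's volume, with the destination pointer obtained elsewhere (e.g. from the sample's `BufferManager` via `buffers.getHostBuffer(params.inputTensorNames[0])`).

```cpp
#include <cstring>
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Read a raw binary file of float32 values (e.g. a preprocessed BEV map)
// into a host-side vector, checking the file size against the expected
// input tensor volume.
std::vector<float> loadBinaryInput(const std::string& path, size_t expectedCount)
{
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file)
        throw std::runtime_error("Cannot open " + path);
    const std::streamsize bytes = file.tellg();
    if (static_cast<size_t>(bytes) != expectedCount * sizeof(float))
        throw std::runtime_error("Unexpected file size for " + path);
    std::vector<float> data(expectedCount);
    file.seekg(0);
    file.read(reinterpret_cast<char*>(data.data()), bytes);
    return data;
}

// Copy the host data into the input buffer that the inference code will
// upload to the device before enqueueing the network.
void fillInputBuffer(float* hostInputBuffer, const std::vector<float>& data)
{
    std::memcpy(hostInputBuffer, data.data(), data.size() * sizeof(float));
}
```

This would replace the `readPGMFile` call inside the sample's `processInput` function; everything after that point (copying buffers to the device and running inference) stays the same.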

Environment

Model Type: ONNX (translated from PyTorch)
Docker Image of TensorRT, Linux OS
TensorRT Version: v8.2.0.6
GPU Type: Quadro P2000
Nvidia Driver Version: 510.73.05
CUDA Version: 11.4
Operating System + Version: OS 18.04

Thanks and Regards,
Vyom Mishra

Hi,
Please refer to the link below for the sample guide.

Refer to the installation steps in the link in case you are missing anything.

However, the suggested approach is to use TRT NGC containers to avoid any system-dependency issues.

To run the Python samples, make sure the TRT Python packages are installed when using the NGC container:
/opt/tensorrt/python/python_setup.sh

If you are trying to run a custom model, please share your model and script with us so that we can assist you better.
Thanks!

How does your network expect the inputs (what shape, channel order, etc.)? PGM is an uncompressed image format that is used in this MNIST example to load the images. You may have to write your own preprocessing/data-loading part.
For example, in my image application, I use OpenCV to load images either from the camera or from disk, and then change the channel order to (channel, height, width) after normalizing the pixel values as expected by the network.
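The HWC-to-CHW reordering described above can be sketched without any OpenCV dependency. The function name `hwcToChw` is hypothetical; it assumes 8-bit interleaved input (e.g. the `data` pointer of a `cv::Mat`) and a simple scale to [0, 1], with mean/std normalization added the same way if the network requires it.

```cpp
#include <cstddef>
#include <vector>

// Convert an interleaved HWC uint8 image into a planar CHW float tensor,
// scaling pixel values from [0, 255] to [0, 1].
std::vector<float> hwcToChw(const unsigned char* hwc, int height, int width, int channels)
{
    std::vector<float> chw(static_cast<size_t>(channels) * height * width);
    for (int c = 0; c < channels; ++c)
        for (int h = 0; h < height; ++h)
            for (int w = 0; w < width; ++w)
                // CHW index (c, h, w) reads from HWC index (h, w, c).
                chw[(static_cast<size_t>(c) * height + h) * width + w] =
                    hwc[(static_cast<size_t>(h) * width + w) * channels + c] / 255.0f;
    return chw;
}
```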

Hi @shubhamp

I explored and found a way to pass the input to the buffer in the application.

Thanks and Regards,
Vyom Mishra