Dear Team,
I have a custom model with a single input and multiple outputs.
I want to run it using the C++ API sample application.
I am looking for help with handling a custom model whose input is a binary file.
Please find the snippet:
samplesCommon::OnnxSampleParams initializeSampleParams(const samplesCommon::Args& args)
{
    samplesCommon::OnnxSampleParams params;
    if (args.dataDirs.empty()) //!< Use default directories if user hasn't provided directory paths
    {
        params.dataDirs.push_back("/workspace/TensorRT/samples/model/bev_maps/");
        params.dataDirs.push_back("/workspace/TensorRT/samples/model/bev_maps/");
    }
    else //!< Use the data directory provided by the user
    {
        params.dataDirs = args.dataDirs;
    }
    params.onnxFileName = "model.onnx";
    params.inputTensorNames.push_back("input");
    params.outputTensorNames.push_back("out1");
    params.outputTensorNames.push_back("out2");
    params.outputTensorNames.push_back("out3");
    params.outputTensorNames.push_back("out4");
    params.outputTensorNames.push_back("out5");
    params.dlaCore = args.useDLACore;
    params.int8 = args.runInInt8;
    params.fp16 = args.runInFp16;
    return params;
}
I need your help understanding the error I am facing:
Could not find 3.pgm in data directories:
/workspace/TensorRT/samples/sampleOnnxSFA3dNet/bev_maps/
/workspace/TensorRT/samples/sampleOnnxSFA3dNet/bev_maps/
&&&& FAILED
What is a PGM file?
My input data is a point cloud (processed into BEV maps) stored in a binary file, which I have to pass to the model.
Please help me with a function/API that can pass the input binary file to the model directly.
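For reference, here is a minimal sketch of the direction I am considering: reading the raw binary file into a host buffer before copying it into the sample's input buffer. This assumes the file contains raw float32 values in exactly the layout and element count the input tensor expects; readBinaryInput is a hypothetical helper, not part of the TensorRT samples.

```cpp
#include <cstddef>
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical helper: read a raw binary file of float32 values into a host
// buffer. Assumes the file holds exactly `expectedCount` floats laid out the
// way the network's input tensor expects (e.g. CHW).
std::vector<float> readBinaryInput(const std::string& path, std::size_t expectedCount)
{
    // Open at the end so tellg() gives us the file size for validation.
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file)
        throw std::runtime_error("Could not open " + path);

    const std::streamsize bytes = file.tellg();
    if (static_cast<std::size_t>(bytes) != expectedCount * sizeof(float))
        throw std::runtime_error("Unexpected file size for " + path);

    // Rewind and read the whole file into the vector's storage.
    std::vector<float> data(expectedCount);
    file.seekg(0);
    file.read(reinterpret_cast<char*>(data.data()), bytes);
    return data;
}
```

The idea would be to replace the PGM-reading code in the sample's processInput() with something like this, then copy the returned vector into the host buffer obtained via buffers.getHostBuffer(params.inputTensorNames[0]) before the memcpy to device. I would appreciate confirmation that this is the right approach.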
Environment
Model Type: ONNX (translated from PyTorch)
Container: Docker image of TensorRT, Linux OS
TensorRT Version: v8.2.0.6
GPU Type: Quadro P2000
NVIDIA Driver Version: 510.73.05
CUDA Version: 11.4
Operating System + Version: 18.04
Thanks and Regards,
Vyom Mishra