How to run a TensorRT engine file for inference in C++ (without CUDA)?

I have trained my model with DetectNet_v2 and got a TensorRT engine file. Now I want to use this .trt file without DeepStream and CUDA.
How can I run the TensorRT engine for inference in C++ (without CUDA), I mean in plain C++?

Reference: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
And for the detectnet_v2 network, there is postprocessing code exposed in C++ in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer_customparser/nvdsinfer_custombboxparser.cpp
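For reference, the DeepStream parser above roughly does the following: detectnet_v2 emits a per-class coverage map (H x W confidence scores) and a bbox map (4 x H x W edge offsets relative to each grid-cell centre), and the postprocess thresholds the coverage and rescales the offsets by the model stride. Below is a minimal, self-contained sketch of that idea; the function name, the stride of 16, and the 0.2 threshold are assumptions for illustration, not the exact values from nvdsinfer_custombboxparser.cpp:

```cpp
#include <cstddef>
#include <vector>

struct Box { float x1, y1, x2, y2, score; };

// Hypothetical decoder sketch for detectnet_v2-style outputs:
// coverage is H*W scores for one class, bbox is 4 planes (x1,y1,x2,y2
// offsets from each grid-cell centre), each H*W floats.
std::vector<Box> decodeDetectNetV2(const std::vector<float>& coverage,
                                   const std::vector<float>& bbox,
                                   int gridW, int gridH,
                                   float stride = 16.0f,     // model stride (assumed)
                                   float threshold = 0.2f) { // coverage cut-off (assumed)
    std::vector<Box> out;
    const std::size_t plane = static_cast<std::size_t>(gridW) * gridH;
    for (int y = 0; y < gridH; ++y) {
        for (int x = 0; x < gridW; ++x) {
            const std::size_t i = static_cast<std::size_t>(y) * gridW + x;
            const float score = coverage[i];
            if (score < threshold) continue;
            // Grid-cell centre in input-image pixel coordinates.
            const float cx = (x + 0.5f) * stride;
            const float cy = (y + 0.5f) * stride;
            // The four bbox planes hold offsets from the centre to each edge.
            out.push_back({cx - bbox[0 * plane + i],
                           cy - bbox[1 * plane + i],
                           cx + bbox[2 * plane + i],
                           cy + bbox[3 * plane + i],
                           score});
        }
    }
    return out;
}
```

Such a decoder runs on plain host memory, so once the raw output tensors have been copied back from the GPU, no further CUDA code is involved in this step.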

Thanks @Morganh, but can you share simple steps to run a TRT file in C++ without CUDA?

This is a topic for TensorRT. You can find some useful examples in the TensorRT sample support guide:
https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html#c_samples_section

https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html#sample_ssd
GitHub: sampleSSD/README.md
→ After the engine is built, the next steps are to serialize the engine and run inference with the deserialized engine. For more information about these steps, see Serializing A Model In C++.
→ After deserializing the engine, you can perform inference. To perform inference, see Performing Inference In C++.
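The deserialize-then-infer flow from the samples looks roughly like the sketch below. One caveat: TensorRT itself depends on the CUDA runtime, so "without CUDA" can only mean without writing your own CUDA kernels, not without the CUDA libraries installed. The file name "model.trt" is an assumption; this is an outline under TensorRT's C++ API, not a complete program:

```cpp
#include "NvInfer.h"            // TensorRT C++ API
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// TensorRT requires a logger implementation.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << "\n";
    }
};

int main() {
    // 1. Read the serialized engine file from disk
    //    ("model.trt" is a placeholder path).
    std::ifstream file("model.trt", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    // 2. Deserialize it into an engine and create an execution context.
    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // 3. Allocate device buffers for each binding (input and outputs) with
    //    cudaMalloc, copy the preprocessed input in with cudaMemcpy, then run
    //    synchronous inference and copy the outputs back for postprocessing:
    //
    //    context->executeV2(bindings);
    //
    // (Buffer setup omitted here; see "Performing Inference In C++" in the
    // TensorRT Developer Guide for the full version.)
    return 0;
}
```

The sampleSSD code linked above contains the complete buffer-management and postprocessing logic that this sketch elides.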

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.