TensorRT engine creation in C++ API

I can really use some help.
To my understanding, I can’t use:

  1. “Jetson-inference”, if I’ve created my own model or used a detection model other than DetectNet.
  2. The Python API, for creating a TensorRT engine from a UFF model.

So, the only option I can think of is creating the engine using the C++ API.
Is there any code available?

Hello

Please check out GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson. It has a sample C++ DetectNet implementation for Jetson.

I’m sorry if I was not clear enough, but I meant that I don’t want to use the DetectNet model. The problem is that a DetectNet implementation is the only one available for detection models.
I built my own model in Keras with a TensorFlow backend, converted the network to a UFF file, and now I would like to create an engine from that UFF file.

Hello,

Sorry about the misunderstanding. I think you are looking for an example of creating a TRT engine from a UFF model using the C++ API?

Please take a look at ~/samples/sampleUffMNIST/sampleUffMNIST.cpp (it comes with your TRT installation).
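For reference, the core of that sample boils down to roughly the following. This is a hedged sketch against the UFF-era TensorRT C++ API (around TensorRT 5.x; the exact calls differ between versions, and the UFF workflow is deprecated in newer releases). The input/output tensor names (`input_1`, `dense_1/Softmax`), the input dimensions, and the file name `model.uff` are placeholders you must replace with the values from your own Keras graph:

```cpp
#include <iostream>
#include "NvInfer.h"
#include "NvUffParser.h"

using namespace nvinfer1;
using namespace nvuffparser;

// Minimal logger required by the TensorRT builder.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Create the builder and an (implicit-batch) network definition.
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    // Parse the UFF model. The registered input/output names must match
    // the node names in your frozen TensorFlow/Keras graph; the names and
    // dimensions below are placeholders, not part of any real model.
    IUffParser* parser = createUffParser();
    parser->registerInput("input_1", Dims3(3, 224, 224), UffInputOrder::kNCHW);
    parser->registerOutput("dense_1/Softmax");
    if (!parser->parse("model.uff", *network, DataType::kFLOAT))
    {
        std::cerr << "Failed to parse UFF file" << std::endl;
        return 1;
    }

    // Build the engine.
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28); // 256 MB scratch space
    ICudaEngine* engine = builder->buildCudaEngine(*network);
    if (!engine)
    {
        std::cerr << "Failed to build engine" << std::endl;
        return 1;
    }

    // Serialize the engine so it can be reloaded at deployment time
    // without re-parsing and re-building.
    IHostMemory* serialized = engine->serialize();
    // ... write serialized->data(), serialized->size() to a file ...

    serialized->destroy();
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

Building this requires linking against `nvinfer` and `nvparsers` from your TensorRT installation, so it cannot be compiled without the TensorRT headers and libraries present.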

For a more general sample of inference with the TensorRT C++ API, see this (some parts are outdated, but it still serves as a good reference): http://github.com/dusty-nv/jetson-inference
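Once the engine is built (or deserialized), running inference follows the same pattern in all of these samples. The fragment below is a hedged sketch, not a complete program: it assumes you already have an `ICudaEngine* engine`, host buffers `hostInput`/`hostOutput`, and byte sizes `inputBytes`/`outputBytes` matching your model, and the binding names are the same placeholders as above. Error checking is omitted for brevity:

```cpp
#include "NvInfer.h"
#include <cuda_runtime_api.h>

using namespace nvinfer1;

// Sketch: synchronous inference with a built engine (implicit-batch API).
void runInference(ICudaEngine* engine,
                  const void* hostInput, void* hostOutput,
                  size_t inputBytes, size_t outputBytes)
{
    IExecutionContext* context = engine->createExecutionContext();

    // Binding indices are looked up by the tensor names that were
    // registered at parse time ("input_1" / "dense_1/Softmax" are
    // placeholders for your model's actual node names).
    int inputIndex  = engine->getBindingIndex("input_1");
    int outputIndex = engine->getBindingIndex("dense_1/Softmax");

    // Allocate device memory for the input and output bindings.
    void* buffers[2];
    cudaMalloc(&buffers[inputIndex], inputBytes);
    cudaMalloc(&buffers[outputIndex], outputBytes);

    // Copy input to device, run synchronously, copy result back.
    cudaMemcpy(buffers[inputIndex], hostInput, inputBytes,
               cudaMemcpyHostToDevice);
    context->execute(/*batchSize=*/1, buffers);
    cudaMemcpy(hostOutput, buffers[outputIndex], outputBytes,
               cudaMemcpyDeviceToHost);

    cudaFree(buffers[inputIndex]);
    cudaFree(buffers[outputIndex]);
    context->destroy();
}
```

Like the build sketch, this needs the TensorRT and CUDA runtime libraries to compile, so treat it as a reading aid alongside the linked samples rather than drop-in code.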