TensorRT Engine Inference

Hello everyone!

I am looking for some advice regarding TensorRT inference. I want to run a TensorFlow model on the Drive PX2. The steps for converting a TensorFlow model into a TensorRT engine are clear to me, but I am not quite sure what to do next.
How can I initialize this engine and feed new data (e.g., a camera stream) to it on the Drive PX2?

Thanks in advance!

Dear raphaDev,
As TensorFlow is not currently supported by DriveWorks, you can use EGLStream with NvMedia as the producer and CUDA as the consumer. Please refer to https://docs.nvidia.com/drive/nvvib_docs/index.html#page/NVIDIA%20DRIVE%20Linux%20SDK%20Development%20Guide%2FMultimedia%2Fnvmedia_nvm_eglstream.html%23. Once you have the data in CUDA buffers, you can feed it to the network as shown in any TensorRT sample.
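To illustrate the last step, here is a minimal sketch of deserializing a pre-built engine and running inference from a CUDA device buffer, following the pattern used in the TensorRT C++ samples. The file name `model.engine` and the binding sizes are placeholder assumptions; in real code, query the engine for binding dimensions, and have the EGLStream CUDA consumer write the camera frame into the input buffer.

```cpp
#include <cstdio>
#include <fstream>
#include <vector>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
} gLogger;

int main() {
    // 1. Load the serialized engine produced by the TensorFlow -> TensorRT conversion.
    //    "model.engine" is a placeholder path.
    std::ifstream file("model.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // 2. Allocate device buffers for each binding. The sizes below are
    //    assumptions; use engine->getBindingDimensions() in real code.
    void* buffers[2];
    cudaMalloc(&buffers[0], 3 * 224 * 224 * sizeof(float)); // input image
    cudaMalloc(&buffers[1], 1000 * sizeof(float));          // output tensor

    // 3. Per frame: copy (or stream via the EGLStream CUDA consumer) the camera
    //    image into buffers[0], then run inference asynchronously.
    context->enqueue(1 /*batch*/, buffers, 0 /*stream*/, nullptr);
    cudaStreamSynchronize(0);

    // ... cudaMemcpy buffers[1] back to host and post-process ...

    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```

This cannot run without the TensorRT and CUDA libraries on the target, so treat it as a starting point to adapt from the shipped TensorRT samples rather than drop-in code.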

Dear SivaRamaKrishna,

Thank you for the answer! I will look into it and try to implement it this way.