What is the relation between TF-TRT, TensorRT, and tlt-converter TensorRT, and their deployment strategies?

I would like to know the basic differences between, and the level of integration among, the output files produced by TF-TRT, standalone TensorRT, and tlt-converter.

Which one is better for deployment, and in what sense?

Hi @sk.ahmed401
You should find the differences and deployment details here:



Thanks!

Thanks for the response.

  1. I understood the first post: I can optimize the model using either TF-TRT or TensorRT (with UFF conversion).
  2. TLT is a package used to train models from NVIDIA NGC with custom data.
  3. I can perform inference with the step 1 models on the Nano, with or without DeepStream.
  4. But I want to know how to deploy the TLT model output without DeepStream.
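For step 1 above, a minimal sketch of the TF-TRT path looks like the following. This assumes TensorFlow 2.x built with TensorRT support; the model directory names are placeholders, and the exact converter keyword arguments vary slightly across TensorFlow versions.

```python
# Hedged sketch: optimizing a TensorFlow SavedModel with TF-TRT.
# "my_saved_model" / "my_saved_model_trt" are placeholder paths.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="my_saved_model",
    precision_mode=trt.TrtPrecisionMode.FP16,  # FP32 is the default
)
converter.convert()  # replaces supported subgraphs with TensorRT ops
converter.save("my_saved_model_trt")  # serialized model usable via tf.saved_model.load
```

The result stays a TensorFlow SavedModel, so it is loaded and served through the normal TensorFlow runtime rather than as a standalone TensorRT engine.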

Hi @sk.ahmed401

I would like to correct the second response in the post I shared:
the pipeline should look like .pb -> .onnx -> TensorRT engine.
We recommend the ONNX conversion path, as UFF is deprecated.
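The recommended pipeline can be sketched with two commands. This assumes tf2onnx and TensorRT's trtexec tool are installed; the model file names are placeholders.

```shell
# Hedged sketch of the .pb -> .onnx -> TensorRT engine path.

# 1. TensorFlow model -> ONNX (tf2onnx also accepts --graphdef for frozen .pb files)
python -m tf2onnx.convert --saved-model my_saved_model --output model.onnx

# 2. ONNX -> serialized TensorRT engine (trtexec ships with TensorRT)
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

The resulting model.engine can then be deserialized with the TensorRT runtime API for inference, with or without DeepStream.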

For details on TLT, I suggest you post your queries on the TLT forum to get better assistance.

Thanks!

@AakankshaS, thanks for the support. I will post a question on the TLT forum for assistance.