Using TLT-trained models in TF (or Keras) to run inference

Hi,

How does one load TLT-trained and exported models in TensorFlow or Keras to run inference?

Hi pushkar,
TLT is designed to integrate with the DeepStream video analytics SDK. Please run inference with DeepStream or https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps.

To deploy a model trained by TLT to DeepStream, you can:
1. Generate a device-specific, optimized TensorRT engine with tlt-converter, which can then be ingested by DeepStream (see the sketch after this list).
2. Integrate the model directly into the DeepStream environment using the exported model file generated by tlt-export.
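For option 1, a tlt-converter invocation looks roughly like the sketch below. The encoding key, input dimensions, and output node names all depend on the model you trained; the values here follow the DetectNet_v2 example in the TLT documentation and are placeholders.

```
# Sketch only: the key, dimensions, and node names are model-specific placeholders.
tlt-converter -k $NGC_API_KEY \
              -o output_cov/Sigmoid,output_bbox/BiasAdd \
              -d 3,384,1248 \
              -m 16 \
              -t fp16 \
              -e resnet18_detectnet_v2.engine \
              resnet18_detectnet_v2.etlt
```

For option 2 there is no separate conversion step: the Gst-nvinfer config file references the exported .etlt file directly through properties such as tlt-encoded-model and tlt-model-key.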

Thank you. The model runs perfectly on DeepStream.

I was wondering: is it possible to run these models in our own inference code, using either tf.saved_model.loader.load() or the Keras model loader?
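For context, this is roughly what I mean. The paths and input shape below are placeholders, and this pattern applies to plain TensorFlow 1.x SavedModels or Keras HDF5 files, not necessarily to the .etlt file that tlt-export produces:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Load a standard TensorFlow 1.x SavedModel (placeholder path):
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], "./saved_model_dir")
    # Inference would then be sess.run() on the graph's named tensors.

# Load a standard Keras HDF5 model (placeholder path and input shape):
model = keras.models.load_model("model.h5")
batch = np.zeros((1, 224, 224, 3), dtype=np.float32)  # dummy input batch
predictions = model.predict(batch)
```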

Sorry, but currently our workflow is only compatible with DeepStream or https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps.

Got it. Thanks!