Using TLT-trained models in TF (or Keras) to run inference

Hi,

How does one load TLT-trained and exported models in TensorFlow or Keras to run inference?

Hi pushkar,
TLT has been designed to integrate with DeepStream video analytics. Please run inference with DeepStream, or use the GitHub samples at NVIDIA-AI-IOT/deepstream_4.x_apps (DeepStream 4.x samples for deploying TLT-trained models).

To deploy a model trained by TLT to DeepStream, you can:
1. Generate a device-specific, optimized TensorRT engine using tlt-converter, which DeepStream can then ingest (see the example below), or
2. Integrate the model directly into the DeepStream environment using the exported model file generated by tlt-export.
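
As an illustration of option 1, here is a minimal sketch that invokes tlt-converter from Python. The encryption key, input dimensions, output node names, and file paths are placeholders that depend on your network; the flags shown are from the TLT user guide, and the output nodes are the DetectNet_v2 defaults.

```python
import subprocess

# Placeholder values -- substitute your own key, model, and dims.
NGC_KEY = "<your-ngc-api-key>"          # key used when the model was exported
ETLT_MODEL = "resnet18_detector.etlt"   # model produced by tlt-export
ENGINE_PATH = "resnet18_detector.engine"

# Build a device-specific TensorRT engine that DeepStream can ingest.
# -d is the input dims (C,H,W); -o lists the output node names, which
# differ per network (these are the DetectNet_v2 defaults from the docs).
subprocess.run(
    [
        "tlt-converter",
        "-k", NGC_KEY,
        "-d", "3,544,960",
        "-o", "output_cov/Sigmoid,output_bbox/BiasAdd",
        "-e", ENGINE_PATH,
        ETLT_MODEL,
    ],
    check=True,
)
```

Note that tlt-converter must be run on the target device, since the generated engine is specific to the GPU it was built on.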

Thank you. The model runs perfectly on DeepStream.

I was wondering whether it is possible to run these models in our own inference code using either tf.saved_model.loader.load() or the Keras model loader?

Sorry, but currently our workflow is only compatible with DeepStream or the NVIDIA-AI-IOT/deepstream_4.x_apps GitHub samples for deploying TLT-trained models.

Got it. Thanks!

Hello there,

Any updates so far? We absolutely need to work with the trained model outside of your pipeline. Thank you

By default, there are two inference methods; a third is possible but not provided by TLT.

  1. Run tlt-infer inside the Docker container. tlt-infer can run inference against a .tlt file, and some detection networks also support running inference against the corresponding TensorRT engine.
  2. Copy the .etlt file into DeepStream and run inference with DeepStream, or generate a TensorRT engine with tlt-converter and then run inference with DeepStream against that engine.
  3. Run inference with a standalone script. TLT does not provide this; end users should write their own code (a sketch follows this list).
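
For option 3, a minimal standalone sketch might look like the following. This is not TLT code: it assumes you have already built a TensorRT engine with tlt-converter, uses the implicit-batch TensorRT Python API of that era, and feeds a random array in place of a real image. Your preprocessing and output parsing must match the network you trained.

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda

ENGINE_PATH = "resnet18_detector.engine"  # placeholder: engine from tlt-converter

# Deserialize the engine built by tlt-converter.
logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host and device buffers for every binding.
stream = cuda.Stream()
host_bufs, dev_bufs, bindings = {}, {}, []
for name in engine:
    size = trt.volume(engine.get_binding_shape(name)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(name))
    host_bufs[name] = cuda.pagelocked_empty(size, dtype)
    dev_bufs[name] = cuda.mem_alloc(host_bufs[name].nbytes)
    bindings.append(int(dev_bufs[name]))

# Dummy input -- replace with an image preprocessed exactly as in training.
input_name = [n for n in engine if engine.binding_is_input(n)][0]
dummy = np.random.rand(host_bufs[input_name].size)
np.copyto(host_bufs[input_name], dummy.astype(host_bufs[input_name].dtype))

# H2D copy, inference, D2H copy. Use execute_async_v2 (no batch_size
# argument) instead if your engine was built with an explicit batch dim.
cuda.memcpy_htod_async(dev_bufs[input_name], host_bufs[input_name], stream)
context.execute_async(batch_size=1, bindings=bindings,
                      stream_handle=stream.handle)
for name in engine:
    if not engine.binding_is_input(name):
        cuda.memcpy_dtoh_async(host_bufs[name], dev_bufs[name], stream)
stream.synchronize()

for name in engine:
    if not engine.binding_is_input(name):
        print(name, host_bufs[name][:10])  # raw outputs; parsing is network-specific
```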

For more details, see https://docs.nvidia.com/metropolis/TLT/archive/tlt-20/tlt-user-guide/text/deploying_to_deepstream.html#

This is such a complicated, time-consuming, hardware-dependent approach.
There should be a simple converter script to change this format.

Hi @hrsk1980
This topic is very old. Please refer to the latest TLT 3.0 Docker image along with the TLT user guide. End users can run inference against a .tlt model or a TensorRT engine, or run inference against an .etlt model.
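
As a hedged illustration of running inference on a .tlt model through the TLT 3.0 launcher: the task name, flags, and paths below are placeholders based on the DetectNet_v2 section of the TLT 3.0 user guide, and the exact subcommand and options vary by network, so check the guide for your model.

```python
import subprocess

# Placeholders -- substitute your own spec file, key, and directories.
# The exact subcommand and flags vary by network; consult the TLT 3.0 docs.
subprocess.run(
    [
        "tlt", "detectnet_v2", "inference",
        "-e", "inference_spec.txt",      # inference/experiment spec file
        "-i", "test_images/",            # directory of input images
        "-o", "inference_output/",       # directory for annotated outputs
        "-k", "<your-ngc-api-key>",      # key used when the model was trained
    ],
    check=True,
)
```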