Can we run a model in DeepStream using the tlt command?

I have trained a ResNet-18 model on a custom dataset using TLT.
I was wondering if it is possible to run the model for inference in DeepStream with a single tlt command?

I have the nvcr.io/nvidia/tlt-streamanalytics image downloaded. Does it include support for DeepStream?

I have a GTX 1660 GPU. Is it possible to run the trained ResNet-18 with DeepStream on this GPU?

There is a command inside the TLT docker for running inference.
For example,
$ tlt detectnet_v2 inference xxx
$ tlt ssd inference xxx
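
As a reference, a full invocation typically looks like the sketch below, assuming the TLT 3.0 CLI; the spec file, image directory, output directory, and key are placeholders you would replace with your own paths and NGC key:

# minimal sketch of a detectnet_v2 inference run inside the TLT docker
$ tlt detectnet_v2 inference \
    -e /workspace/specs/inference_spec.txt \
    -i /workspace/data/test_images \
    -o /workspace/output \
    -k $YOUR_NGC_KEY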

In DeepStream, there is no tlt command. You deploy the exported .etlt model or a TensorRT engine instead.
See DetectNet_v2 — Transfer Learning Toolkit 3.0 documentation
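
For example, a DeepStream nvinfer configuration can point directly at the exported .etlt file. A minimal sketch for a detectnet_v2 ResNet-18 model, where the file names, key, input dimensions, and class count are placeholders for your own model:

# config_infer_primary.txt — nvinfer settings (placeholders marked below)
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-encoded-model=resnet18_detector.etlt
tlt-model-key=<your_ngc_key>
labelfile-path=labels.txt
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
infer-dims=3;384;1248
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=3
# network-type: 0=detector
network-type=0

On first run, nvinfer builds a TensorRT engine from the .etlt file for the local GPU, so subsequent runs start faster.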


How do I find out whether this tlt inference is being run using DeepStream?

It is not. In DeepStream, please deploy the .etlt model or a TensorRT engine.
DeepStream does not have “tlt xxx inference”. This “tlt xxx inference” only works inside the TLT docker.
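
To run the deployed model in DeepStream you would instead launch the deepstream-app reference application with an application config that points at the nvinfer settings sketched above; the config file name here is a placeholder:

# run the DeepStream reference app against your pipeline config
$ deepstream-app -c deepstream_app_config.txt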

