I have trained a ResNet model using the Transfer Learning Toolkit. For inference, the documentation suggests deploying the model to Deepstream. However, I have a Drive AGX, which to the best of my knowledge doesn’t support Deepstream. Is there any alternative way to run inference, or live inference from a camera (preferably), using my model?
Please consider the alternative below.
Generate a device-specific, optimized TensorRT engine using tlt-converter.
Then run inference against that TensorRT engine.
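A hedged example of the conversion step is below. The model filename, encryption key, output node name, and input dimensions are placeholders; take the real values from your own tlt-train / tlt-export run.

```shell
# Placeholders throughout -- substitute your own .etlt file, the key
# used at export time, your model's output node, and its C,H,W dims.
tlt-converter resnet18.etlt \
  -k $KEY \
  -o predictions/Softmax \
  -d 3,224,224 \
  -e resnet18.engine
```

The resulting resnet18.engine is what you then load for inference on the device.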
BTW, if you want to run inference on a Jetson platform, please download the Jetson version of tlt-converter and generate the TRT engine directly on the Jetson edge device instead of on the host PC. TensorRT engines are specific to the GPU and TensorRT version they are built against, so the same principle applies to other targets such as the Drive AGX.
See more details at https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#gen_eng_tlt_converter
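Once the engine exists, a live camera loop is possible without Deepstream. The sketch below is an assumption-laden illustration, not the official method: it assumes a classification ResNet exported as an explicit-batch engine named resnet18.engine with a (1, 3, 224, 224) float32 input, one output binding, and the common TLT preprocessing (BGR camera frame converted to RGB, scaled to [0, 1], NCHW layout). Verify all of those against your training spec, and note that the TensorRT/PyCUDA API calls shown match the TensorRT 7/8-era Python API that the TLT docs target.

```python
import numpy as np


def preprocess(frame_bgr, size=(224, 224)):
    """Nearest-neighbour resize (numpy-only), BGR -> RGB, scale to
    [0, 1], and reorder HWC -> CHW, as most TLT engines expect."""
    h, w = frame_bgr.shape[:2]
    ys = np.linspace(0, h - 1, size[1]).astype(int)
    xs = np.linspace(0, w - 1, size[0]).astype(int)
    resized = frame_bgr[ys][:, xs]
    rgb = resized[:, :, ::-1].astype(np.float32) / 255.0
    return np.ascontiguousarray(rgb.transpose(2, 0, 1))  # CHW


def infer_loop(engine_path="resnet18.engine", camera_index=0):
    # Runs only where TensorRT, PyCUDA and OpenCV are installed
    # (i.e. on the target device itself); shown here as a sketch.
    import cv2
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
    import pycuda.driver as cuda

    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f:
        engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    # One input binding and one output binding are assumed.
    in_shape = engine.get_binding_shape(0)    # e.g. (1, 3, 224, 224)
    out_shape = engine.get_binding_shape(1)
    d_in = cuda.mem_alloc(int(trt.volume(in_shape)) * 4)   # float32
    d_out = cuda.mem_alloc(int(trt.volume(out_shape)) * 4)
    h_out = np.empty(int(trt.volume(out_shape)), dtype=np.float32)

    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        batch = preprocess(frame)[None]       # add batch dim -> NCHW
        cuda.memcpy_htod(d_in, np.ascontiguousarray(batch))
        context.execute_v2([int(d_in), int(d_out)])
        cuda.memcpy_dtoh(h_out, d_out)
        print("top class:", int(h_out.argmax()))
```

The preprocessing function is deliberately separate so you can unit-test it on any machine; only infer_loop needs the device.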