I’m using TLT to train a custom detection model (based on DetectNet_v2) and deploy it on a Jetson Nano. So far I have managed to train the model with the notebook (producing the .etlt file) and to convert it to an engine file on the Jetson using tlt-converter.
Looking at the Python examples (feeding the engine files directly into DeepStream), I see that I need to provide nvinfer with several files, such as a .caffemodel and a .prototxt, in addition to the engine file. How do I generate these files?
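For reference, this is my current guess at the [property] section of the nvinfer config. I’m assuming model-engine-file can point straight at the engine produced by tlt-converter (so maybe the caffe files aren’t needed at all?); the blob names, class count, and file names below are placeholders taken from my DetectNet_v2 setup:

    [property]
    gpu-id=0
    net-scale-factor=0.0039215697906911373
    model-engine-file=resnet18_detector.engine
    labelfile-path=labels.txt
    batch-size=1
    # network-mode: 0=FP32, 1=INT8, 2=FP16
    network-mode=2
    num-detected-classes=3
    output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
    uff-input-blob-name=input_1

Is that the right direction, or are the caffe files still required somewhere?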
In some cases I would like to use TensorRT directly, so I converted the model (the .etlt file) to a .trt file. How can I use this file outside of DeepStream in Python? (The TensorRT Python API documentation doesn’t say what to do with a .trt file.)
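My working assumption is that a .trt file is just a serialized engine (the same thing tlt-converter writes with -e), so it should be loadable with trt.Runtime. Here is the sketch I pieced together from the TensorRT Python samples; the file name, the placeholder input, and the assumption that binding 0 is the input are all specific to my setup:

    import numpy as np
    import pycuda.autoinit  # importing this creates/activates a CUDA context
    import pycuda.driver as cuda
    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    # Deserialize the engine built by tlt-converter
    with open("detectnet.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    # Allocate pinned host + device buffers for every binding (inputs and outputs)
    bindings, host_bufs, dev_bufs = [], [], []
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        dev_mem = cuda.mem_alloc(host_mem.nbytes)
        host_bufs.append(host_mem)
        dev_bufs.append(dev_mem)
        bindings.append(int(dev_mem))

    # Placeholder input: a preprocessed NCHW image would go here
    # (assuming binding 0 is the input)
    host_bufs[0][:] = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)

    stream = cuda.Stream()
    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
    for host_mem, dev_mem in zip(host_bufs[1:], dev_bufs[1:]):
        cuda.memcpy_dtoh_async(host_mem, dev_mem, stream)
    stream.synchronize()
    # host_bufs[1:] now hold the raw output tensors (coverage + bbox for DetectNet_v2)

Is that the right approach, or is there a dedicated loader for .trt files?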
While using tlt-converter I’m getting the “some tactics do not have …” warning. I know it can be addressed with the -w (workspace size) flag, but I’m not sure what a good value for it is. Any advice would help!
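For context, this is roughly the command I’m running on the Nano; the key, input dims, and output node names come from my DetectNet_v2 notebook, and the -w value is just the 1 GiB default written out explicitly (I assume I should raise it, but the Nano only has 4 GB of shared memory):

    tlt-converter resnet18_detector.etlt \
        -k $NGC_KEY \
        -o output_cov/Sigmoid,output_bbox/BiasAdd \
        -d 3,384,1248 \
        -t fp16 \
        -m 1 \
        -w 1073741824 \
        -e resnet18_detector.engine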
In addition, I would like to know whether a generated TRT engine depends on the exact GPU model or only on the GPU architecture. For example, will an engine generated on a GTX 1070 work on a GTX 1080 (both Pascal)? (It will obviously not work on an RTX 2080, which is Turing.)