Python App Custom Model on the Jetson Nano

  1. See https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#intg_detectnetv2_model; only the .etlt model, the NGC key, and the label file are needed.

tlt-encoded-model=xxx.etlt
tlt-model-key= yourkey

Note: if you have already generated a TRT engine, the two lines above are not needed. Just set a new line as below:

model-engine-file = xxx.engine
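To put those keys in context, a minimal [property] section of a Gst-nvinfer config file for a DetectNet_v2 .etlt model might look like the sketch below. The paths, network dimensions, and output layer names are assumptions typical of a DetectNet_v2 export, not values from the original post; adjust them to your own model.

```ini
[property]
gpu-id=0
# Either the encoded model plus key ...
tlt-encoded-model=xxx.etlt
tlt-model-key=yourkey
# ... or, if the engine was already built, just the engine file:
# model-engine-file=xxx.engine
labelfile-path=labels.txt
# Assumed values for a typical DetectNet_v2 model; adjust to your export
input-dims=3;544;960;0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
network-mode=2
num-detected-classes=3
```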

  2. For how to use the TRT engine file outside of DeepStream in Python, please refer to How to use tlt trained model on Jetson Nano - #3 by Morganh
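As a rough sketch of what "using the engine outside of DeepStream" involves (see the linked post for the authoritative version), the engine can be deserialized with the TensorRT Python API and executed via PyCUDA. This uses the implicit-batch API of the TensorRT versions shipped for Jetson Nano/TLT; the engine path and the single-input assumption are placeholders. It requires a CUDA-capable device, so it is illustrative only.

```python
# Sketch: run a serialized TensorRT engine in Python, outside DeepStream.
# Assumes TensorRT and PyCUDA are installed; "xxx.engine" is a placeholder path.
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 - creates a CUDA context on import
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("xxx.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Allocate host/device buffers for every binding (inputs and outputs).
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Copy the preprocessed input in, run inference, copy the outputs back.
stream = cuda.Stream()
np.copyto(host_bufs[0], preprocessed_image.ravel())  # your CHW float32 array
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
for h, d in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh_async(h, d, stream)
stream.synchronize()
# host_bufs[1:] now hold the raw network outputs, ready for postprocessing.
```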

  3. For the -w flag on the Nano board, refer to Accelerating Peoplnet with tlt for jetson nano - #13 by Morganh
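For reference, the -w flag discussed in that thread is tlt-converter's maximum workspace size, which often needs to be lowered to fit the Nano's limited memory. A hedged example invocation follows; the key, input dimensions, and output node names are assumptions for a typical DetectNet_v2 export, not values from the original post.

```shell
# Convert an .etlt model to a TRT engine on the Nano.
# -w sets the maximum workspace size in bytes; 1 GiB or less is a common choice on Nano.
./tlt-converter xxx.etlt \
    -k yourkey \
    -d 3,544,960 \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -t fp16 \
    -w 1073741824 \
    -e xxx.engine
```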

  4. It depends on the TRT version and the GPU architecture. If the TRT version is the same, a TRT engine generated on a 1070 GPU is expected to work on a 1080 GPU, since both have the same compute capability (6.1).
    More info in https://developer.nvidia.com/cuda-gpus#compute and Support Matrix :: NVIDIA Deep Learning TensorRT Documentation
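The 1070/1080 pairing works because engine compatibility is keyed on compute capability: both are Pascal parts with capability 6.1, while the Jetson Nano's Maxwell GPU is 5.3. A tiny illustrative check of this rule; the lookup table is a small hand-written excerpt of the CUDA GPUs page linked above, not an exhaustive list.

```python
# Minimal illustration: a TRT engine built on one GPU is generally only expected
# to run on another GPU with the same compute capability (and the same TRT version).
# Hand-written excerpt of https://developer.nvidia.com/cuda-gpus#compute
COMPUTE_CAPABILITY = {
    "GTX 1070": "6.1",
    "GTX 1080": "6.1",
    "Jetson Nano": "5.3",
    "Tesla T4": "7.5",
}

def engine_portable(built_on: str, run_on: str) -> bool:
    """Same compute capability => the engine is expected to be reusable."""
    return COMPUTE_CAPABILITY[built_on] == COMPUTE_CAPABILITY[run_on]

print(engine_portable("GTX 1070", "GTX 1080"))    # True: both are capability 6.1
print(engine_portable("GTX 1070", "Jetson Nano"))  # False: 6.1 vs 5.3
```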