Deploying a Small Language Model on Jetson Nano

Hi,

Here are the corresponding replies:

  1. No special configuration is required.
    However, if you want to use PyTorch, please install it afterward with the packages shared in the link below:
    PyTorch for Jetson

  2. The default PyTorch ONNX exporter should be fine.

  3. This can be done with trtexec binary directly.

$ /usr/src/tensorrt/bin/trtexec --onnx=[file]

You can find the TensorRT samples on our GitHub.

Thanks.