tlt-converter works on DetectNet but not on YOLO

When I run this for DetectNet on the Xavier NX, it works fine:

~/tlt_7.1$ ./tlt-converter /home/nx/tlt_7.1/experiment_dir_final/resnet18_detector.etlt -k ajdqdnVicTU4Mm0wcGg0OWoyMDI0NmJrMTQ6NjYwOGJkNWUtYjkyMy00NjQ4LTgwMTEtYzliODE2M2ZiYWZh -c /home/nx/tlt_7.1/experiment_dir_final/calibration.bin -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,384,1248 -i nchw -m 64 -t int8 -e /home/nx/tlt_7.1/experiment_dir_final/resnet18_detector.trt -b 4

But when I run it for YOLO on the Xavier NX:

~/tlt_7.1$ ./tlt-converter -k ajdqdnVicTU4Mm0wcGg0OWoyMDI0NmJrMTQ6NjYwOGJkNWUtYjkyMy00NjQ4LTgwMTEtYzliODE2M2ZiYWZh -d 3,384,1248 -o BatchedNMS -e /home/nx/tlt_7.1/export/trt.engine -i nchw -m 1 -t fp16 /home/nx/tlt_7.1/export/yolo_resnet18_epoch_100.etlt

I get these errors:

[ERROR] UffParser: Could not parse MetaGraph from /tmp/fileXd5o6R
[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

I have built TensorRT OSS on the Jetson (ARM64) as instructed for use with YOLO, but I can't get past these errors.
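
For anyone hitting the same wall, it can help to confirm which libnvinfer_plugin the converter actually resolves at load time, since building TensorRT OSS does not by itself replace the stock library. A quick check (the tlt-converter path matches my layout above; adjust as needed):

# libraries the converter binds to at load time
ldd ~/tlt_7.1/tlt-converter | grep nvinfer
# the copy the dynamic linker cache currently points at
ldconfig -p | grep libnvinfer_plugin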

I forgot to mention that I get the same errors when I run tlt-converter on my Ubuntu x86 machine from inside the Transfer Learning Toolkit Docker container, using the TLT YOLO example.

Could you please double-check that the key is correct, i.e., the same one you used when running tlt-export/tlt-train?

I am still having issues running tlt-converter with a YOLO model retrained with TLT v2. I cleared out all Docker containers on my development machine, pulled the TLT Docker container, got a new key, and then ran the YOLO example in the TLT Jupyter notebook. tlt-converter works on my development machine but not on the NX. I know the key is correct, because the same key runs fine with DetectNet and tlt-converter.

~/tlt_7.1$ ./tlt-converter -k ajdqdnVicTU4Mm0wcGg0OWoyMDI0NmJrMTQ6NjE1OGViN2ItOGY2My00ZTMzLWE3OWYtYWZmODBjN2VhYjU2 -d 3,384,1248 -o BatchedNMS -e /home/nx/tlt_7.1/export/trt.engine -m 1 -t fp16 -i nchw /home/nx/tlt_7.1/export/yolo_resnet18_epoch_100.etlt
[ERROR] UffParser: Validator error: FirstDimTile_2: Unsupported operation _BatchTilePlugin_TRT
[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

Please build libnvinfer_plugin.so. Refer to the steps at deepstream_tao_apps/TRT-OSS/Jetson at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub.
Also, make sure to use GPU_ARCHS=72 for the NX.
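
A minimal sketch of that build, following the deepstream_tao_apps TRT-OSS/Jetson steps (the release branch and cmake flags below are taken from those instructions; double-check them against your JetPack/TensorRT version, and note the guide also covers installing a recent cmake on Jetson):

git clone -b release/7.0 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
# GPU_ARCHS=72 targets the Xavier NX (Volta, compute capability 7.2)
cmake .. -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DTRT_BIN_DIR=`pwd`/out
# build only the plugin library
make nvinfer_plugin -j$(nproc)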

OK, I finally got it to work.

The issue was moving these three files:

/home/nx/TensorRT/build/out/libnvinfer_plugin.so
/home/nx/TensorRT/build/out/libnvinfer_plugin.so.7.0.0
/home/nx/TensorRT/build/out/libnvinfer_plugin.so.7.0.0.1

to
/usr/lib/aarch64-linux-gnu/

These commands from the instructions don't move the files without heavy modification:

# back up the original libnvinfer_plugin.so.7.x.y
sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y ${HOME}/libnvinfer_plugin.so.7.x.y.bak
sudo cp `pwd`/out/libnvinfer_plugin.so.7.m.n /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y
sudo ldconfig
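
To confirm the swap took effect, the plugin names can be checked directly, since they appear as plain strings in the library (the file name matches the files moved above; the grep pattern is just illustrative):

# the linker cache should now resolve the replaced library
ldconfig -p | grep libnvinfer_plugin
# the OSS build contains the YOLO plugins the stock library lacked
strings /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0 | grep -E 'BatchTilePlugin_TRT|BatchedNMS_TRT'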

Thanks for the info.