The deployable_v1.0 of FPENet gives me a .tlt file and a calibration.txt file that I use with the DeepStream nvinfer plugin.
When I run the pipeline, it generates a .engine file for my particular Jetson.
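For reference, the relevant part of my nvinfer config looks roughly like this (the file names and key are placeholders for my actual values):

[property]
tlt-encoded-model=fpenet.tlt
tlt-model-key=<key from NGC>
int8-calib-file=calibration.txt
network-mode=1
# nvinfer builds the engine on first run and caches it here
model-engine-file=fpenet.tlt_b1_gpu0_int8.engine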
Once I re-train FPENet (on a server machine with a Tesla GPU), the output directory shows me:
/getting_started_v5.0.0/experiments/fpenet/models/exp1$ ls -la | grep -v ckzip | grep -v hdf5
total 998412
drwxr-xr-x 3 root root 4096 Oct 5 08:41 .
drwxr-xr-x 3 ubuntu ubuntu 4096 Oct 5 08:13 ..
drwxr-xr-x 2 root root 4096 Oct 5 08:14 events
-rw-r--r-- 1 root root 27602387 Oct 5 08:36 events.out.tfevents.1696493696.a1a0f1ba5a3c
-rw-r--r-- 1 root root 2816 Oct 5 08:14 experiment_spec.yaml
-rw-r--r-- 1 root root 20696673 Oct 5 08:15 graph.pbtxt
-rw-r--r-- 1 root root 2238 Oct 5 08:40 int8_calibration.bin
-rw-r--r-- 1 root root 1479923 Oct 5 08:40 int8_calibration.tensorfile
-rw-r--r-- 1 root root 6353401 Oct 5 08:38 kpi_testing_all_data.json
-rw-r--r-- 1 root root 3449 Oct 5 08:38 kpi_testing_error_per_point.csv
-rw-r--r-- 1 root root 428 Oct 5 08:38 kpi_testing_error_per_region.csv
-rw-r--r-- 1 root root 1036060 Oct 5 08:41 model.int8.engine
-rw-r--r-- 1 root root 2350995 Oct 5 08:40 model.onnx
-rw-r--r-- 1 root root 5986 Oct 5 08:41 result.txt
-rw-r--r-- 1 root root 28269 Oct 5 08:41 status.json
-rw-r--r-- 1 root root 11048 Oct 5 08:36 validation.log
So I know .onnx is a generic model format, and .engine is an optimised version, but I assume this one is optimised for the Tesla GPU, not for my Jetson…
I found that

trtexec --onnx=<model.onnx> --saveEngine=<model.plan>

will create a TensorRT engine file… but that is still not a .tlt file…
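I assume that if I go this route I would have to run trtexec on the Jetson itself (since engines are device-specific), and presumably feed it the INT8 calibration cache from the training output, something like this (flags taken from the trtexec help, untested by me):

trtexec --onnx=model.onnx --int8 --calib=int8_calibration.bin --saveEngine=model.engine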
Should I change my DeepStream pipeline to use the TensorRT engine file? Or is there a way to convert ONNX to .tlt, or TensorRT to .tlt?
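To be concrete, by "change my DeepStream pipeline" I mean replacing the tlt-encoded-model lines in the nvinfer config with something like the following (again placeholders, and I'm not sure this is the supported path for FPENet):

[property]
onnx-file=model.onnx
int8-calib-file=int8_calibration.bin
network-mode=1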