tlt-converter killed when converting detectnet_v2 on Jetson Nano

When I ran tlt-converter on the Jetson Nano, the process was killed. I have tried several times and the problem still persists.

I trained a detectnet_v2 model on new data and performed INT8 optimization, using the sample script from detectnet_v2.ipynb in the Docker container. I noticed that the TensorRT versions differ: the container has 5.1.5.0-1, while the Jetson Nano has 6.0.1.10-1. Could that be causing the problem?

Here is the command and its output when I run tlt-converter:

$ ./tlt-converter resnet18_detector.etlt \
    >                -k $KEY \
    >                -c calibration.bin \
    >                -o output_cov/Sigmoid,output_bbox/BiasAdd \
    >                -d 3,384,1248 \
    >                -i nchw \
    >                -m 64 \
    >                -t int8 \
    >                -e resnet18_detector.trt \
    >                -b 4
    [WARNING] Int8 support requested on hardware without native Int8 support, performance will be negatively affected.
    [INFO] Reading Calibration Cache for calibrator: EntropyCalibration2
    [INFO] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
    [INFO] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
    Killed

This is the relevant output from the kernel log:

/var/log/kern.log:Mar 17 00:32:56 jetson-nano kernel: [106207.679948] tlt-converter invoked oom-killer: gfp_mask=0x24082c2(GFP_KERNEL|_GFP_HIGHMEM|GFP_NOWARN|_GFP_ZERO), nodemask=0, order=0, oom_score_adj=0
/var/log/kern.log:Mar 17 00:32:56 jetson-nano kernel: [106207.717406] [<ffffff80081c9b88>] oom_kill_process+0x268/0x498
/var/log/kern.log:Mar 17 00:32:56 jetson-nano kernel: [106207.718176] Out of memory: Kill process 2131 (tlt-converter) score 105 or sacrifice child
/var/log/kern.log:Mar 17 00:32:56 jetson-nano kernel: [106207.738172] Killed process 2131 (tlt-converter) total-vm:10404240kB, anon-rss:0kB, file-rss:210272kB, shmem-rss:0kB

Does this mean the model is too large for the Jetson Nano?

For OOM, please set a smaller workspace size with the “-w” option, and use smaller “-m” (max batch size) and “-b” (calibration batch size) values.
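As a sketch, the original command could be retried with a reduced memory footprint. The specific -w/-m/-b values below are assumptions chosen as a conservative starting point, not values from this thread; the paths and $KEY are from the original post:

    # Hypothetical retry: smaller workspace (256 MB), smaller max
    # batch size, and calibration batch size of 1 to lower peak
    # memory during engine building on the Nano.
    $ ./tlt-converter resnet18_detector.etlt \
                   -k $KEY \
                   -c calibration.bin \
                   -o output_cov/Sigmoid,output_bbox/BiasAdd \
                   -d 3,384,1248 \
                   -i nchw \
                   -w 268435456 \
                   -m 16 \
                   -t int8 \
                   -e resnet18_detector.trt \
                   -b 1

If it still gets killed, you can keep lowering -m (even to 1) since the deployed engine on a Nano rarely needs a large batch size anyway.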


Thanks!