TensorRT Build Killed

I have a Jetson Nano 2GB Developer Kit.
When I try to build a TensorRT engine for yolov3-tiny-416 as in this repo: https://github.com/jkjung-avt/tensorrt_demos

I always get the message "Killed". I have a swap partition created, and it is barely used.
sudo python3 onnx_to_tensorrt.py -m yolov3-tiny-416
Loading the ONNX file...
Adding yolo_layer plugins...
Building an engine. This would take a while...
(Use "--verbose" or "-v" to enable verbose logging.)

This is my memory usage just before the process gets killed:

free -m
              total        used        free      shared  buff/cache   available
Mem:           1971        1880          34           0          56          24
Swap:          5081         757        4324

I don't know how to get it to finish. I have tried:
sudo sysctl vm.swappiness=100
sudo sysctl vm.vfs_cache_pressure=200

but the process still gets killed.
I also tried changing this line in the script:
config.max_workspace_size = 1 << 30
to 1 << 29, and the build completed, but when I try it on the dog photo I don't see any detections.


deepstream-app version 5.0.0
DeepStreamSDK 5.0.0
CUDA Driver Version: 10.2
CUDA Runtime Version: 10.2
TensorRT Version: 7.1
cuDNN Version: 8.0
PyTorch 1.6.0
TensorFlow 1.15.2


A "Killed" message is usually caused by running out of memory.
Please note that swap is not GPU-accessible memory; it is only usable by the CPU.

To confirm this issue, please run tegrastats at the same time to monitor the memory status.
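One way to do that is to log memory to a file while the engine build runs in another terminal. tegrastats gives the full Jetson picture; as a portable sketch, sampling the "available" column of free shows the same downward trend (the path /tmp/mem.log and the 1-second interval are just example choices):

```shell
# Log available memory (MiB) once per second in the background.
# Start the engine build in another terminal; the tail of the log
# shows how close to zero memory got before the process was killed.
while sleep 1; do
  echo "$(date +%T) available_MiB=$(free -m | awk '/^Mem:/ {print $7}')"
done > /tmp/mem.log &
MON_PID=$!

sleep 3        # ... the build runs here ...

kill "$MON_PID"
tail /tmp/mem.log
```

On a Jetson, `sudo tegrastats --logfile /tmp/tegrastats.log` can be used in place of the loop if those flags are available on your JetPack version.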



I had the same problem. It worked when I changed the following lines in the onnx_to_tensorrt.py file:

builder.max_workspace_size = 1 << 28
config.max_workspace_size = 1 << 28

After this I was able to complete the TensorRT conversion and detect objects (with both YOLOv4 and YOLOv3).
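For context: max_workspace_size is a byte budget TensorRT may use for temporary buffers during the build. The default 1 << 30 is 1 GiB, more than half the RAM on a 2GB Nano, which is why the build gets OOM-killed, while 1 << 28 (256 MiB) fits. A minimal sketch of that arithmetic, with a hypothetical helper (pick_workspace_size is not part of the repo or the TensorRT API) that picks the largest power-of-two workspace within a fraction of available memory:

```python
# Hypothetical helper: choose the largest power-of-two workspace size
# (in bytes) that stays within a fraction of currently available RAM.
def pick_workspace_size(available_bytes, fraction=0.25, floor=1 << 24):
    """Largest power of two <= fraction * available_bytes, at least floor."""
    budget = int(available_bytes * fraction)
    size = floor
    while size * 2 <= budget:
        size *= 2
    return size

# Example: the ~1971 MiB of total RAM reported by free -m on the Nano 2GB
print(pick_workspace_size(1971 * 1024 * 1024))  # -> 268435456, i.e. 1 << 28
```

The returned value could then be assigned to config.max_workspace_size (and builder.max_workspace_size on TensorRT 7) before building the engine.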

I solved it too with the following commands, without changing anything in the code:

sudo swapoff -a
sudo swapon -a
sudo sysctl vm.swappiness=100

And it worked fine.
