jetson-inference code killed when run with AlexNet

Hello there,
I am able to run Jetson-inference imagenet code sample which runs with googlenet by default. I tried to run the same code with Alexnet as an argument. But execution gets killed while generating tensorRT engine/CUDA engine. Following is the log:

~/.../build/aarch64/bin$ ./imagenet-console orange_0.jpg output3.jpg alexnet
imagenet-console
  args (4):  0 [./imagenet-console]  1 [orange_0.jpg]  2 [output3.jpg]  3 [alexnet]  


imageNet -- loading classification network model from:
         -- prototxt     networks/alexnet.prototxt
         -- model        networks/bvlc_alexnet.caffemodel
         -- class_labels networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   2

[TRT]  TensorRT version 5.0.6
[TRT]  detected model format - caffe  (extension '.caffemodel')
[TRT]  desired precision specified for GPU: FASTEST
[TRT]  requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]  native precisions detected for GPU:  FP32, FP16
[TRT]  selecting fastest native precision for GPU:  FP16
[TRT]  attempting to open engine cache file networks/bvlc_alexnet.caffemodel.2.1.GPU.FP16.engine
[TRT]  cache file not found, profiling network model on device GPU
[TRT]  device GPU, loading networks/alexnet.prototxt networks/bvlc_alexnet.caffemodel
[TRT]  retrieved Output tensor "prob":  1000x1x1
[TRT]  retrieved Input tensor "data":  3x227x227
[TRT]  device GPU, configuring CUDA engine
[TRT]  device GPU, building FP16:  ON
[TRT]  device GPU, building INT8:  OFF
[TRT]  device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
Killed

Am I missing something?
Thanks in advance for the help!

Maybe you ran out of RAM? If you run dmesg, is there evidence that the OOM (out-of-memory) killer killed the process?
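In case it helps, here is a sketch of what to look for. After the process is Killed, search the kernel ring buffer for OOM-killer messages (reading dmesg may require sudo on some setups); the echoed line below is an illustrative example of what a hit looks like, not output from this device:

```shell
# After the process is Killed, search the kernel log for OOM-killer activity:
#   sudo dmesg | grep -iE 'out of memory|oom-killer|killed process'
#
# An OOM-killer hit looks roughly like this (illustrative line, process name
# and numbers are made up for this example):
echo 'Out of memory: Kill process 7142 (imagenet-consol) score 451 or sacrifice child' \
  | grep -iE 'out of memory|oom-killer|killed process'
```

If the grep prints a line naming imagenet-console, the kernel killed it for running out of memory.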

Hi, I just ran this same command, and it completed successfully without being killed or running out of memory.

The maximum system-wide memory usage during the process was 2642MB out of 3963MB. Do you have other applications running in the background?
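For reference, one way to watch system-wide memory while the engine builds is to run a monitor in a second terminal. On Jetson, tegrastats prints a RAM line each second; on any Linux system, free -m shows the same totals in MB (a minimal sketch, not the only way):

```shell
# In a second terminal while imagenet-console builds the engine:
# On Jetson, tegrastats prints per-second stats including a RAM field, e.g.
#   RAM 2642/3963MB ...
# On any Linux system, free -m reports used/total memory in MB:
free -m
```

If "used" approaches the total while the engine is building, background applications are likely eating the headroom TensorRT needs.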

Maybe your download of that model was corrupt. If the issue persists, try re-downloading it with these commands from the terminal:

$ cd jetson-inference/data/networks
$ wget --no-check-certificate 'https://nvidia.box.com/shared/static/5j264j7mky11q8emy4q14w3r8hl5v6zh.caffemodel' -O bvlc_alexnet.caffemodel

Thanks Dusty.
I guess Chrome was running in the background. I was able to run AlexNet successfully after restarting the device.