I’m unable to run the Faster R-CNN models on the Jetson Nano. I’ve tried my own model and the TensorRT sample, and both fail with:
/usr/src/tensorrt/bin$ ./sample_fasterRCNN
Begin parsing model...
End parsing model...
Begin building engine...
End building engine...
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
Has anybody been successful running Faster R-CNN on the Nano with TensorRT? (Incidentally, I can get it running with Caffe…slowly.) I’ve tried disabling the desktop to give myself back some memory, but still no luck.
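For reference, here is roughly what I did to free up memory before retrying. This is a sketch assuming a stock JetPack image where the desktop is gdm3; the swap-file size and path are arbitrary choices, not anything the sample requires:

```shell
# Stop the desktop session to reclaim its RAM
# (gdm3 is the display manager on stock JetPack; adjust if yours differs)
sudo systemctl stop gdm3
# Or make the change persistent across reboots:
# sudo systemctl set-default multi-user.target

# Add a swap file for extra headroom during engine building
# (4G and /swapfile are example values)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Verify free memory and active swap
free -h
```

Even with the desktop stopped and swap enabled, the sample still aborts with std::bad_alloc at the same point for me.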