GoogleNet inference error on TX2

I am trying to run the GoogleNet sample model that ships with TensorRT. When I run inference through the model, I get an error at the following line:

CHECK(cudaMemcpyAsync(buffers[inputIndex], data, inputSize * sizeof(float), cudaMemcpyHostToDevice, stream));

When this line executes at runtime, the program crashes with:

11 Aborted (core dumped)

I have verified that the input size is correct: a batch of 4 images, as specified in the GoogleNet prototxt.
Kindly tell me how I can fix this error.

Hello, can you provide details on the platforms you are using?

Linux distro and version
GPU type
NVIDIA driver version
CUDA version
cuDNN version
Python version [if using Python]
TensorFlow version
TensorRT version

Also, any core dump backtrace, logs, and source files you can provide will help us debug.

Here are the details of the platform and versions:
Platform Jetson TX2
cuDNN v7.0.5
CUDA 9.0 Toolkit (I think)
TensorRT 3.0
python 2.7

I tried running GoogleNet on a desktop PC with a Quadro M5000 GPU and it ran successfully, but I can't understand the problem with the Jetson TX2 board. I suspect a GPU memory issue, since the same code runs perfectly on the desktop PC.
Here is the link to the complete code.


I recommend updating to the latest JetPack, which contains TensorRT 4, and seeing whether the problem persists.