CUDA_ERROR_LAUNCH_FAILED when deploying a TensorFlow model

I have a custom object detection model in TensorFlow based on ResNet50. When running inference with it, I get the following error:

2018-02-25 10:38:15.698326: E tensorflow/stream_executor/cuda/cuda_driver.cc:1068] failed to synchronize the stop event: CUDA_ERROR_LAUNCH_FAILED
2018-02-25 10:38:15.698411: E tensorflow/stream_executor/cuda/cuda_timer.cc:54] Internal: error destroying CUDA event in context 0x45c03b0: CUDA_ERROR_LAUNCH_FAILED
2018-02-25 10:38:15.698440: E tensorflow/stream_executor/cuda/cuda_timer.cc:59] Internal: error destroying CUDA event in context 0x45c03b0: CUDA_ERROR_LAUNCH_FAILED
2018-02-25 10:38:15.698586: F tensorflow/stream_executor/cuda/cuda_dnn.cc:2045] failed to enqueue convolution on stream: CUDNN_STATUS_EXECUTION_FAILED
Aborted (core dumped)

RAM usage during this step, as reported by tegrastats, shows only 4 GB of the 8 GB used. When I tried training on the TX2, it consumed almost all of the RAM. Could this error be related to memory issues?
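
In case it is memory related, one thing I plan to try is stopping TensorFlow from pre-allocating nearly all of the GPU memory when the session is created (the CPU and GPU share the same 8 GB on the TX2). A minimal sketch of what I have in mind (the frozen-graph path is just a placeholder, not my actual file):

# Create the inference session with restrained GPU memory so TensorFlow
# does not grab most of the TX2's shared 8 GB up front.
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True                    # allocate on demand
config.gpu_options.per_process_gpu_memory_fraction = 0.5  # or cap at ~50%

# Placeholder graph loading; adapt to the real model files.
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    with tf.Session(graph=graph, config=config) as sess:
        pass  # sess.run(...) with the detection tensors goes here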

I'm also wondering what good practice would be for deploying my own deep learning models on the TX2: use NVIDIA TensorRT to optimize them, or take NVIDIA's DetectNet and customize that model? I'd prefer to develop my own models and port them to the TX2.

Hi,

There are many possible causes of a CUDA launch failure.
Could you share more information about your environment?

  1. TensorFlow version
  2. How you installed TF (built from source, or installed a public pip package) (a quick way to check items 1 and 2 is sketched after this list)
  3. JetPack version
  4. Tegra status data

If you are interested in DetectNet, which uses the Caffe framework, here is a tutorial for your reference:
https://github.com/dusty-nv/jetson-inference#building-from-source-on-jetson

Thanks.

  1. TensorFlow 1.3
  2. From the pip package for Python 2.7 (installTensorFlowJetsonTX/TX2 at master · jetsonhacks/installTensorFlowJetsonTX · GitHub)
  3. JetPack 3.1, but I didn't re-flash the OS before installing packages since it was already L4T 28.1
  4. Is this from ./tegrastats? I don't have the log right now, but it showed 4/8 GB RAM usage when the error occurred.

The error showed up while running my inference code, not during training, which seems inconsistent to me.

Hi,

Please remember to re-flash your device with JetPack 3.1.

A different OS image may contain an incompatible driver version, which leads to CUDA launch failures. Re-flashing with JetPack 3.1 makes sure your environment matches what the pip package was built against.
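
After re-flashing, you can quickly confirm that the driver, CUDA, and cuDNN stack works with your TensorFlow build by running a single convolution on the GPU, since that is the call failing in your log. A minimal sketch (the shapes are arbitrary):

# Sanity check: run one cuDNN-backed convolution on the GPU. If this
# succeeds, the driver/CUDA/cuDNN stack and the TF package are compatible.
import numpy as np
import tensorflow as tf

with tf.device("/gpu:0"):
    x = tf.constant(np.random.rand(1, 224, 224, 3).astype(np.float32))
    w = tf.constant(np.random.rand(3, 3, 3, 8).astype(np.float32))
    y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(y).shape)  # expect (1, 224, 224, 8)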

Thanks.