Visionworks with Tensorflow


Has anyone been able to use VisionWorks with TensorFlow? I am trying to use VisionWorks for the image processing and then TensorFlow for object detection. However, TensorFlow doesn’t allow other programs to share the GPU. I was wondering if there is any way in VisionWorks to stop the CUDA context? Or, if you have run both together, how did you do it?

Here is the error TensorFlow reports: current context was not created by the StreamExecutor cuda_driver API: 0x4aa180; a CUDA runtime call was likely performed without using a StreamExecutor context

Thank you for your assistance!

Edit: It seems that the issue might be solvable with this solution, but the NVIDIA TX1 doesn’t have the nvidia-smi command.


Thanks for your question.

As you said, the TX1 doesn’t have nvidia-smi. We will check how to set the GPU compute mode to EXCLUSIVE_PROCESS and then update you.



Thank you! Looking forward to hearing if it can be done.


Sorry for keeping you waiting.

The NVIDIA Management Library (NVML), which is used for managing GPU devices, doesn’t support the TX1, and this makes it difficult to change the GPU compute mode.

Another approach is to release the VisionWorks context before calling TensorFlow.
A vx_context can be released with the vxReleaseContext API.
Could you give it a try?
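A minimal sketch of that sequence, assuming the standard OpenVX headers shipped with VisionWorks (the graph-building and TensorFlow launch steps are placeholders, not part of the suggestion itself):

```c
/* Sketch: release the VisionWorks/OpenVX context before handing the
 * GPU to TensorFlow in the same process. Error handling is minimal. */
#include <VX/vx.h>
#include <stdio.h>

int main(void)
{
    /* Create the OpenVX context used for image processing. */
    vx_context context = vxCreateContext();
    if (vxGetStatus((vx_reference)context) != VX_SUCCESS) {
        fprintf(stderr, "failed to create OpenVX context\n");
        return 1;
    }

    /* ... build graphs and process frames with VisionWorks here ... */

    /* Release the context (and the CUDA resources it holds) before
     * starting the TensorFlow session. vxReleaseContext takes a
     * pointer and sets it to NULL on success. */
    vxReleaseContext(&context);

    /* ... now launch TensorFlow inference ... */
    return 0;
}
```

vxReleaseContext destroys the context and any remaining objects created from it, so everything VisionWorks-related should be finished before this point.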

Hi AastaLLL,

Thank you for the reply! Oddly, that wasn’t necessary, and I was able to run TensorFlow with VisionWorks, but the TensorFlow session had to be created first. I don’t know why the order matters, but for some reason it does.