Is my TensorFlow working correctly on the TX2?

I flashed JetPack 3.3 and installed TensorFlow via:

pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp33 tensorflow-gpu

When I run a TF session in Python:

Python 3.5.2 (default, Nov 23 2017, 16:37:01) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('hello')
>>> sess = tf.Session()
2018-11-08 00:23:57.366581: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:864] ARM64 does not support NUMA - returning NUMA node zero
2018-11-08 00:23:57.366703: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 0 with properties: 
name: NVIDIA Tegra X2 major: 6 minor: 2 memoryClockRate(GHz): 1.3005
pciBusID: 0000:00:00.0
totalMemory: 7.66GiB freeMemory: 3.82GiB
2018-11-08 00:23:57.366749: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0
2018-11-08 00:23:58.121551: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-11-08 00:23:58.121657: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958]      0 
2018-11-08 00:23:58.121689: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   N 
2018-11-08 00:23:58.121876: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3413 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)

The line:

2018-11-08 00:23:58.121689: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   N

makes me uncomfortable, because in other people's output I see:

2018-11-08 00:23:58.121689: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y

i.e., I get N where they get Y.

Is this normal? When I run a semantic network, the GPU does seem to be engaged, judging by

sudo ~/tegrastats --interval 2000

with output shown below:

RAM 3612/7846MB (lfb 168x2MB) CPU [0%@2035,0%@2036,0%@2035,0%@2035,0%@2036,0%@2035] EMC_FREQ 18%@1866 GR3D_FREQ 96%@1300 APE 150 MTS fg 7% bg 23% BCPU@36.5C MCPU@36.5C GPU@37C PLL@36.5C Tboard@31C Tdiode@35.25C PMIC@100C thermal@36.5C VDD_IN 8875/8875 VDD_CPU 841/841 VDD_GPU 2983/2983 VDD_SOC 1147/1147 VDD_WIFI 19/19 VDD_DDR 2200/2200

Hi,

It should be okay.

The 'Device interconnect StreamExecutor' lines report the peer-to-peer (DMA) access matrix between the visible GPUs.
An 'N' there for the single Tegra GPU is expected and won't affect the basic function of TensorFlow.
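
If you want to double-check, a minimal sketch like the one below (plain TF 1.x calls, nothing Jetson-specific) should list the Tegra GPU as a visible device:

# Sanity check (TensorFlow 1.x): confirm the Tegra GPU is registered
# despite the 'N' in the device interconnect matrix.
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_gpu_available())          # expect: True
for d in device_lib.list_local_devices():  # expect: CPU:0 and GPU:0
    print(d.name, d.device_type)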

By the way, your GPU is running at close to full usage in your environment:
>> GR3D_FREQ 96%@1300
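
If you ever want to generate that kind of load on demand, a small sketch like this (matrix size and loop count are arbitrary) keeps the GPU busy so you can watch GR3D_FREQ in tegrastats, and log_device_placement confirms the matmul lands on GPU:0:

# Minimal sketch: pin a matmul to the Tegra GPU and log the placement.
# Run it while tegrastats is open; GR3D_FREQ should stay high during the loop.
import tensorflow as tf

with tf.device('/device:GPU:0'):
    a = tf.random_normal([2048, 2048])
    b = tf.random_normal([2048, 2048])
    c = tf.matmul(a, b)

config = tf.ConfigProto(log_device_placement=True)
config.gpu_options.allow_growth = True  # avoid grabbing all of the shared memory up front
with tf.Session(config=config) as sess:
    for _ in range(100):
        sess.run(c)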

Thanks.

Thank you.