Are these timings normal for Jetson Xavier AGX?

mdl.summary()

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 150)               91200
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 151
=================================================================
Total params: 91,351
Trainable params: 91,351
Non-trainable params: 0
_________________________________________________________________
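As a sanity check, those parameter counts are consistent with a single input feature per timestep. Using the standard LSTM parameter formula (4 gates, each with an input kernel, a recurrent kernel, and a bias), units=150 and input_dim=1 reproduce the summary exactly. A quick arithmetic sketch, not tied to any framework:

```python
# Sanity-check the parameter counts reported by mdl.summary().
units = 150      # LSTM units from the summary
input_dim = 1    # implied by the counts below

# LSTM: 4 gates, each with kernel (input_dim), recurrent kernel (units),
# and bias (1), all scaled by the number of units.
lstm_params = 4 * (units * (input_dim + units + 1))
print(lstm_params)   # 91200, matching lstm_1

# Dense(1) on 150 features: one weight per feature plus a bias.
dense_params = units * 1 + 1
print(dense_params)  # 151, matching dense_1

print(lstm_params + dense_params)  # 91351, matching "Total params"
```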


mdl.fit_generator( gnTrain, epochs = 10)

Epoch 1/10
301/301 [==============================] - 20s 65ms/step - loss: 0.0026
Epoch 2/10
301/301 [==============================] - 19s 64ms/step - loss: 0.0021
Epoch 3/10
301/301 [==============================] - 20s 67ms/step - loss: 0.0026
Epoch 4/10
301/301 [==============================] - 20s 67ms/step - loss: 0.0017
Epoch 5/10
301/301 [==============================] - 20s 67ms/step - loss: 0.0017
Epoch 6/10
301/301 [==============================] - 20s 66ms/step - loss: 0.0021
Epoch 7/10
301/301 [==============================] - 20s 66ms/step - loss: 0.0017
Epoch 8/10
301/301 [==============================] - 19s 62ms/step - loss: 0.0016
Epoch 9/10
301/301 [==============================] - 19s 63ms/step - loss: 0.0018
Epoch 10/10
301/301 [==============================] - 18s 61ms/step - loss: 0.0018

<keras.callbacks.callbacks.History at 0x7f18176f98>
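For scale, the log above works out to very low throughput, which points at per-step overhead (Python generator, kernel launches) rather than GPU compute being the bottleneck. A back-of-the-envelope sketch; batch size 1 is an assumption, based on ~350 rows yielding 301 windows per epoch:

```python
# Rough throughput from the training log above.
steps_per_epoch = 301
step_time_s = 0.065                 # ~65 ms/step from the log

epoch_time_s = steps_per_epoch * step_time_s
print(epoch_time_s)                 # ~19.6 s, matching the 19-20 s epochs

# If each step processes one window (batch size 1, assumed), that's only:
samples_per_s = 1 / step_time_s
print(samples_per_s)                # ~15 samples/s - far below GPU saturation
```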

import tensorflow

tensorflow.__version__

'1.14.0'

import keras

keras.__version__

Using TensorFlow backend.

'2.3.1'

tf.test.is_gpu_available(
    cuda_only=True,
    min_cuda_compute_capability=None
)

True

tf.test.is_built_with_cuda()

True

from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())

[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 12523116503951288556
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 3016287810009935449
physical_device_desc: "device: XLA_CPU device"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 17566754380959506424
physical_device_desc: "device: XLA_GPU device"
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 10438272410
locality {
bus_id: 1
links {
}
}
incarnation: 14651857846486284700
physical_device_desc: "device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2"
]
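A minor note on reading that output: `memory_limit` is in bytes, so the `/device:GPU:0` entry corresponds to roughly 9.7 GiB of the Xavier's unified memory being visible to TensorFlow. A quick conversion:

```python
# memory_limit in device_lib.list_local_devices() output is in bytes.
gpu_memory_limit = 10438272410        # from the /device:GPU:0 entry above
print(gpu_memory_limit / 2**30)       # ~9.72 GiB visible to TensorFlow
```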

Do I have the proper versions? Am I missing something?

sudha@sudhajx:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Mon_Mar_11_22:13:24_CDT_2019
Cuda compilation tools, release 10.0, V10.0.326
sudha@sudhajx:~$ nvidia-smi
bash: nvidia-smi: command not found

nvidia-smi isn't available on Jetson boards; it's only for discrete (workstation/server) GPUs. On Jetson you can monitor GPU utilization with tegrastats instead.
Are you using it for re-training? Yeah, those timings are okay.

Thank you for your insights though! @RKK

As you might assume, I'm new to ML/DL. I bought the Xavier because I thought it would be a one-stop investment rather than going for a full-fledged server for ML/DL development, training, and future deployment.

To be honest, I'm really not impressed by the timings, since the dataset I used is a meagre 350-odd rows. I still hope the program is simply not using the full potential of the GPU. If I'm wrong, then my investment was a mistake.

Would someone from NVIDIA please say outright whether Xavier can or cannot be used for training?

Xavier is for deployment and real-time inference.