Jetson Nano (2GB) OOM

Hi,
I am trying to load a Keras model (image classification, Good vs Fail, 300x300 images, 7000 images in total, with data augmentation) onto my 2 GB Jetson Nano (LXDE mode). However, I keep hitting an out-of-memory error even before optimizing the model with TensorRT. Please advise on how to deploy this model on the board. Thank you.

Modeling:
from tensorflow.keras import backend
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

backend.clear_session()
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=(7, 7), strides=2, input_shape=image_shape, activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(filters=32, kernel_size=(3, 3), strides=1, activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(filters=64, kernel_size=(3, 3), strides=1, activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Flatten())
model.add(Dense(units=8, activation='relu'))
model.add(Dropout(rate=0.2))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
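For what it's worth, this model is tiny. A rough parameter count (a sketch assuming a 300x300x3 input, since image_shape is not shown in the post) suggests the weights themselves are well under 1 MB, so the OOM is more likely framework overhead than model size:

```python
# Rough parameter count for the architecture above, assuming a
# 300x300x3 input (image_shape is not shown in the post).
def conv_params(k, c_in, c_out):
    # k*k kernel over c_in channels, plus one bias per filter
    return k * k * c_in * c_out + c_out

h = w = 300
h, w = (h + 1) // 2, (w + 1) // 2   # Conv2D stride 2, 'same' padding -> 150x150
h, w = h // 2, w // 2               # MaxPooling2D -> 75x75
h, w = h // 2, w // 2               # MaxPooling2D -> 37x37
h, w = h // 2, w // 2               # MaxPooling2D -> 18x18

total = (
    conv_params(7, 3, 16)
    + conv_params(3, 16, 32)
    + conv_params(3, 32, 64)
    + (h * w * 64) * 8 + 8          # Flatten -> Dense(8)
    + 8 * 1 + 1                     # Dense(1)
)
print(total)                         # ~191k parameters, under 1 MB in float32
```

This matches what model.summary() would report, give or take the assumed channel count.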

Hi,

May I know the batch size value you used?
Since the device only has 2 GiB of memory, please use a smaller value to run on the device.

Thanks.
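A back-of-the-envelope activation-memory estimate (a sketch assuming the architecture above, float32 activations, and a 300x300x3 input) shows how the footprint scales with batch size. Note this is a lower bound: TensorFlow's CUDA context and allocator reserve considerably more than the feature maps alone.

```python
# Feature-map element counts for the network above (assumed 300x300x3 input):
feature_maps = [
    300 * 300 * 3,    # input image
    150 * 150 * 16,   # after Conv2D(16, stride 2)
    75 * 75 * 32,     # after pool + Conv2D(32)
    37 * 37 * 64,     # after pool + Conv2D(64)
]
bytes_per_image = sum(feature_maps) * 4   # float32
for batch in (1, 8, 32):
    print(batch, round(batch * bytes_per_image / 2**20, 1), "MiB")
```

At batch size 8 the feature maps themselves are only a few tens of MiB, so on a 2 GB board the fixed framework overhead dominates.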

My batch size is only 8. I succeeded in loading the .pb model; however, the latency when I deploy it remains huge (more than 1 minute to classify a single 300x300 image).

Hi,

Do you run it with TF-TRT or pure TensorRT?
If resources are tight, it might impact performance.

Thanks.

Previously I used the original TensorFlow/Keras for prediction. After converting the model to ONNX, I succeeded in deploying it on my Jetson Nano with minimal latency. However, I still struggle to run prediction with TensorRT after converting the ONNX model to .trt using trtexec. Thanks.
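For reference, a typical trtexec conversion for this kind of model looks something like the following (a sketch; the file names are placeholders, and the small workspace cap is an assumption chosen to fit the 2 GB board):

```shell
# Convert the ONNX model to a TensorRT engine (file names are placeholders).
# --fp16 enables half precision; --workspace caps the builder's scratch
# memory in MB, which helps avoid OOM on a 2 GB board.
trtexec --onnx=model.onnx --saveEngine=model.trt --fp16 --workspace=256

# Quick latency check with the built engine:
trtexec --loadEngine=model.trt
```

If either command fails, the full console output is what the TensorRT team will ask for.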

Hi,

Do you meet any error when running it with trtexec?
If yes, could you share the output log with us?

Also, it looks like this issue is not related to DeepStream.
Is that correct?

Thanks.
