Error importing keras.models with TensorFlow 2.5.0+nv21.8 / JetPack 4.6

I apologize if this question has already been answered elsewhere.

Using:
$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v46 tensorflow==2.5.0+nv21.8

I have code where I need the following import to load the weights of my previously trained model:

from keras.models import load_model

This is how I try to load the weights from that file:

model = load_model('/home/steven/Documents/tape_detection/model.h5')

Can you help me find a solution, please?

This is the error:

steven@steven-desktop:~$ python3
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import tensorflow as tf
2021-12-13 14:08:18.922956: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.10.2
>>> print(tf.__version__)
2.5.0
>>> from keras.models import load_model
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/keras/__init__.py", line 25, in <module>
    from keras import models
  File "/usr/local/lib/python3.6/dist-packages/keras/models.py", line 20, in <module>
    from keras import metrics as metrics_module
  File "/usr/local/lib/python3.6/dist-packages/keras/metrics.py", line 27, in <module>
    from keras import activations
  File "/usr/local/lib/python3.6/dist-packages/keras/activations.py", line 20, in <module>
    from keras.layers import advanced_activations
  File "/usr/local/lib/python3.6/dist-packages/keras/layers/__init__.py", line 148, in <module>
    from keras.layers.normalization import LayerNormalization
ImportError: cannot import name 'LayerNormalization'

Hi,

Based on the error, it looks like a compatibility issue.
Which Keras version are you using?

We can import Keras 2.5.0 successfully with TensorFlow 2.5.0+JetPack 4.6.

>>> import tensorflow as tf
2021-12-23 06:27:51.280060: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.10.2
>>> tf.__version__
'2.5.0'
>>> import keras
>>> keras.__version__
'2.5.0'
>>> from keras.models import load_model
>>>
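The version rule at work here, that the standalone keras package must match TensorFlow's major.minor release, can be expressed as a small helper. This is an illustrative sketch, not code from the thread:

```python
def versions_compatible(tf_version: str, keras_version: str) -> bool:
    """Standalone Keras must match TensorFlow's major.minor release,
    e.g. TensorFlow 2.5.x pairs with Keras 2.5.x but not 2.6.x."""
    return tf_version.split(".")[:2] == keras_version.split(".")[:2]

print(versions_compatible("2.5.0", "2.5.0"))  # True
print(versions_compatible("2.5.0", "2.6.0"))  # False
```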

Thanks.


Hi friend @AastaLLL .
The problem was compatibility.
I had Keras 2.6.0 installed, so I removed it and installed version 2.5.0 as you showed.

>>> import tensorflow as tf
2021-12-25 15:59:33.895974: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.10.2
>>> tf.__version__
'2.5.0'
>>> import keras
>>> keras.__version__
'2.5.0'
>>> from keras.models import load_model
>>>

Now I have another question.
When I run the following code:

import tensorflow as tf
from keras.models import load_model

model = load_model('/home/steven/Documents/tape_detection/model.h5')

I get the output you can see at the bottom of this comment (it only happens when I load that specific .h5 file; maybe it is too large? I have a 4 GB Jetson Nano with a 64 GB SD card. Maybe I should configure swap?). Could you help me again with some advice, please? I would appreciate it.

2021-12-25 17:09:52.601216: I tensorflow/core/common_runtime/bfc_allocator.cc:1075] Stats:
Limit: 197087232
InUse: 187444224
MaxInUse: 187444480
NumAllocs: 436
MaxAllocSize: 24182272
Reserved: 0
PeakReserved: 0
LargestFreeBlock: 0

2021-12-25 17:09:52.601296: W tensorflow/core/common_runtime/bfc_allocator.cc:473] *****************************_********************************************************************xx
2021-12-25 17:09:52.696201: W tensorflow/compiler/mlir/tools/kernel_gen/tf_cuda_runtime_wrappers.cc:72] 'cuStreamGetCtx(stream, &context)' failed with 'CUDA_ERROR_LAUNCH_FAILED'

2021-12-25 17:09:52.696284: W tensorflow/compiler/mlir/tools/kernel_gen/tf_cuda_runtime_wrappers.cc:59] 'cuModuleLoadData(&module, data)' failed with 'CUDA_ERROR_LAUNCH_FAILED'

2021-12-25 17:09:52.696335: W tensorflow/core/framework/op_kernel.cc:1755] Internal: 'cuModuleGetFunction(&function, module, kernel_name)' failed with 'CUDA_ERROR_INVALID_HANDLE'
Segmentation fault (core dumped)
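For reference, the swap configuration asked about above is typically set up as follows on a Jetson Nano. This is an illustrative sketch; the 4 GB size and the /swapfile path are assumptions, not recommendations from this thread:

```shell
# Create and enable a 4 GB swap file (the Nano's 4 GB RAM is shared
# between CPU and GPU, so large models can exhaust it while loading).
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -h   # verify the new swap space is active
```

To make the swap file persistent across reboots, it would also need an entry in /etc/fstab.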

Hi,

Usually, CUDA_ERROR_LAUNCH_FAILED is caused by an incorrect GPU or CUDA version.
Did you set up the device with JetPack 4.6 as well?

You can confirm this with the following command:

$ apt-cache show nvidia-jetpack
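As a supplementary cross-check (not part of the reply above), the L4T release string on the device also identifies the JetPack version; JetPack 4.6 corresponds to L4T R32.6.x:

```shell
# Print the L4T release line; on JetPack 4.6 this should report R32
# with REVISION 6.x (output shown here is an assumption, not captured
# from the thread).
head -n 1 /etc/nv_tegra_release
```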

Thanks.

Hi friend @AastaLLL .

First of all, I want to thank you for the help you have given me in solving my problem.

Second, please see what appears in the console when I run the command you suggested. Is the version correct?

Package: nvidia-jetpack
Version: 4.6-b199
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-cuda (= 4.6-b199), nvidia-opencv (= 4.6-b199), nvidia-cudnn8 (= 4.6-b199), nvidia-tensorrt (= 4.6-b199), nvidia-visionworks (= 4.6-b199), nvidia-container (= 4.6-b199), nvidia-vpi (= 4.6-b199), nvidia-l4t-jetson-multimedia-api (>> 32.6-0), nvidia-l4t-jetson-multimedia-api (<< 32.7-0)
Homepage: Autonomous Machines | NVIDIA Developer
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_4.6-b199_arm64.deb
Size: 29368
SHA256: 69df11e22e2c8406fe281fe6fc27c7d40a13ed668e508a592a6785d40ea71669
SHA1: 5c678b8762acc54f85b4334f92d9bb084858907a
MD5sum: 1b96cd72f2a434e887f98912061d8cfb
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Package: nvidia-jetpack
Version: 4.6-b197
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-cuda (= 4.6-b197), nvidia-opencv (= 4.6-b197), nvidia-cudnn8 (= 4.6-b197), nvidia-tensorrt (= 4.6-b197), nvidia-visionworks (= 4.6-b197), nvidia-container (= 4.6-b197), nvidia-vpi (= 4.6-b197), nvidia-l4t-jetson-multimedia-api (>> 32.6-0), nvidia-l4t-jetson-multimedia-api (<< 32.7-0)
Homepage: Autonomous Machines | NVIDIA Developer
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_4.6-b197_arm64.deb
Size: 29356
SHA256: 104cd0c1efefe5865753ec9b0b148a534ffdcc9bae525637c7532b309ed44aa0
SHA1: 8cca8b9ebb21feafbbd20c2984bd9b329a202624
MD5sum: 463d4303429f163b97207827965e8fe0
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Hi,

Do you get the error when running the load_model(...) command?
If yes, have you tried loading the model on a desktop before? Does it work there?

If this issue is specific to the Jetson, would you mind sharing the model with us so we can check it?

Thanks.

Hi friend @AastaLLL

First, I want to tell you that load_model() does work.

Second, I want to tell you in detail what my problem is.

Together with my team, we made an algorithm that detects tapes in fabric produced by machines in a production plant.
To create the neural network, we captured images of the tapes and trained the network with YOLO Darknet.

The trained weights have the .weights extension, which we then converted to .h5.

The algorithm worked perfectly on my personal laptop (Windows 10), but on my Jetson Nano it did not work with the file containing our weights.
However, it worked with another weights file, which was downloaded from one of the Jetson Community Projects.

The only difference is that the file that doesn't work is 247.0 MB, while the one that works is 786.3 kB. I attach that file:
Robust_BODY18.h5 (767.9 KB)

@AastaLLL How can I share that big 247.0 MB file? Maybe by email?

Friend, I hope you can help me with this. My idea is to install a Jetson Nano on each machine to detect tapes on the fabric.

Or do you think a Jetson Nano is not suitable for use in an industrial production plant? If so, could you suggest another device?

To give you a clearer example:

…Doesn’t work on my Jetson Nano but works on my personal laptop (Windows 10)…

import tensorflow as tf
from keras.models import load_model

print('Program Start')

model = load_model('/home/steven/Documentos/deteccion_cintas/tape_v1.h5')

print(f'''
{model}
''')

print('End Of Program')

…It works on my Jetson Nano and my personal laptop (Windows 10)…

import tensorflow as tf
from keras.models import load_model

print('Program Start')

model = load_model('/home/steven/Documentos/deteccion_cintas/Robust_BODY18.h5')

print(f'''
{model}
''')

print('End Of Program')
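One way to compare the working and failing .h5 files beyond their size is to inspect their top-level structure. This is a sketch assuming h5py is installed; inspect_h5 is a hypothetical helper, not code from the thread:

```python
import h5py

def inspect_h5(path):
    """Return the top-level HDF5 groups and the Keras version recorded
    in a saved .h5 model, without loading any weights into memory."""
    with h5py.File(path, "r") as f:
        return {
            "groups": sorted(f.keys()),
            "keras_version": f.attrs.get("keras_version"),
        }
```

Running it on both files would show whether the large model was saved with an incompatible Keras version or an unexpected layout, independent of any out-of-memory issue.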

Hi,

Do you have the original .cfg/.weights model?
If yes, it is recommended to use DeepStream/TensorRT for inference directly.

Since you want to use the Nano, this will give you much better memory utilization and performance.

/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/
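To follow that suggestion with the original Darknet files, the sample ships with a custom YOLO parser that needs to be built first. An illustrative sequence; the file names and CUDA_VER value are assumptions for a JetPack 4.6 setup, not commands from the thread:

```shell
cd /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo
# Copy your Darknet config and weights next to the sample configs
# (names here are placeholders for your trained model's files).
cp ~/yolov3.cfg ~/yolov3.weights .
# Build the custom YOLO output parser; JetPack 4.6 ships CUDA 10.2.
export CUDA_VER=10.2
make -C nvdsinfer_custom_impl_Yolo
# Then run the sample application with its YOLO config.
deepstream-app -c deepstream_app_config_yoloV3.txt
```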

Thanks.

Hi friend @AastaLLL

I can't find this path.

steven@steven-desktop:/opt/nvidia$ pwd
/opt/nvidia
steven@steven-desktop:/opt/nvidia$ ls -l
total 20
drwxr-xr-x 6 root root 4096 ago 13 00:05 jetson-io
drwxr-xr-x 2 root root 4096 ago 13 00:05 l4t-bootloader-config
drwxr-xr-x 3 root root 4096 ago 13 00:05 l4t-gputools
drwxr-xr-x 2 root root 4096 jul 21 2021 l4t-usb-device-mode
drwxr-xr-x 7 root root 4096 ago 13 00:42 vpi1

Thanks.

Hi,

Did you select DeepStream when installing components from SDK Manager?
If not, you can install it directly with apt:

$ sudo apt update
$ sudo apt install deepstream-6.0

Thanks.


Hi,
Thank you very much, friend, for your help and support. This worked.
