Facing issue converting PB (protobuf) file to UFF file of a Keras (TensorFlow backend) model

Hi All,

My name is Ark, and I am trying to reduce the latency (the time the model takes to score an observation) of an RNN model already trained in Keras, using TensorFlow as the backend. To do this, I am importing my model into TensorRT for real-time inference/deployment.

System specifications:
Server: RHEL 7.5
Language: Python 2.7
TensorRT: 5.0
CUDA toolkit: 10.0
cuDNN: 7.3

To do this, I converted the Keras model file to a .pb (protobuf) file and then tried to convert that to .uff so I could score it with TensorRT. The conversion to .pb was successful, but when I tried to convert it to .uff, I got the following error:

raise UffException("Const node conversion requested, but node is not Const\n" + str(tf_node))
uff.model.exceptions.UffException: Const node conversion requested, but node is not Const
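For reference, the .pb → .uff step can be invoked with the convert-to-uff utility that ships with the TensorRT uff Python package. This is only a sketch of that step; the file paths and the output node name below are placeholders, not values taken from the model in this thread.

```shell
# Convert a frozen TensorFlow .pb to .uff.
# convert-to-uff is installed with the TensorRT "uff" Python package;
# "model.pb", "model.uff" and "softmax/Softmax" are placeholders.
convert-to-uff model.pb -o model.uff -O softmax/Softmax

# Roughly equivalent Python API call:
# import uff
# uff.from_tensorflow_frozen_model("model.pb", ["softmax/Softmax"],
#                                  output_filename="model.uff")
```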

Kindly suggest a solution for the same.

Thanks,
Ark

Hello, it would help us debug if you could provide a small repro containing the source and model you are trying to convert to UFF that exhibits this error.

Hey Ad, I have the same problem. When I try to convert a TensorFlow RNN model.pb to model.uff, the following error appears:

uff.model.exceptions.UffException: Const node conversion requested, but node is not Const
name: "bidirectional_1/while_1/BiasAdd_2/Enter"

# Assumed imports for the layers used below; img_w, img_h, l_cnn, l_lstm,
# drop and num_classes are defined elsewhere in the original script.
from keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                          Dropout, MaxPooling2D, Reshape, Dense,
                          Bidirectional, LSTM)
from keras import regularizers

input_shape = (img_w, img_h, 1)

# Make Network
inputs = Input(name='the_input', shape=input_shape, dtype='float32')

# Convolution layer (VGG)
inner = Conv2D(64, (3, 3), padding='same', name='conv1', kernel_initializer='he_normal', kernel_regularizer=regularizers.l2(l_cnn))(inputs)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = Dropout(drop)(inner)
inner = MaxPooling2D(pool_size=(2, 2), name='max1')(inner)

inner = Conv2D(128, (3, 3), padding='same', name='conv2', kernel_initializer='he_normal', kernel_regularizer=regularizers.l2(l_cnn))(inner)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = Dropout(drop)(inner)
inner = MaxPooling2D(pool_size=(2, 2), name='max2')(inner)

# inner = Conv2D(256, (3, 3), padding='same', name='conv3', kernel_initializer='he_normal')(inner)
# inner = BatchNormalization()(inner)
# inner = Activation('relu')(inner)
inner = Conv2D(256, (3, 3), padding='same', name='conv4', kernel_initializer='he_normal', kernel_regularizer=regularizers.l2(l_cnn))(inner)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = Dropout(drop)(inner)
inner = MaxPooling2D(pool_size=(1, 2), name='max3')(inner)

# inner = Conv2D(512, (3, 3), padding='same', name='conv5', kernel_initializer='he_normal')(inner)
inner = Conv2D(256, (3, 3), padding='same', name='conv5', kernel_initializer='he_normal', kernel_regularizer=regularizers.l2(l_cnn))(inner)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = Dropout(drop)(inner)
# inner = Conv2D(512, (3, 3), padding='same', name='conv6')(inner)
# inner = BatchNormalization()(inner)
# inner = Activation('relu')(inner)
inner = MaxPooling2D(pool_size=(1, 2), name='max4')(inner)

inner = Conv2D(512, (2, 2), padding='same', kernel_initializer='he_normal', name='con7', kernel_regularizer=regularizers.l2(l_cnn))(inner)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = Dropout(drop)(inner)
print(inner.shape)

# CNN to RNN
inner = Reshape(target_shape=(27, 1024), name='reshape')(inner)
inner = Dense(64, activation='relu', kernel_initializer='he_normal', name='dense1')(inner)
inner = Dropout(drop)(inner) 
# RNN layer

lstm = Bidirectional(LSTM(512, return_sequences=True, kernel_initializer='he_normal', name='lstm1', dropout=drop, recurrent_dropout=drop, recurrent_regularizer=regularizers.l2(l_lstm)))(inner)

# transforms RNN output to character activations:
inner = Dense(num_classes, kernel_initializer='he_normal', name='dense2')(lstm)
y_pred = Activation('softmax', name='softmax')(inner)
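For completeness, here is a minimal sketch of the Keras → .pb freeze step that precedes the UFF conversion. It assumes a TF 1.x-style graph/session (written against tf.compat.v1 so it also parses under TF 2.x); the output path is a placeholder and the helper name is my own, not part of any API.

```python
# Sketch: freeze the current session graph of a Keras model to a .pb file,
# assuming TF 1.x-style graph mode. "model.pb" is a placeholder path.
import tensorflow as tf

def freeze_keras_model(model, pb_path="model.pb"):
    """Convert the variables of the session graph backing `model` to
    constants and serialize the frozen GraphDef to pb_path."""
    tf1 = tf.compat.v1
    sess = tf1.keras.backend.get_session()
    # The graph output names are what you later pass to the UFF converter
    # as the output nodes.
    output_names = [out.op.name for out in model.outputs]
    frozen = tf1.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_names)
    with tf1.gfile.GFile(pb_path, "wb") as f:
        f.write(frozen.SerializeToString())
    return output_names
```

The returned output node names (e.g. the softmax op of the model above) are the ones the UFF converter needs.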