TensorRT: No conversion function registered for layer: BatchMatMul yet

Hi All,
I am new to TensorRT. I want to convert a TensorFlow UFF model to a TensorRT PLAN file on a Jetson TX2 using TensorRT 4.1. First I try to convert the TensorFlow frozen graph to a UFF model, which gives me the following warning:

Warning: No conversion function registered for layer: BatchMatMul yet.

Is it really true that TensorRT doesn’t have the BatchMatMul op implemented?

Here is the example tensorflow code I used:

import tensorflow as tf
from tensorflow.python.tools import freeze_graph
import numpy as np

batch_size, n, m, k = 10, 3, 3, 3

D = tf.Variable(tf.ones(shape=(batch_size, n, m)))
A = tf.placeholder(tf.float32, shape=(batch_size, n, m))
B = tf.placeholder(tf.float32, shape=(batch_size, m, k))
# Extra elementwise ops feeding the batched matmul
B = 2 * B
A = tf.multiply(D, A)
# Batched matrix multiply -- this is the node that becomes BatchMatMul
C = tf.matmul(A, B, name='outputnode')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.write_graph(sess.graph.as_graph_def(), '.',
                         './batchmatmul.pbtxt', as_text=True)
    saver = tf.train.Saver()
    saver.save(sess, './batchmatmul.ckpt')
    train_writer = tf.summary.FileWriter("./logs")
    train_writer.add_graph(sess.graph)
    # Freeze the variables into constants and write batchmatmul.pb
    freeze_graph.freeze_graph('./batchmatmul.pbtxt', "", False,
                              './batchmatmul.ckpt', "outputnode",
                              "save/restore_all", "save/Const:0",
                              './batchmatmul.pb', True, "")

import uff
import tensorflow as tf

# Load the frozen GraphDef produced above
frozen_graph = tf.GraphDef()
with open('./batchmatmul.pb', 'rb') as f:
    frozen_graph.ParseFromString(f.read())

uff.from_tensorflow(graphdef=frozen_graph,
                    output_filename="test.uff",
                    output_nodes=["outputnode"],
                    text=True)

Assuming that TensorRT doesn’t have a BatchMatMul op, should I write my own plugin in C++ to do the batch matrix multiplication?
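From the TensorRT samples I have seen, the Python side of that plugin route would look roughly like the sketch below (assuming the graphsurgeon utility that ships with the TensorRT package; "BatchMatMulPlugin" is a hypothetical op name that would have to match a C++ IPluginV2 implementation registered under the same name):

import graphsurgeon as gs
import uff

# Load the frozen graph and replace the unsupported node with a plugin node
dynamic_graph = gs.DynamicGraph('./batchmatmul.pb')
plugin_map = {
    # "BatchMatMulPlugin" is hypothetical; the actual batched-GEMM kernel
    # must be supplied as a C++ plugin registered under this op name
    "outputnode": gs.create_plugin_node(name="outputnode",
                                        op="BatchMatMulPlugin"),
}
dynamic_graph.collapse_namespaces(plugin_map)

uff.from_tensorflow(dynamic_graph.as_graph_def(),
                    output_nodes=["outputnode"],
                    output_filename="test.uff")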
Any help would be appreciated.
Thank you!

Hi All,

I am facing the same issue as well. Meanwhile, @Abhishek, did you manage to solve this issue at your end?

I found a recent NVIDIA doc (Accelerating Inference In TF-TRT User Guide :: NVIDIA Deep Learning Frameworks Documentation), which states that TF-TRT supports the BatchMatMul operation.
But somehow I still fail to generate the UFF file (same error as mentioned above).
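As I understand it, though, TF-TRT doesn’t go through UFF at all; it rewrites the frozen graph inside TensorFlow and falls back to native TF ops for anything TensorRT can’t handle, so its support matrix doesn’t apply to the UFF converter. A minimal sketch of that path, assuming the contrib API from TensorFlow 1.12:

import tensorflow.contrib.tensorrt as trt

# frozen_graph is the GraphDef loaded from batchmatmul.pb; ops without
# TensorRT support are left in the graph as ordinary TensorFlow ops
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["outputnode"],
    max_batch_size=10,
    max_workspace_size_bytes=1 << 25,
    precision_mode="FP32")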

Is there an updated UFF version that I would need to install?
Currently I am using UFF v0.6.3

Thanks

Hi Shaumik,
I think I was able to generate the UFF file for the example code in the original question. My problem was more related to generating the native TensorRT PLAN engine, and in the end I was not able to do it. I would suggest you upgrade to TensorRT 5 (if you are not already on v5) and try again.

Hi Abhishek,

Technically, it does generate the UFF file, but it gives the warning message you mentioned earlier:

Warning: No conversion function registered for layer: BatchMatMul yet.

The complete log message is as follows:

NOTE: UFF has been tested with TensorFlow 1.12.0. Other versions are not guaranteed to work
UFF Version 0.6.3
=== Automatically deduced input nodes ===
[name: "Placeholder"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 10
      }
      dim {
        size: 3
      }
      dim {
        size: 3
      }
    }
  }
}
, name: "Placeholder_1"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 10
      }
      dim {
        size: 3
      }
      dim {
        size: 3
      }
    }
  }
}
]
=========================================

Using output node outputnode
Converting to UFF graph
Warning: No conversion function registered for layer: BatchMatMul yet.
Converting outputnode as custom op: BatchMatMul
No. nodes: 8
UFF Output written to test.uff

And as far as the TensorRT version is concerned, I do have v5 already, but this seems to be independent of TensorRT.
I am using TensorFlow v1.12.0 and the UFF v0.6.3 provided with the TensorRT v5 package.

I would be really curious if you could generate the UFF file without this warning message.

Hi Shaumik,
Unfortunately I don’t have access to any Jetson device right now, so I am not able to test it again. But the last time I ran the UFF generation I was getting the same warning as you.
I also think I was able to run the UFF file. For me the TensorRT PLAN was more important, because a native TensorRT engine gives a bigger speedup and consumes less memory than TF-TRT.
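For reference, the PLAN-building step I was attempting looked roughly like the sketch below (TensorRT 5 Python API; the input name and shape are taken from your UFF log, and whether register_input wants the batch dimension included is something to double-check). parse() is where the unsupported-op validator error shows up:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    parser.register_input("Placeholder", (10, 3, 3))
    parser.register_output("outputnode")
    parser.parse("test.uff", network)  # fails here if a layer has no converter
    builder.max_workspace_size = 1 << 28
    engine = builder.build_cuda_engine(network)
    with open("batchmatmul.plan", "wb") as f:
        f.write(engine.serialize())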

Hi @Shaumik, @abhishek,

It seems that, according to the TensorRT documentation here, the BatchMatMul operation isn’t supported after all in TensorRT.

When converting my frozen TensorFlow protocol buffer (.pb) file to .uff, I get the warning

Warning: No conversion function registered for layer: BatchMatMulV2 yet.
Converting ssr_function/MatMul as custom op: BatchMatMulV2

Although a .uff file is generated, when converting it to a .plan file, I get the following error:

[Error] UffParser: Validator error: ssr_function/MatMul_1: Unsupported operation _BatchMatMulV2
[Error] Failure while parsing UFF file

I tried to get around BatchMatMul by using tf.keras.backend.dot(), but that introduces an Unpack (tf.unstack) operation which isn’t supported by TensorRT either, and which generates similar warnings:

Warning: No conversion function registered for layer: Unpack yet.
Converting ssr_function/unstack as custom op: Unpack

and

[Error] UffParser: Validator error: ssr_function/unstack_4: Unsupported operation _Unpack
[Error] Failure while parsing UFF file
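The workaround I am considering now is to emulate the batched matmul with elementwise ops only, since Mul and Sum do have UFF converters. This is an untested sketch, and it assumes the parser accepts the broadcasted multiply it produces:

import tensorflow as tf

def batch_matmul_emulated(a, b):
    # C[b,i,k] = sum_j A[b,i,j] * B[b,j,k], written without BatchMatMul.
    a = tf.expand_dims(a, axis=3)        # (batch, n, m, 1)
    b = tf.expand_dims(b, axis=1)        # (batch, 1, m, k)
    # If ExpandDims doesn't convert either, tf.reshape to the same
    # shapes should work as a substitute.
    return tf.reduce_sum(a * b, axis=2)  # (batch, n, k)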