ERROR: UFFParser: Validator error: localresponsenorm0: Unsupported operation _LRN

Hi,

I am currently working on porting a TensorFlow neural network (a modified GoogLeNet) to a Jetson TX2.

However, I have a problem when parsing the *.uff file. The UFF parser returns this error:

ERROR: UFFParser: Validator error: localresponsenorm0: Unsupported operation _LRN

I get this error with this code:

auto parser = nvuffparser::createUffParser();
// nvinfer1::DimsCHW expects (channels, height, width)
parser->registerInput(inputNode, nvinfer1::DimsCHW(nbChannel, imageHeight, imageWidth));
parser->registerOutput(outputNode);

for the instantiation of the UFF parser, and:

nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
nvinfer1::INetworkDefinition* networkDefinition = builder->createNetwork();

// Populate the network definition from the UFF file.
if(!parser->parse(uffFile, *networkDefinition, nvinfer1::DataType::kFLOAT)){
    logger->log(nvinfer1::ILogger::Severity::kERROR, "Failed to parse");
}
builder->setMaxBatchSize(this->maxBatchSize);
builder->setMaxWorkspaceSize(MAX_WORKSPACE);

nvinfer1::ICudaEngine* cudaEngine = builder->buildCudaEngine(*networkDefinition);
if(!cudaEngine){
    logger->log(nvinfer1::ILogger::Severity::kERROR, "Unable to create engine");
}

networkDefinition->destroy();
builder->destroy();
return cudaEngine;

for parsing the file and building the engine.
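
For reference, here is a minimal sketch of serializing the resulting engine to a plan file once the build succeeds, so it does not have to be rebuilt from the UFF file on every run (the file name is just an example, not from my actual code):

// Sketch only: persist the built engine as a serialized plan.
// Requires <fstream> for std::ofstream.
nvinfer1::IHostMemory* serializedEngine = cudaEngine->serialize();
std::ofstream planFile("modified_googlenet.plan", std::ios::binary);
planFile.write(static_cast<const char*>(serializedEngine->data()),
               serializedEngine->size());
serializedEngine->destroy();
parser->destroy();   // the UFF parser can also be released once the engine exists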

My code runs on Ubuntu 16.04, and this is the output showing my TensorRT version:

nicolas@Swann:~$ dpkg -l | grep TensorRT
ii  libnvinfer-dev                                         4.0.1-1+cuda9.0                              amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                     4.0.1-1+cuda9.0                              amd64        TensorRT samples and documentation
ii  libnvinfer4                                            4.0.1-1+cuda9.0                              amd64        TensorRT runtime libraries
ii  python-libnvinfer                                      4.0.1-1+cuda9.0                              amd64        Python bindings for TensorRT
ii  python-libnvinfer-dev                                  4.0.1-1+cuda9.0                              amd64        Python development package for TensorRT
ii  python-libnvinfer-doc                                  4.0.1-1+cuda9.0                              amd64        Documention and samples of python bindings for TensorRT
ii  python3-libnvinfer                                     4.0.1-1+cuda9.0                              amd64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                                 4.0.1-1+cuda9.0                              amd64        Python 3 development package for TensorRT
ii  python3-libnvinfer-doc                                 4.0.1-1+cuda9.0                              amd64        Documention and samples of python bindings for TensorRT
ii  tensorrt                                               3.0.1-1+cuda9.0                              amd64        Meta package of TensorRT
ii  uff-converter-tf                                       4.0.1-1+cuda9.0                              amd64        UFF converter for TensorRT package

and

nicolas@Swann:~$ nvidia-smi 
Mon Jan 22 16:46:05 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 680     Off  | 00000000:01:00.0 N/A |                  N/A |
| 20%   35C    P8    N/A /  N/A |    282MiB /  1996MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+

From the documentation, it appears that LRN layers are not supported by TensorRT when importing from TensorFlow. Is that correct?

Is there a way to work around this problem?

PS: Freezing the TensorFlow model and converting it to a *.uff file both seem to have worked perfectly.

Thanks for the help.

Hi,

LRN is not an operation supported by the UFF parser.

For an unsupported operation, the current workaround is to use the Caffe framework instead.
We provide a Plugin API so that Caffe users can implement custom layers.
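
As a very rough illustration of that Plugin API (the class names, the example layer name, and the missing CUDA kernel are placeholders, not an official implementation), a custom layer hooked in through the Caffe parser looks roughly like this:

#include "NvInfer.h"
#include "NvCaffeParser.h"
#include <cstring>

// Sketch only: an IPlugin stub for a custom layer such as LRN.
class LRNPlugin : public nvinfer1::IPlugin
{
public:
    int getNbOutputs() const override { return 1; }

    nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs, int nbInputDims) override
    {
        return inputs[0];                       // LRN keeps the input shape
    }

    void configure(const nvinfer1::Dims* inputDims, int nbInputs,
                   const nvinfer1::Dims* outputDims, int nbOutputs, int maxBatchSize) override {}
    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        // Launch a custom cross-channel LRN kernel here (omitted in this sketch).
        return 0;
    }

    size_t getSerializationSize() override { return 0; }
    void serialize(void* buffer) override {}
};

// Sketch only: tells the Caffe parser which layers to hand over to the plugin.
class LRNPluginFactory : public nvcaffeparser1::IPluginFactory
{
public:
    bool isPlugin(const char* layerName) override
    {
        return std::strcmp(layerName, "localresponsenorm0") == 0;   // example layer name
    }

    nvinfer1::IPlugin* createPlugin(const char* layerName,
                                    const nvinfer1::Weights* weights, int nbWeights) override
    {
        return new LRNPlugin();                  // ownership handling simplified
    }
};

The factory is registered on the Caffe parser with setPluginFactory(&factory) before parsing the deploy file.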

Thanks.

Hi,

Thank you for your quick reply.

Do you know whether this feature will be released for TensorFlow? And if so, when (a matter of weeks, months, or years)?

Thanks again

Hi,

We can’t disclose any schedule.
Please keep an eye on our announcements.

Thanks.