Hi supporters,
I am facing this problem even with GPU fallback enabled.
ERROR: UFFParser: Validator error: up_sampling2d_3/ResizeBilinear: Unsupported operation _ResizeBilinear
ERROR: demo tensorrt: Fail to parse
ERROR: demo tensorrt: Model load failed
I created and trained the model using Keras, then converted it h5 => pb => uff. I wrote a C++ app to load the UFF file but hit the error above. I enabled GPU fallback mode, so I expected the unsupported layer to fall back to the GPU, but it does not.
So am I doing something wrong?
Thanks.
Hi,
This error indicates that this layer (_ResizeBilinear) is not supported by either DLA or TensorRT.
Please note that not all TensorFlow operations are supported in TensorRT.
You can find our support matrix here:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html
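Also note that GPU fallback only moves layers that DLA cannot run back onto the GPU inside TensorRT; it does not fall back to TensorFlow, and it cannot help here because the UFF parser rejects the op before any device placement happens. For reference, this is roughly how DLA with GPU fallback is configured on the builder (a minimal sketch assuming the TensorRT 5.x-era IBuilder/UFF C++ API; the tensor names and sizes are placeholders):

```cpp
// Minimal sketch, assuming the TensorRT 5.x-era IBuilder/UFF C++ API
// ("input", "output", and 3x256x256 are placeholders for your model).
#include <iostream>
#include <cuda_runtime_api.h>
#include "NvInfer.h"
#include "NvUffParser.h"

class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

nvinfer1::ICudaEngine* buildEngine(const char* uffPath)
{
    auto builder = nvinfer1::createInferBuilder(gLogger);
    auto network = builder->createNetwork();
    auto parser  = nvuffparser::createUffParser();

    parser->registerInput("input", nvinfer1::Dims3(3, 256, 256), nvuffparser::UffInputOrder::kNCHW);
    parser->registerOutput("output");

    // The _ResizeBilinear failure happens here: the UFF parser rejects the op
    // before any device placement, so fallback settings cannot rescue it.
    if (!parser->parse(uffPath, *network, nvinfer1::DataType::kFLOAT))
        return nullptr;

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28);
    builder->setFp16Mode(true);                                 // DLA needs FP16 or INT8
    builder->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);  // try DLA first
    builder->setDLACore(0);
    builder->allowGPUFallback(true);                            // DLA-unsupported layers run on the GPU

    return builder->buildCudaEngine(*network);
}
```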
Maybe a deconvolution (transposed convolution) layer can meet your requirement.
Thanks.
Hi,
Thanks for your reply. But can we fall back to TensorFlow for the unsupported layer?
Hi,
But I don't think that matches my requirement. I need to use C++/TensorRT/DLA, and if a layer is not supported by DLA, it should fall back to TensorFlow on the GPU.
Thank you for your reply.
Hi,
Another alternative is to port the _ResizeBilinear implementation into TensorRT as a plugin layer.
You can find the CUDA implementation in the TensorFlow GitHub repository and wrap it as a custom layer in TensorRT.
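To give an idea of the compute you would be wrapping, below is a hypothetical standalone CUDA kernel for bilinear upsampling of NCHW float data. It is not TensorFlow's actual code, and the sampling math may differ slightly from TensorFlow's ResizeBilinear, so please compare against the TF source if you need matching results:

```cpp
// Hypothetical bilinear-upsample kernel for NCHW float tensors (illustrative only).
// Launch with one thread per output element; "planes" can be batch * channels.
__global__ void resizeBilinearKernel(const float* in, float* out,
                                     int planes, int inH, int inW,
                                     int outH, int outW)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int total = planes * outH * outW;
    if (idx >= total) return;

    int ox = idx % outW;
    int oy = (idx / outW) % outH;
    int p  = idx / (outW * outH);

    // Map the output pixel back into the input grid.
    float iy = oy * (static_cast<float>(inH) / outH);
    float ix = ox * (static_cast<float>(inW) / outW);

    int y0 = static_cast<int>(iy);
    int x0 = static_cast<int>(ix);
    int y1 = min(y0 + 1, inH - 1);
    int x1 = min(x0 + 1, inW - 1);

    float dy = iy - y0;
    float dx = ix - x0;

    const float* plane = in + p * inH * inW;
    float top    = plane[y0 * inW + x0] * (1.0f - dx) + plane[y0 * inW + x1] * dx;
    float bottom = plane[y1 * inW + x0] * (1.0f - dx) + plane[y1 * inW + x1] * dx;

    out[idx] = top * (1.0f - dy) + bottom * dy;
}
```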
Thanks.
Hi,
Thank you. Maybe I will use Deconv.
I have another question. The input of TensorRT is NCHW, so what about the output? Is it NCHW or NHWC?
Thanks.
Hi,
The output format is determined by the input format and the executed layers.
In general, it is an NxK vector, where K indicates the number of different classes.
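If you want to check the exact shape of your engine's output, you can query the output binding, for example (a small sketch; "output" is a placeholder for your real output node name):

```cpp
// Sketch: print the dimensions reported for a named output binding.
#include <iostream>
#include "NvInfer.h"

void printOutputDims(const nvinfer1::ICudaEngine& engine)
{
    int index = engine.getBindingIndex("output");    // placeholder name
    if (index < 0)
        return;
    nvinfer1::Dims dims = engine.getBindingDimensions(index);
    std::cout << "output dims:";
    for (int i = 0; i < dims.nbDims; ++i)
        std::cout << " " << dims.d[i];
    std::cout << std::endl;                          // e.g. "output dims: 1000" for a classifier
}
```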
Thanks.
Do we have a sample implementation for adding a RESIZE plugin? Modifying it based on the sample provided in sampleUffSSD doesn't seem to work. It would be great if there were pointers on how to replicate it for a specific operation.
Thanks.
Hi,
You can find the RESIZE implementation in the TensorFlow GitHub repository.
Then follow our sample to add it as a plugin:
Sample Support Guide :: NVIDIA Deep Learning TensorRT Documentation
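To make the sampleUffSSD approach more concrete, here is a rough skeleton of what the plugin class itself needs to provide (a sketch assuming the TensorRT 5.x/6.x IPluginV2 API; the class name "ResizeBilinearPlugin", the fixed 2x factor, and the stubbed enqueue/serialization are illustrative assumptions, not a complete implementation):

```cpp
// Rough IPluginV2 skeleton (TensorRT 5.x/6.x-era API) for a fixed 2x bilinear resize.
// Illustrative only; a real plugin also needs an IPluginCreator registered
// via REGISTER_TENSORRT_PLUGIN, as in sampleUffSSD.
#include <cstring>
#include <string>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

using namespace nvinfer1;

class ResizeBilinearPlugin : public IPluginV2
{
public:
    ResizeBilinearPlugin() {}

    int getNbOutputs() const override { return 1; }

    // Input dims are CHW (implicit batch); output keeps C and doubles H and W.
    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) override
    {
        return Dims3(inputs[0].d[0], inputs[0].d[1] * 2, inputs[0].d[2] * 2);
    }

    bool supportsFormat(DataType type, PluginFormat format) const override
    {
        return type == DataType::kFLOAT && format == PluginFormat::kNCHW;
    }

    void configureWithFormat(const Dims* inputDims, int nbInputs, const Dims* outputDims,
                             int nbOutputs, DataType type, PluginFormat format,
                             int maxBatchSize) override
    {
        mInputDims = inputDims[0];   // remember C, H, W for enqueue
    }

    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        // Launch a bilinear-resize kernel (e.g. the sketch earlier in this thread)
        // over batchSize * C planes, reading inputs[0] and writing outputs[0].
        return 0;
    }

    size_t getSerializationSize() const override { return sizeof(Dims); }
    void serialize(void* buffer) const override { std::memcpy(buffer, &mInputDims, sizeof(Dims)); }

    const char* getPluginType() const override { return "ResizeBilinear_TRT"; }
    const char* getPluginVersion() const override { return "1"; }
    void destroy() override { delete this; }
    IPluginV2* clone() const override { return new ResizeBilinearPlugin(*this); }
    void setPluginNamespace(const char* ns) override { mNamespace = ns; }
    const char* getPluginNamespace() const override { return mNamespace.c_str(); }

private:
    Dims mInputDims{};
    std::string mNamespace;
};
```

On the UFF side, the _ResizeBilinear node in the frozen graph also has to be mapped to the plugin's registered type (for example with graphsurgeon before the uff conversion), which is the part sampleUffSSD's preprocessing config handles for its own custom ops.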
Thanks.