Unexpected inference input with Triton Inference Server 20.01

While using NVIDIA driver 440 and the Triton Inference Server Docker image, release 20.01, I get an error about the model's input when the server tries to load it:

```
E0819 12:32:33.427064 1 model_repository_manager.cc:832] failed to load '023426' version 1: Invalid argument: unexpected inference input 'input:0', allowed inputs are: batch_size, phase
error: creating server: INTERNAL - failed to load all models
```

The network runs fine under TensorFlow 1.15, which is the version this container supports. Any ideas on how to proceed?
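For context, the error is raised at model load time, when Triton checks the inputs declared in the model's `config.pbtxt` against the inputs the TF graph actually exposes; here the config (or the graph naming) refers to `input:0`, while the model only exposes `batch_size` and `phase`. A hedged sketch of the relevant `config.pbtxt` section is below — the name, datatype, and dims are placeholders I made up, not values from the actual model:

```protobuf
# Hypothetical excerpt from config.pbtxt for model '023426'.
# The declared input name must match a tensor the exported TF graph
# actually exposes; verify the real names against your saved graph
# (e.g. with TensorFlow's saved_model_cli or by inspecting the GraphDef).
input [
  {
    name: "input"           # placeholder -- substitute the graph's real input tensor name
    data_type: TYPE_FP32    # placeholder dtype
    dims: [ 224, 224, 3 ]   # placeholder shape
  }
]
```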