Inference on custom model

I'm using the Clara Train v3 Docker image and the custom model code provided in the docs ("Bring your own components"). Training works fine, but when I try to run inference from the exported model, I get the following error:

KeyError: "The name 'NV_IS_TRAINING:0' refers to a Tensor which does not exist. The operation, 'NV_IS_TRAINING', does not exist in the graph."

I've run inference on existing models and those work fine, so I'm guessing something needs to be included in the code when writing a custom model for inference?

Hello Aravind,

Thank you for your interest in Clara Train, and sorry for the trouble. Unfortunately this is a "bug" in our Clara Train code: it assumes the existence of this TF placeholder in the graph, since it needs to feed a value to it (True or False, indicating whether the current mode is training or not). This will be fixed in the upcoming release of Clara Train in the coming weeks. For now, one way to work around the problem is to create and connect this TF placeholder in your BYOC network.
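A minimal sketch of that workaround, based on the error message above (the placeholder name `NV_IS_TRAINING` is taken from the error; the layer choices and function names here are illustrative assumptions, not Clara's actual BYOC template):

```python
import tensorflow as tf

# Clara Train v3 uses TF 1.x-style graph mode; tf.compat.v1 keeps this
# runnable under TF 2.x as well.
tf1 = tf.compat.v1
tf1.disable_eager_execution()

def build_network(inputs):
    # Create the boolean placeholder that Clara's inference code expects
    # to find and feed. The name must be exactly "NV_IS_TRAINING" so the
    # tensor "NV_IS_TRAINING:0" exists in the exported graph.
    is_training = tf1.placeholder(tf.bool, shape=(), name="NV_IS_TRAINING")

    # Connect it to the network, e.g. as the `training` flag of
    # batch-norm (or dropout) layers, so it is not pruned from the graph.
    x = tf1.layers.dense(inputs, units=16)
    x = tf1.layers.batch_normalization(x, training=is_training)
    return x
```

With the placeholder wired into a layer like this, it survives graph export and the inference code can feed it as it does for the built-in models.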


Makes sense. Thanks for the quick response!