Op type not registered 'TRTEngineOp' AIAA TF Engine

Hi, during prediction (invoked through the Clara AIAA plugin for 3D Slicer), I get the following error:

[2020-10-13 06:47:19.788971] [pid 1529:tid 140500124292864] [AIAA_ERROR] (nvmidl.apps.aas.www.api.api_v1:handle_error) - Op type not registered 'TRTEngineOp' in binary running on d2e6ea89ae4b. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
[2020-10-13 06:47:19.788983] [pid 1529:tid 140500124292864] Traceback (most recent call last):
[2020-10-13 06:47:19.788984] [pid 1529:tid 140500124292864]   File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1950, in full_dispatch_request
[2020-10-13 06:47:19.788985] [pid 1529:tid 140500124292864]     rv = self.dispatch_request()
[2020-10-13 06:47:19.788986] [pid 1529:tid 140500124292864]   File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1936, in dispatch_request
[2020-10-13 06:47:19.788987] [pid 1529:tid 140500124292864]     return self.view_functions[rule.endpoint](**req.view_args)
[2020-10-13 06:47:19.788988] [pid 1529:tid 140500124292864]   File "apps/aas/www/api/api_v1.py", line 357, in api_v1_inference
[2020-10-13 06:47:19.788989] [pid 1529:tid 140500124292864]   File "apps/aas/www/api/api_v1.py", line 263, in run_inference
[2020-10-13 06:47:19.788989] [pid 1529:tid 140500124292864]   File "apps/aas/www/api/api_v1.py", line 182, in run_infer
[2020-10-13 06:47:19.788990] [pid 1529:tid 140500124292864]   File "apps/aas/actions/inference_engine.py", line 59, in run
[2020-10-13 06:47:19.788991] [pid 1529:tid 140500124292864]   File "apps/aas/actions/inference_engine.py", line 153, in _run_inference
[2020-10-13 06:47:19.788992] [pid 1529:tid 140500124292864]   File "apps/aas/inference/tf_inference.py", line 65, in inference
[2020-10-13 06:47:19.788993] [pid 1529:tid 140500124292864]   File "apps/aas/inference/tf_inference.py", line 48, in _init_context
[2020-10-13 06:47:19.788994] [pid 1529:tid 140500124292864]   File "components/model_loaders/tf_model_loader.py", line 55, in load
[2020-10-13 06:47:19.788994] [pid 1529:tid 140500124292864]   File "components/model_loaders/frozen_graph_loader.py", line 38, in load_graph
[2020-10-13 06:47:19.788995] [pid 1529:tid 140500124292864]   File "utils/graph_utils.py", line 29, in load_frozen_graph
[2020-10-13 06:47:19.788996] [pid 1529:tid 140500124292864]   File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
[2020-10-13 06:47:19.788997] [pid 1529:tid 140500124292864]     return func(*args, **kwargs)
[2020-10-13 06:47:19.788998] [pid 1529:tid 140500124292864]   File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/importer.py", line 427, in import_graph_def
[2020-10-13 06:47:19.788999] [pid 1529:tid 140500124292864]     graph._c_graph, serialized, options)  # pylint: disable=protected-access
[2020-10-13 06:47:19.789001] [pid 1529:tid 140500124292864] tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'TRTEngineOp' in binary running on d2e6ea89ae4b. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
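For context (not part of the original log): `TRTEngineOp` is the op that TF-TRT inserts when a graph is optimized with TensorRT, so this error means the frozen graph contains TRT-optimized nodes that the running TensorFlow binary never registered. A quick way to confirm the graph was TRT-optimized, sketched here as an assumption-laden check (the file name is hypothetical), is to scan the frozen `.pb` for the op-name string, since op type names are serialized as plain text inside the protobuf:

```python
# Hedged diagnostic sketch (not from the original post): check whether a
# frozen GraphDef (.pb) contains TRTEngineOp nodes, without needing
# TensorFlow installed. Op type names appear as plain byte strings in the
# serialized protobuf, so a byte scan is a reasonable first-pass check.
from pathlib import Path

def graph_mentions_trt(pb_path: str) -> bool:
    """True if the serialized graph embeds the 'TRTEngineOp' op name."""
    return b"TRTEngineOp" in Path(pb_path).read_bytes()
```

If the scan matches, the model was exported through TF-TRT; in TF 1.x the usual remedies are to import the TensorRT integration (e.g. `import tensorflow.contrib.tensorrt`) before `import_graph_def` so the op gets registered, or to re-export the model without TRT optimization.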

Start up docker script:

export NVIDIA_RUNTIME="--runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0"
export OPTIONS="--shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864"
export SOURCE_DIR=/home/pawel/Desktop/clara/models_and_configs
export MOUNT_DIR=/aiaa-experiments
export LOCAL_PORT=8088
export REMOTE_PORT=80
export DOCKER_IMAGE="nvcr.io/nvidia/clara-train-sdk:v3.0"

docker run $NVIDIA_RUNTIME  $OPTIONS -it --rm \
   -p $LOCAL_PORT:$REMOTE_PORT \
   -v $SOURCE_DIR:$MOUNT_DIR \
   $DOCKER_IMAGE \
   /bin/bash

Start aas:

start_aas.sh --engine AIAA

Put model (succeeds):
curl -X PUT "http://127.0.0.1:8088/admin/model/supermodel" -F "config=@config.json;type=application/json" -F "data=@model.zip"

Config:

{
  "version": "2",
  "type": "segmentation",
  "labels": [
    "superlabel"
  ],
  "description": "My super cool segmentation model",
  "inference": {
    "image": "image",
    "scanning_window": true,
    "batch_size": 4,
    "name": "TFInference",
    "roi": [
      512,
      512,
      1
    ],
    "tf": {
      "input_nodes": {
        "image": "data"
      },
      "output_nodes": {
        "model": "sigmoid/Sigmoid"
      }
    }
  },
  "pre_transforms": [
    {
      "name": "LoadNifti",
      "args": {
        "fields": "image"
      }
    },
    {
      "name": "ConvertToChannelsFirst",
      "args": {
        "fields": "image"
      }
    },
    {
      "name": "ScaleByResolution",
      "args": {
        "fields": "image",
        "target_resolution": [
          1.0,
          1.0,
          1.0
        ]
      }
    },
    {
      "name": "ScaleIntensityRange",
      "args": {
        "fields": "image",
        "a_min": -128,
        "a_max": 384,
        "b_min": 0.0,
        "b_max": 1.0,
        "clip": true
      }
    }
  ]
}
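As a sanity check on a config like the one above, here is a small sketch that parses the JSON and flags obviously missing fields. The required-key list is an assumption inferred from this config, not the official AIAA schema:

```python
import json

# Hedged config sanity-check sketch. The required-key set below is an
# assumption based on the config shown in this post, not an authoritative
# AIAA schema.
REQUIRED_TOP_LEVEL = {"version", "type", "labels", "inference"}

def check_config(text: str) -> list:
    """Return a list of human-readable problems found in the config JSON."""
    problems = []
    cfg = json.loads(text)  # raises ValueError on malformed JSON
    for key in sorted(REQUIRED_TOP_LEVEL - cfg.keys()):
        problems.append(f"missing top-level key: {key}")
    tf_section = cfg.get("inference", {}).get("tf", {})
    if not tf_section.get("input_nodes"):
        problems.append("inference.tf.input_nodes is empty")
    if not tf_section.get("output_nodes"):
        problems.append("inference.tf.output_nodes is empty")
    return problems
```

Running it over the config above would return an empty list; a config missing `labels` or the TF node mappings would produce a short problem report instead of a server-side failure later.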

Response:

{
  "name": "aortaai3",
  "labels": ["aorta"],
  "description": "My super cool segmentation model",
  "version": "2",
  "type": "segmentation"
}

Do you have any ideas?

Hi,

Thank you for your interest in AIAA. The error suggests something is off with the model itself. Could you tell us whether the model was trained using Clara Train? Also, could you share the contents of model.zip?
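If it is easier than sharing the archive itself, a quick stdlib sketch to list what the archive contains without extracting it (the path is an assumption):

```python
import zipfile

def list_model_zip(path: str) -> list:
    """Return the file names inside a model archive without extracting it."""
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()
```

The file listing alone (e.g. whether it holds a `*.trt.pb` versus a plain frozen graph) is often enough to diagnose this kind of op-registration error.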