2D segmentation model in Clara Deploy

Hello Everyone,

We are trying to use our 2D model in Clara Deploy. Currently we have our model, named “seg_model”, in SavedModel format under the /clara-io/models and sampleDist/models folders, alongside the default v_net and liver_Segmentation models.

Here is what I did:

  1. Created a new folder (app_seg_model) under the “Clara-reference-app” folder (the folder where the default apps such as app_livertumor, app_v_net, etc. are provided).

  2. I reused the same files from the other apps but made some modifications to app.py, main.py, Dockerfile and run_seg_model_docker.sh (our model name is seg_model). The only changes were the model name, the input format (.mhd to .png), the app names, etc.; otherwise the files remained almost the same. Please note that our data is 2D, so I updated the target shape as well.

  3. Created a Helm chart for the user-defined container “user-ai” as described in the docs, and I am able to see my “user-ai” container details in values.yaml.

  4. Then I created a new workflow “user-workflow” by following the docs and assigned it the clara-ae-title “CT_AI”, a random UID and a destination in the dicom-server-config file.

I realize steps 3 and 4 may not be necessary just to execute “run_seg_model_docker.sh”, but I created them because we are interested in connecting to a PACS system such as ORTHANC.

Issues encountered:

  1. When I execute “run_seg_model_docker.sh”, I encounter the error below:
From cffi callback <function nvidia_clara_python_wfd_execute_callback at 0x7fcf0a6a4620>:
Traceback (most recent call last):
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.5/site-packages/clara/__init__.py", line 74, in nvidia_clara_python_wfd_execute_callback
    _callbacks["execute_cb"](payload_obj)
  File "app_seg_model/main.py", line 63, in execute
    app.inference(payload, args)
  File "/app/app_seg_model/app.py", line 42, in inference
    inference_results = self.inference_context.run({'input': (feed_data,)}, {'output': (1,)}, 1)
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.5/site-packages/tensorrtserver/api/__init__.py", line 830, in run
    self._prepare_request(inputs, outputs, batch_size, contiguous_input)
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.5/site-packages/tensorrtserver/api/__init__.py", line 616, in _prepare_request
    _crequest_infer_ctx_options_add_raw(self._ctx, options, output_name)))
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.5/site-packages/tensorrtserver/api/__init__.py", line 182, in _raise_if_error
    raise ex
tensorrtserver.api.InferenceServerException: [ 0] unknown output 'output' for 'seg_model'
  2. I hope Clara Deploy supports 2D models and common image file formats such as PNG, JPEG, etc.

  3. I guess the issue arises because we are trying to run a 2D segmentation model and some parameters have to be changed. Could you let us know which parameters I should be changing? I mean my output is 1, as I am expecting a raw file (a PNG file) as output. In addition, I can see an empty PNG file in the app’s output folder, yet I still encounter this error. Could you share a tutorial or procedure that would help us find the places where updates have to be made?

  4. What about data preprocessing? Should our input data be exactly the shape specified in the config.pbtxt file? In production we may receive images of different shapes; how do we handle this? Does Clara perform any standardization on input images?

  5. Can a single PNG file be converted to DICOM? That is, can just one PNG file of a skin lesion be converted to a DICOM file?

Hello,

Currently what I am trying to do is execute “run_seg_model_docker.sh” successfully. My model’s config.pbtxt has input dims of 224,224,3, because that is how our model was built and trained.
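For reference, our config.pbtxt currently looks roughly like this. The input/output names are the ones used in our run() call; the output dims shown are an assumption on my part and can be verified against the SavedModel with TensorFlow's `saved_model_cli show --dir <model_dir> --all`:

```
name: "seg_model"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "input_image"
    data_type: TYPE_FP32
    dims: [ 224, 224, 3 ]
  }
]
output [
  {
    name: "activation_81/Sigmoid:0"
    data_type: TYPE_FP32
    dims: [ 224, 224, 1 ]
  }
]
```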

I tried passing an input file in .mha format but then encountered the error “input shape doesn’t match with the input image seg_model”; the full output is below. Please note that I added some print statements for debugging.

payload is  <clara.payload.Payload object at 0x7f9adc664898>
payload inputs is  (<clara.stream.Stream object at 0x7f9a59c71710>,)
IS by selva is  (<clara.stream.Stream object at 0x7f9a59c71710>,)
IP by selva is  /app/input/seg_image.mha
IK by selva is  1
Input file: /app/input/seg_image.mha
feed_data shape is  (1, 224, 224, 3, 1)
From cffi callback <function nvidia_clara_python_wfd_execute_callback at 0x7f9adc01d6a8>:
Traceback (most recent call last):
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.5/site-packages/clara/__init__.py", line 74, in nvidia_clara_python_wfd_execute_callback
    _callbacks["execute_cb"](payload_obj)
  File "app_seg_model/main.py", line 65, in execute
    app.inference(payload, args)
  File "/app/app_seg_model/app.py", line 52, in inference
    inference_results = self.inference_context.run({'input_image': (feed_data,)}, {'activation_81/Sigmoid:0': (1,)}, 1)
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.5/site-packages/tensorrtserver/api/__init__.py", line 833, in run
    self._last_request_id = _raise_if_error(c_void_p(_crequest_infer_ctx_run(self._ctx)))
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.5/site-packages/tensorrtserver/api/__init__.py", line 182, in _raise_if_error
    raise ex
tensorrtserver.api.InferenceServerException: [inference:0 0] unexpected shape for input 'input_image' for model 'seg_model'

But if I modify my config.pbtxt, the model may not become ready; it just keeps retrying and then fails. I extended the wait duration for the model as well.

So, how do I prevent feed_data from getting this shape, so that it matches my model’s input shape?
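One thing I considered (my assumption: the reference pipeline adds a batch axis in front and keeps a trailing singleton axis meant for 3D volumes, while config.pbtxt declares [224, 224, 3] per sample) is squeezing the singleton axes before calling run():

```python
import numpy as np

# feed_data as the app currently produces it: (1, 224, 224, 3, 1),
# i.e. a leading batch axis plus a trailing singleton from the 3D pipeline.
feed_data = np.zeros((1, 224, 224, 3, 1), dtype=np.float32)  # placeholder data

# Drop all size-1 axes so each sample matches the [224, 224, 3] dims in
# config.pbtxt; the inference client adds the batch dimension itself.
sample = np.squeeze(feed_data)
print(sample.shape)  # (224, 224, 3)
```

But I am not sure whether this is the intended fix, or whether a transform or config.pbtxt change is preferred.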