Chest X-Ray pipeline error


After running the pipeline, no output is generated.

GPU - Tesla K80


kubectl logs chestxray-test-kzdvf-848878767 main
WARNING: Logging before flag parsing goes to stderr.
W0430 12:19:16.683844 140062917740352] From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/ The name tf.train.SessionRunHook is deprecated. Please use tf.estimator.SessionRunHook instead.

W0430 12:19:16.684168 140062917740352] From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/ The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/clara/", line 174, in nvidia_clara_python_cpd_execute_callback
    success = driver.execute_handler(payload)
  File "/usr/local/lib/python3.6/dist-packages/clara/", line 124, in execute_handler
    return self._execute_handler(self, payload)
  File "app_base_inference/", line 53, in execute
    app = App(runtime_env=RuntimeEnv())
  File "/app_base_inference/", line 93, in __init__
  File "/app_base_inference/", line 159, in setup
    self.pre_transforms = [self.build_component(t) for t in pre_transform_config]
  File "/app_base_inference/", line 159, in <listcomp>
    self.pre_transforms = [self.build_component(t) for t in pre_transform_config]
  File "/app_base_inference/", line 460, in build_component
    class_path = ComponentModuleNames().get_module_name(name_) + '.{}'.format(name_)
  File "utils/", line 14, in __init__
  File "utils/", line 25, in _create_classes_table
  File "/usr/lib/python3.6/importlib/", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "components/transforms/", line 3, in <module>
  File "tlt2/src/components/transforms/libs/", line 25, in <module>
  File "tlt2/src/components/transforms/libs/", line 61, in __init__
  File "cupy/cuda/function.pyx", line 178, in cupy.cuda.function.Module.load_file
  File "cupy/cuda/function.pyx", line 182, in cupy.cuda.function.Module.load_file
  File "cupy/cuda/driver.pyx", line 177, in cupy.cuda.driver.moduleLoad
  File "cupy/cuda/driver.pyx", line 82, in cupy.cuda.driver.check_status
cupy.cuda.driver.CUDADriverError: CUDA_ERROR_NO_BINARY_FOR_GPU: no kernel image is available for execution on the device
{"event": {"category": "operator", "name":"processing_started", "level": "info", "timestamp": "20200430T121917.676Z"}, "message": "AI-base_inference"}
[PERF] AI-base_inference Start Time: 1588249157676
{"event": {"category": "operator", "name":"processing_started", "level": "info", "timestamp": "20200430T121917.705Z", "stage": "application setup"}, "message": "AI-base_inference Application setup"}
[PERF] AI-base_inference Application setup Start Time: 1588249157705
{"event": {"category": "operator", "name":"processing_ended", "level": "info", "timestamp": "20200430T121929.529Z", "elapsed_time": 11853}, "message": "AI-base_inference"}
[PERF] AI-base_inference End Time: 1588249169529
[PERF] AI-base_inference Elapsed Time (ms): 11853
[cpdriver] Pipeline driver `execute` callback returned error code (1). (1, is_err=True)
[cpdriver] Driver execution completed with 1 error(s). (-1, is_err=True)

Hello Cheivan,

Thank you for your interest in Clara Deploy, and sorry for the trouble. The issue is due to the hardware you're using: the `CUDA_ERROR_NO_BINARY_FOR_GPU` error means that the GPU kernels shipped in the container were not built for your GPU's compute capability. Clara Deploy only supports GPUs of the Pascal architecture or newer, while the K80 belongs to the older Kepler family. For your convenience, here's a link to the system requirements:
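As a quick sanity check before submitting a pipeline, you can compare your GPU's compute capability against the Pascal minimum (6.0). Here is a minimal sketch; the helper and the GPU table are illustrative assumptions, not part of Clara Deploy:

```python
# Pascal introduced compute capability 6.0; Clara Deploy requires Pascal or newer.
MIN_SUPPORTED_CC = (6, 0)

# A few well-known data-center GPUs and their compute capabilities, for illustration.
KNOWN_GPUS = {
    "Tesla K80": (3, 7),   # Kepler -- too old for Clara Deploy
    "Tesla P100": (6, 0),  # Pascal -- supported
    "Tesla V100": (7, 0),  # Volta  -- supported
    "Tesla T4": (7, 5),    # Turing -- supported
}

def is_supported(compute_capability):
    """Return True if the GPU's compute capability meets the Pascal minimum."""
    return compute_capability >= MIN_SUPPORTED_CC

# On a machine with CuPy installed and a GPU present, you could query the
# capability directly (returns a string such as "37" on a K80):
#   import cupy
#   cc = cupy.cuda.Device(0).compute_capability
#   is_supported((int(cc[:-1]), int(cc[-1])))
```

Running this against the table shows why the pipeline fails on the K80: its compute capability (3.7) falls below the Pascal minimum, so no compatible kernel image exists in the container.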