Hello!
After running the pipeline, no output is generated; the inference operator fails with CUDA_ERROR_NO_BINARY_FOR_GPU (full log below).
GPU - Tesla K80
Log:
kubectl logs chestxray-test-kzdvf-848878767 main
WARNING: Logging before flag parsing goes to stderr.
W0430 12:19:16.683844 140062917740352 module_wrapper.py:139] From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:101: The name tf.train.SessionRunHook is deprecated. Please use tf.estimator.SessionRunHook instead.
W0430 12:19:16.684168 140062917740352 module_wrapper.py:139] From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:135: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/clara/driver.py", line 174, in nvidia_clara_python_cpd_execute_callback
success = driver.execute_handler(payload)
File "/usr/local/lib/python3.6/dist-packages/clara/driver.py", line 124, in execute_handler
return self._execute_handler(self, payload)
File "app_base_inference/main.py", line 53, in execute
app = App(runtime_env=RuntimeEnv())
File "/app_base_inference/app.py", line 93, in __init__
self.setup()
File "/app_base_inference/app.py", line 159, in setup
self.pre_transforms = [self.build_component(t) for t in pre_transform_config]
File "/app_base_inference/app.py", line 159, in <listcomp>
self.pre_transforms = [self.build_component(t) for t in pre_transform_config]
File "/app_base_inference/app.py", line 460, in build_component
class_path = ComponentModuleNames().get_module_name(name_) + '.{}'.format(name_)
File "utils/compo_module_names.py", line 14, in __init__
File "utils/compo_module_names.py", line 25, in _create_classes_table
File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
{"event": {"category": "operator", "name":"processing_started", "level": "info", "timestamp": "20200430T121917.676Z"}, "message": "AI-base_inference"}
[PERF] AI-base_inference Start Time: 1588249157676
{"event": {"category": "operator", "name":"processing_started", "level": "info", "timestamp": "20200430T121917.705Z", "stage": "application setup"}, "message": "AI-base_inference Application setup"}
[PERF] AI-base_inference Application setup Start Time: 1588249157705
{"event": {"category": "operator", "name":"processing_ended", "level": "info", "timestamp": "20200430T121929.529Z", "elapsed_time": 11853}, "message": "AI-base_inference"}
[PERF] AI-base_inference End Time: 1588249169529
[PERF] AI-base_inference Elapsed Time (ms): 11853
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "components/transforms/transforms.py", line 3, in <module>
File "tlt2/src/components/transforms/libs/transforms.py", line 25, in <module>
File "tlt2/src/components/transforms/libs/cupyhelper.py", line 61, in __init__
File "cupy/cuda/function.pyx", line 178, in cupy.cuda.function.Module.load_file
File "cupy/cuda/function.pyx", line 182, in cupy.cuda.function.Module.load_file
File "cupy/cuda/driver.pyx", line 177, in cupy.cuda.driver.moduleLoad
File "cupy/cuda/driver.pyx", line 82, in cupy.cuda.driver.check_status
cupy.cuda.driver.CUDADriverError: CUDA_ERROR_NO_BINARY_FOR_GPU: no kernel image is available for execution on the device
[cpdriver] Pipeline driver `execute` callback returned error code (1). (1, is_err=True)
[cpdriver] Driver execution completed with 1 error(s). (-1, is_err=True)
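For reference, here is a minimal diagnostic sketch I can run inside the same container (assuming cupy imports cleanly there; the exact package version in the image may differ). It prints the device's compute capability and forces a kernel launch, which should hit the same CUDA_ERROR_NO_BINARY_FOR_GPU if the installed CuPy/CUDA build has no binary for the K80:

# Minimal diagnostic (assumed environment: the pipeline's inference container,
# which already has cupy installed). Prints GPU/driver details and forces a
# kernel launch; on a build with no kernel image for the K80 this should raise
# the same cupy.cuda.driver.CUDADriverError / CUDA_ERROR_NO_BINARY_FOR_GPU.
import cupy

device = cupy.cuda.Device(0)
print("Compute capability:", device.compute_capability)  # Tesla K80 reports '37'
print("CUDA runtime version:", cupy.cuda.runtime.runtimeGetVersion())
print("CUDA driver version:", cupy.cuda.runtime.driverGetVersion())

# Trivial kernel launch; this is where the failure should surface
# if no kernel image matches the device architecture.
x = cupy.arange(16, dtype=cupy.float32)
print("Kernel launch OK, sum =", float(x.sum()))

If it prints the compute capability but fails on the kernel launch, the CuPy/CUDA combination in the image presumably ships no binary or PTX for the K80's architecture.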