Unlike a Pipeline Service definition, which can use
$(NVIDIA_CLARA_SERVICE_DATA_PATH) to reference
the /clara/model folder on the host, an operator definition in the Pipeline Definition file cannot reference such a common folder for now. We are sorry about that.
One workaround is as follows.
It assumes you have root access to the machine on which Clara Platform is running.
1. Create pipeline and job as usual (but without input image data)
With the returned PAYLOAD_ID, you can find where the payload folder is located on the host machine
(/clara/payloads/<PAYLOAD_ID> by default).
In the following example, we upload a payload with only the configs folder.
$ clara create jobs -n ai-test -p ead19e59043c4fa4b1e9b6dafeee040b -f input/
Payload uploaded successfully.
$ tree /clara/payloads/c0ee2b25b0b7400d90a44e0f45853059/
/clara/payloads/c0ee2b25b0b7400d90a44e0f45853059/
└── input
    ├── configs
    │   ├── config.txt
    │   └── metadata.txt
2. Bind input data into the payload folder
Let's assume the big input image data is available at
/ssd/data on the host. You can bind-mount (a symbolic link won't work, because the link's target path would not exist inside the container) that folder into the payload folder.
This example assumes the first operator loads its input image data from the
/input/data folder in the container:
sudo mkdir -p /clara/payloads/c0ee2b25b0b7400d90a44e0f45853059/input/data
sudo mount --bind /ssd/data /clara/payloads/c0ee2b25b0b7400d90a44e0f45853059/input/data
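Before starting the job, it can be worth confirming that the bind mount is actually in place (a forgotten or failed `mount --bind` is easy to miss). A small sketch, using the hypothetical payload path from the example above and `mountpoint` from util-linux:

```shell
# Hypothetical payload directory from the example above.
PAYLOAD_DIR=/clara/payloads/c0ee2b25b0b7400d90a44e0f45853059

# check_bind_mount: succeed only if the path is an active mount point.
# A plain directory or a symbolic link fails this check.
check_bind_mount() {
    mountpoint -q "$1"
}

if check_bind_mount "$PAYLOAD_DIR/input/data"; then
    echo "bind mount active"
else
    echo "WARNING: $PAYLOAD_DIR/input/data is not a mount point" >&2
fi
```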
3. Start job
clara start job -j 2ec2af3c08e5459ea77363dbd64e40b5
4. Unbind folder
Don’t forget to unbind the folder once the job has finished.
sudo umount /clara/payloads/c0ee2b25b0b7400d90a44e0f45853059/input/data
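Steps 2–4 above can be wrapped into a few reusable shell functions. A sketch, where the job ID, payload ID, and data path are the hypothetical values from the example, and DRY_RUN=1 only prints the commands so you can review them before running for real:

```shell
# Hypothetical IDs/paths from the example above.
PAYLOAD_ID=c0ee2b25b0b7400d90a44e0f45853059
JOB_ID=2ec2af3c08e5459ea77363dbd64e40b5
DATA_SRC=/ssd/data
MOUNT_POINT=/clara/payloads/$PAYLOAD_ID/input/data

# run: execute a command, or just print it when DRY_RUN=1.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# Step 2: bind the host data folder into the payload folder.
bind_input() {
    run sudo mkdir -p "$MOUNT_POINT"
    run sudo mount --bind "$DATA_SRC" "$MOUNT_POINT"
}

# Step 3: start the job.
start_job() {
    run clara start job -j "$JOB_ID"
}

# Step 4: unbind; call this only once the job has finished.
unbind_input() {
    run sudo umount "$MOUNT_POINT"
}

DRY_RUN=1   # preview only; remove this line to actually run the commands
bind_input
start_job
unbind_input
```

Note that the unbind must not happen until the job has actually finished, so in a real run you would call unbind_input separately rather than immediately after start_job.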