Is it possible to construct a distributed application in which the fragments contain operators written in different languages? E.g., Fragment 1 (C++ operators) → Fragment 2 (Python operators) → Fragment 3 (C++ operators)
The case of Fragment 1 in C++ and Fragment 2 in Python should work, as long as the Python-based fragment graph composition (Fragment 2) matches what the C++-based distributed app composition expects (the C++ app defines both Fragment 1 and Fragment 2, because the app driver collects the fragment graph and connection information). We can extrapolate this to your use case of Fragment 1 in C++, Fragment 2 in Python, and Fragment 3 in C++.
Due to a current SDK limitation, if we want to launch fragment1 in C++ and fragment2 in Python, the C++ application needs to contain an implementation of fragment2 and its operators (at minimum, a definition of fragment2 with the same fragment id and a stub implementation of fragment2's compose() method). In the future, once this part of the SDK is improved, the C++ application's compose() will be able to implement only fragment1 and use an empty Fragment object for fragment2 in the Application::compose() method.
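For illustration, here is a minimal sketch of what that C++ application could look like, modeled on the structure of the video_replayer_distributed example (the operator, port, and class names are assumptions for this sketch, not the example's exact source):

```cpp
#include <holoscan/holoscan.hpp>
#include <holoscan/operators/video_stream_replayer/video_stream_replayer.hpp>

// Fragment 1 is fully implemented in C++.
class Fragment1 : public holoscan::Fragment {
 public:
  void compose() override {
    using namespace holoscan;
    // Assumes a YAML config with a "replayer" section, as in the example app.
    auto replayer =
        make_operator<ops::VideoStreamReplayerOp>("replayer", from_config("replayer"));
    add_operator(replayer);
  }
};

// Fragment 2 actually runs in Python, but the C++ app currently still needs a
// definition with the same fragment id and a stub compose().
class Fragment2 : public holoscan::Fragment {
 public:
  void compose() override {}  // stub: no operators defined on the C++ side
};

class App : public holoscan::Application {
 public:
  void compose() override {
    using namespace holoscan;
    // The fragment ids must match the names passed to `--fragments` and the
    // ids used by the Python application.
    auto fragment1 = make_fragment<Fragment1>("fragment1");
    auto fragment2 = make_fragment<Fragment2>("fragment2");
    // The app driver collects this fragment graph and connection information.
    add_flow(fragment1, fragment2, {{"replayer.output", "holoviz.receivers"}});
  }
};

int main() {
  // Distributed options (--driver, --worker, --fragments, --address) are
  // handled by the Application itself.
  auto app = holoscan::make_application<App>();
  app->run();
  return 0;
}
```

Note that add_flow() still references fragment2's ports even though the stub compose() defines no operators; per the explanation above, the driver only needs the fragment ids and the connection map.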
As a basic example, here is how we can execute both the C++ fragment (fragment1) and the Python fragment (fragment2) with the video_replayer_distributed app in v0.6:
```bash
# In terminal 1
./examples/video_replayer_distributed/cpp/video_replayer_distributed --driver --worker --fragments fragment1
# add the `--address <driver's host IP>` option if terminals 1 and 2 are on different nodes
```

```bash
# In terminal 2
python examples/video_replayer_distributed/python/video_replayer_distributed.py --worker --fragments fragment2
# add the `--address <driver's host IP>` option if terminals 1 and 2 are on different nodes
```
If you modify the Python video_replayer_distributed.py along the lines of the sketch below (with or without a real Fragment1 implementation) and use the same commands as above, you can process messages from a C++ operator in Python, provided the message is a type a Python operator can handle, such as the Holoscan Tensor type, which is interoperable with the GXF Tensor and supports DLPack and the array interface.
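A minimal sketch of such a modification, under the same assumptions as the C++ sketch above (TensorInfoOp is a hypothetical operator; the fragment ids, the operator name "holoviz", and the "replayer.output" → "holoviz.receivers" connection are kept so they match what the C++ driver expects):

```python
from holoscan.core import Application, Fragment, Operator, OperatorSpec

class Fragment1(Fragment):
    # fragment1 runs in the C++ process, so this definition can stay a stub;
    # only the fragment id has to match.
    def compose(self):
        pass

class TensorInfoOp(Operator):
    # Hypothetical operator that inspects messages arriving from the C++ fragment.
    def setup(self, spec: OperatorSpec):
        spec.input("receivers")

    def compute(self, op_input, op_output, context):
        message = op_input.receive("receivers")
        # The message holds Holoscan Tensor(s), which interoperate with GXF
        # Tensors and expose DLPack / array interface, so NumPy or CuPy could
        # wrap the data without copying.
        print(f"Python fragment received: {message}")

class Fragment2(Fragment):
    def compose(self):
        # Name the operator "holoviz" so the connection defined on the C++
        # side ("replayer.output" -> "holoviz.receivers") still matches.
        self.add_operator(TensorInfoOp(self, name="holoviz"))

class App(Application):
    def compose(self):
        fragment1 = Fragment1(self, name="fragment1")
        fragment2 = Fragment2(self, name="fragment2")
        self.add_flow(fragment1, fragment2, {("replayer.output", "holoviz.receivers")})

if __name__ == "__main__":
    App().run()
```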
Note on overhead: sending data between two fragments can add overhead. However, if the two fragments run on a single node and the message type is tensor, UCX's IPC transport is used to send the tensor data, so the overhead is greatly reduced.