Hello,
I am new to Holoscan and would like to ask some questions about the GXF entities video files.
Do I understand correctly that this is a video that has been split into separate parts to be processed in a graph?
There is a Python script for producing the GXF entities video from the original one. Is there a C++ implementation?
How can I convert the GXF entities video back to the original format?
If I want to process the video with a library (to work with the video frames) after reading it from disk with the video replayer, how can I do this? Do I need to create a custom C++ operator and receive the output from the video_replayer? In that case, can I treat the input as frames or not?
Hi Valeriy, just to make sure we’re on the same page, we’re talking about file pairs such as {surgical_video.gxf_entities, surgical_video.gxf_index}.
The original video is converted to GXF entities for playback with the stream_playback operator. There is also a new video playback operator that can take H.264 input directly, so converting to GXF entities is no longer required if you choose to use it; for an example, please see holohub/applications/h264_endoscopy_tool_tracking at main · nvidia-holoscan/holohub · GitHub.

Please note that the H.264 video decode operator does not adjust the frame rate as it reads the elementary stream input, so the video stream will be displayed as quickly as decoding can be performed. Frame rate handling will be coming soon in a new version of the operator.
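On the last part of your question: yes, you can put your own C++ operator downstream of the video replayer and treat each received message as a frame (a tensor carried in a GXF entity). Below is a minimal sketch against the Holoscan C++ API; MyFrameProcessorOp, MyApp, and the data directory path are placeholders made up for illustration, and parameter names can differ between SDK versions, so please cross-check with the SDK's video_replayer and tensor_interop examples.

```cpp
#include <holoscan/holoscan.hpp>
#include <holoscan/operators/video_stream_replayer/video_stream_replayer.hpp>

// Hypothetical operator that receives each replayed frame and could hand it to
// an external library for per-frame processing.
class MyFrameProcessorOp : public holoscan::Operator {
 public:
  HOLOSCAN_OPERATOR_FORWARD_ARGS(MyFrameProcessorOp)
  MyFrameProcessorOp() = default;

  void setup(holoscan::OperatorSpec& spec) override {
    spec.input<holoscan::gxf::Entity>("in");
  }

  void compute(holoscan::InputContext& op_input, holoscan::OutputContext&,
               holoscan::ExecutionContext&) override {
    // Each message from the replayer is a GXF entity holding the frame as a tensor.
    auto in_message = op_input.receive<holoscan::gxf::Entity>("in").value();
    auto tensor = in_message.get<holoscan::Tensor>();  // frame data
    // ... pass tensor->data(), tensor->shape(), etc. to your library here ...
  }
};

class MyApp : public holoscan::Application {
 public:
  void compose() override {
    using namespace holoscan;
    // Reads surgical_video.gxf_entities / surgical_video.gxf_index from `directory`.
    auto replayer = make_operator<ops::VideoStreamReplayerOp>(
        "replayer",
        Arg("directory", std::string("/path/to/data")),  // placeholder path
        Arg("basename", std::string("surgical_video")),
        Arg("frame_rate", 0.f),  // 0 = follow the timestamps stored in the index
        Arg("repeat", false));
    auto processor = make_operator<MyFrameProcessorOp>("processor");
    add_flow(replayer, processor);
  }
};

int main() {
  auto app = holoscan::make_application<MyApp>();
  app->run();
  return 0;
}
```

Depending on how the entities were recorded, the tensor data may live in host or device memory, so check where it resides before handing the pointer to your library.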
The convert_video_to_gxf_entities.py script is available only in Python, but running it should be a one-time step for each video you want to convert to GXF entities, and the conversion is done ahead of time rather than at app runtime.
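For reference, the script is typically fed raw frames piped from ffmpeg on stdin, along the lines of the command below. The file name, resolution, channel count, and frame rate are example values for a hypothetical clip, so substitute your own and check the script's --help for the full list of options (e.g. output naming).

```bash
# Decode the source video to raw RGB frames and pipe them into the converter,
# which writes the .gxf_entities / .gxf_index pair.
ffmpeg -i my_video.mp4 -pix_fmt rgb24 -f rawvideo pipe:1 | \
  python3 convert_video_to_gxf_entities.py \
    --width 854 --height 480 --channels 3 --framerate 30
```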
To convert the GXF entities back to the original video format, a reverse script is coming at the end of July.