Formatting a GStreamer pipeline to allow robust streaming

I am using a TX2i board together with an Elroy Carrier to capture a raw data stream from a V4L2 camera, compress the video with H.265 encoding, and send the encoded video to another device over serial (UART). On the other device, I decode the video and split it into JPG images. I am facing two challenges:
1- The MP4 data I send from the TX2i is not completely identical to the data I receive on the other end. The two are roughly 99.9% identical, with only a few hundred bytes missing over a large file (compared in a hex editor). However, those missing bytes make the received video data unplayable, while the original file plays fine. I want either the encoding or the decoding pipeline to be robust against minor imperfections (missing bytes) in the file. What modifications should I make to the pipeline(s) to solve this?
2- I will also need to start capturing the data at an arbitrary moment, which means the header data at the start of the original file will be missing and the file format will be messy. I think I need a configuration that makes the pipeline send the header metadata regularly, rather than only once. What modifications should I make to the pipeline(s) to solve this?

The current pipeline that captures the video data, encodes it, and hands it to the application looks something like this (a snippet of the C code):

// Create the elements.
data->v4l2_source = gst_element_factory_make ("v4l2src", "v4l2_source");
data->video_rate = gst_element_factory_make ("videorate", "video_rate");
data->omx_h265_enc = gst_element_factory_make ("omxh265enc", "omx_h265_enc");
data->h265_parse = gst_element_factory_make ("h265parse", "h265_parse");
data->mp4_mux = gst_element_factory_make ("mp4mux", "mp4_mux");
data->encode_sink = gst_element_factory_make ("appsink", "encode_sink");

// Configure the elements.
g_object_set (data->encode_sink, "emit-signals", TRUE, NULL);
g_signal_connect (data->encode_sink, "new-sample", G_CALLBACK (new_sample_encoded), data);

g_object_set (data->video_rate, "drop-only", TRUE, "max-rate", FPS, NULL);

g_object_set (data->omx_h265_enc, "bitrate", 200000, "peak-bitrate", 1000000, "preset-level", 3, NULL);

g_object_set (data->mp4_mux, "fragment-duration", 1000, NULL);
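
For completeness, new_sample_encoded pulls each encoded sample from the appsink and forwards the mapped bytes over the serial link, roughly along the lines of this simplified sketch (uart_fd is a placeholder for an already-opened serial port descriptor; error and partial-write handling is omitted):

#include <gst/gst.h>
#include <unistd.h>

static int uart_fd = -1;  // placeholder: the UART descriptor, opened/configured elsewhere

static GstFlowReturn
new_sample_encoded (GstElement *sink, gpointer user_data)
{
  // user_data is the pointer passed to g_signal_connect (unused in this sketch).
  GstSample *sample = NULL;

  // Pull the sample that appsink has just received.
  g_signal_emit_by_name (sink, "pull-sample", &sample);
  if (!sample)
    return GST_FLOW_ERROR;

  GstBuffer *buffer = gst_sample_get_buffer (sample);
  GstMapInfo map;

  if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
    // Forward the muxed bytes to the serial link.
    write (uart_fd, map.data, map.size);
    gst_buffer_unmap (buffer, &map);
  }

  gst_sample_unref (sample);
  return GST_FLOW_OK;
}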

The current pipeline that decodes the video data and splits it into its JPG frame components is this:
gst-launch-1.0 filesrc location=/home/user/Desktop/test.mp4 ! qtdemux ! h265parse ! omxh265dec ! videoconvert ! jpegenc ! multifilesink location=/home/user/Desktop/test/%05d.jpg

Hi,
We have deprecated the omx plugins. Please use the nvv4l2h265enc plugin.
For a streaming use case, it is better to use matroskamux or mpegtsmux. The qtmux/mp4mux plugins are more suitable for saving to a local file, and they need an EOS signal to complete a valid file.
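
For reference, a pipeline in that direction could look roughly like the following (the property values simply mirror your current settings, and the caps may need adjusting for your sensor). nvvidconv is used to copy the frames into the NVMM memory that nvv4l2h265enc works with:
gst-launch-1.0 v4l2src ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! nvv4l2h265enc bitrate=200000 peak-bitrate=1000000 ! h265parse config-interval=-1 ! matroskamux streamable=true ! filesink location=test.mkv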

Thanks for the answer; however, nvv4l2h265enc seems to use a different type of memory called NVMM, and I have not been able to successfully insert this element into the pipeline in place of omxh265enc.

I have tried matroskamux and modified the pipeline to create an MKV file instead of an MP4 file. I have constructed the following pipeline to create a streaming-friendly file:
gst-launch-1.0 v4l2src ! videorate drop-only=true max-rate=5 ! omxh265enc bitrate=200000 peak-bitrate=2000000 preset-level=3 insert-vui=true ! h265parse config-interval=-1 ! matroskamux streamable=true ! filesink location=/home/user/Desktop/video.mkv

I have also created this pipeline for decoding purposes:
gst-launch-1.0 filesrc location=/home/user/Desktop/test.mkv ! matroskademux ! h265parse ! omxh265dec ! videoconvert ! jpegenc ! multifilesink location=/home/user/Desktop/test/%05d.jpg

I have captured some video data using the 1st pipeline and named the file “video.mkv”. Afterwards, in order to simulate data corruption, I deleted some bytes from 3 different sections of the file and saved the result as “video_corrupt.mkv” (a sketch of how this kind of deletion can be reproduced follows the attachments). Both files are attached here:
video.mkv (447.6 KB)
video_corrupt.mkv (447.5 KB)
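
For anyone who wants to reproduce this kind of damage, a small helper along the following lines does the trick; the offsets and lengths in it are arbitrary examples, not the exact ones used for video_corrupt.mkv:

/* drop_bytes.c - copy a file while skipping a few byte ranges.
 * The offsets/lengths below are arbitrary examples.
 * Build: gcc drop_bytes.c -o drop_bytes */
#include <stdio.h>

typedef struct { long offset; long length; } Cut;

int main (int argc, char *argv[])
{
  if (argc != 3) {
    fprintf (stderr, "usage: %s <in> <out>\n", argv[0]);
    return 1;
  }

  /* Three arbitrary byte ranges to remove. */
  Cut cuts[] = { { 100000, 100 }, { 250000, 150 }, { 400000, 120 } };
  const int n_cuts = sizeof (cuts) / sizeof (cuts[0]);

  FILE *in = fopen (argv[1], "rb");
  FILE *out = fopen (argv[2], "wb");
  if (!in || !out) {
    perror ("fopen");
    return 1;
  }

  long pos = 0;
  int c;
  while ((c = fgetc (in)) != EOF) {
    int skip = 0;
    for (int i = 0; i < n_cuts; i++) {
      if (pos >= cuts[i].offset && pos < cuts[i].offset + cuts[i].length)
        skip = 1;            /* byte falls inside a cut: drop it */
    }
    if (!skip)
      fputc (c, out);
    pos++;
  }

  fclose (in);
  fclose (out);
  return 0;
}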

When I decode the original unmodified file, I obtain 100 frames. However, when I decode the corrupted file, GStreamer stops after the 25th frame (presumably at the location of the first corruption). I feel that I should be able to obtain all of the non-corrupt frames (presumably around 90 of them), since those frames are still “inside the file”. What can I do to force GStreamer to ignore decoding errors and recover as much data as possible from a corrupted file?

Two observations:

1- If I cut the pipeline short right after matroskademux and write its output buffers directly to files, the normal video file still produces 100 files while the corrupt video file produces only 25. The pipeline is as follows:
gst-launch-1.0 filesrc location=/home/user/Desktop/test.mkv ! matroskademux ! multifilesink location=/home/user/Desktop/test/%05d.bin
This implies that matroskademux is the reason not all of the frames can be retrieved from the MKV file. However, I do not see any properties that could be changed to prevent this behaviour in its documentation: matroskademux

2- I have tried to see whether the behaviour is the same when using third-party software to decode. I used this site to turn both MKV files into JPG files: Convert MKV To JPG Online Free App - MKV To JPG converter
The normal video file, as expected, produces 100 JPG files. The corrupt video file, however, produces 91 JPG files. This is consistent with my assumption that there are around 90 non-corrupt frames inside the file and that it should be possible to retrieve them with the right tools, yet it is inconsistent with the fact that GStreamer only manages to produce 25 frames from the same video data. Am I out of luck regarding using GStreamer to decode my video files locally?