the application has to shut down. The application side should also report a segmentation fault, which is expected to force-stop the application.
Yes, this is the crux of the problem. I need part of my application to continue displaying a UI on the screen. I am flexible on ways to split up the part that needs to shut down from the rest of it, but there needs to be some separation which can be reconnected to a new instance of the camera part.
May I know which Jetpack release version you're working with?
Jetpack 5.1.2 (L4T 35.4.1)
What's your test pipeline? Can it be reproduced with a simple GStreamer pipeline using the nvarguscamerasrc plugin?
As I mentioned, nvarguscamerasrc stores global Argus resources in shared_ptr<CameraProviderContainer> g_cameraProvider, which means I can’t even reload the kernel modules to re-initialize the camera hardware in the first place. If I modify execute to just create a new CameraProviderContainer each time, then I can reload the kernel modules after removing nvarguscamerasrc from the GStreamer pipeline. However, each time I create a new nvarguscamerasrc after doing this, it leaks some memory (and dmabuf and other file descriptors, etc). This is reproducible by simply creating and destroying a simple pipeline in a main function. Is this enough detail, or do you need me to actually write out this example?
>> Q3. Displaying the camera and the UI on separate planes
This looks related to your app implementation. Anyway, may I have more details on what you expect?
This is just one of my ideas: run separate GStreamer pipelines to separate nvdrmvideosink elements with different planes. I tried prototyping that, and it didn’t work, and according to Orin Nano With NvDRM Overlay - #15 by DaneLLL it’s unsupported.
>> Q4. The old NvBuffer has one, but that seems to have been removed in the latest Jetpack releases.

May I know which Jetpack release version you're working with?
Please share the code snippets as well for quick checking.
I’m using 5.1.2. I never actually wrote this version, because I can’t find the nvbuf_utils APIs in this release. According to Deprecated Nvbuf_utils is removed from JetPack 5.1.2 · Issue #169 · dusty-nv/jetson-utils · GitHub that’s expected. Also I found the “nvbuf_utils to NvUtils Migration Guide” which is further evidence for nvbuf_utils/NvBuffer being removed. How to share buffers across processes using jetpack 5 - #10 by DaneLLL states that there’s no direct replacement for the IPC functionality (EGLStream is mentioned later in that thread, which I have also tried, but it leaks memory).
>> Q6. Copying data with the CPU to get it between two processes is going to add more latency
Please share your actual use case for reference; this may be related to Q4 above about the NvBuffer copy mechanism.
Like I’ve said before, I’m pretty flexible with my use case. If there’s any way to send image buffer data between processes without routing it through the CPU’s limited memory bandwidth and cache, which supports reconnecting without leaking memory, I can probably make use of it.
My baseline idea here is to set up a shared memory region and use a UNIX domain socket to coordinate usage of it. I don’t see any way to get the image data out of Argus besides IEGLOutputStream, so I guess I’ll use that and then copy from there into an NvBufSurface via IImageNativeBuffer’s createNvBuffer/copyToNvBuffer, and then from there copy it into the shared memory region with NvBuffer2Raw (if there’s a faster way to get the image into a memory region I control, I would love to hear about it). Then once it’s in my other process, I can use Raw2NvBufSurface to create the NvBufSurface to feed into my GStreamer pipeline (again, would love to hear of a faster way of doing this). I haven’t implemented this part yet, because I’m hoping that NVIDIA has a usable IPC mechanism somewhere, and I’m also sure all the copies will add some latency which I’m hoping to avoid.