Problems getting EGL Stream transferred to another process on same machine

Hi all,

we managed to create an EGL stream, get its file descriptor, and restore the stream from that descriptor, as long as everything happens within a single process.

This was meant as a basic test; now we want to move the stream producer into a separate process.

So our understanding is:

Consumer side:

  • Qt based
  • Get native display
  • Create stream
  • Get file descriptor from display + stream
  • Send the file descriptor and stream handle via some inter-process protocol to the producer app/process

Producer side:

  • Meant as an offscreen worker process, not necessarily Qt based
  • Receive native display + file descriptor from consumer
  • Try to restore stream from display + file descriptor
    → PROBLEM

Problem:

  1. When using the received native display + descriptor, we get a BAD_DISPLAY error
  2. When we obtain the native display on the producer side and combine it with the received file descriptor, we get BAD_ATTRIBUTE, which (as far as we understand) is caused by the mismatch between the producer-side display and a file descriptor that doesn’t ‘fit’ it

Can anyone shed some light on this situation? Is our basic approach correct?
We think it should be fine to use the native display from the consumer, but we can’t explain Problem 1…

Thanks in advance!
Best,
Bodo

Hi,
There is a CUDA sample demonstrating EGLStream producer/consumer:

NVIDIA_CUDA-10.0_Samples/3_Imaging/EGLStreams_CUDA_Interop

Please check if it helps your usecase. You can install the samples by executing:

$ /usr/local/cuda-10.0/bin/cuda-install-samples-10.0.sh

Hi,

thanks for the feedback.

From a first look at the code, this seems like a different approach, as CUDA functions such as cuEGLStreamConsumerConnect are involved. And there is no usage of file descriptors; instead, the producer is initialized with the display and stream directly:

cudaProducerInit(&cudaProducer, g_display, eglStream, &args);

We’ll certainly investigate this more deeply, but it seems our current approach is heading in the wrong direction and the CUDA approach is the way to go?

Has this CUDA example been tested with two separate processes? The sample seems to handle consumer + producer in the same binary.

Thanks + Best,
Bodo

So we stripped the code down and can provide a minimal working example of our approach to sharing an EGL stream across process boundaries.

To be clear - we know how to use EGLStream consumer/producer in the same executable, but want to share the stream between two separate binaries for consumer and producer now. According to the EGL extension EGL_KHR_stream_cross_process_fd this should be possible by using “eglGetStreamFileDescriptorKHR” and “eglCreateStreamFromFileDescriptorKHR”.

From my understanding, the CUDA sample does not use this inter-process approach, which is what we need for our application.

By the way: I am working on the same project as bodo.pfeifer.

consumer:

int main() {

    initEGLfunctions();

    std::cout << "Getting device ...\n";

    EGLBoolean eglStatus;
#define MAX_EGL_DEVICES 4
    EGLint numDevices = 0;
    EGLDeviceEXT devices[MAX_EGL_DEVICES];
    eglStatus = eglQueryDevicesEXT(MAX_EGL_DEVICES, devices, &numDevices);
    if (eglStatus != EGL_TRUE) {
        printf("Error querying EGL devices\n");
        exit(EXIT_FAILURE);
    }

    EGLAttrib cudaIndex;
    int cuda_device;
    int egl_device_id = 0;
    for (egl_device_id = 0; egl_device_id < numDevices; egl_device_id++) {
        eglStatus = eglQueryDeviceAttribEXT(devices[egl_device_id],
                                            EGL_CUDA_DEVICE_NV, &cudaIndex);
        if (eglStatus == EGL_TRUE) {
            cuda_device = cudaIndex;  // We select first EGL-CUDA Capable device.
            printf("Found EGL-CUDA Capable device with CUDA Device id = %d\n",
                   (int) cudaIndex);
            break;
        }
    }

    if (numDevices == 0) {
        printf("No EGL devices found.. Waiving\n");
        eglStatus = EGL_FALSE;
        exit(1);
    }

    std::cout << "Creating display ...\n";
    // Note: index with egl_device_id (position in the EGL device list),
    // not with the CUDA device index returned by the attribute query.
    EGLDisplay eglDpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT,
                                                 (void *) devices[egl_device_id], NULL);
    if (eglDpy == EGL_NO_DISPLAY) {
        printf("Could not get EGL display from device. \n");
        eglStatus = EGL_FALSE;
        exit(EXIT_FAILURE);
    }

    std::cout << "egl display: " << eglDpy << "\n";

    EGLint major = 0;
    EGLint minor = 0;

    eglInitialize(eglDpy, &major, &minor);

    EGLint streamAttrMailboxMode[] = {EGL_NONE};

    std::cout << "Creating stream ...\n";

    EGLStreamKHR eglstream =
        eglCreateStreamKHR(eglDpy, streamAttrMailboxMode);

    if (EGL_NO_STREAM_KHR == eglstream) {
        std::cout << "\t!! Unable to create EGL stream (eglError: " << eglGetError()
                  << ")";
        return EXIT_FAILURE;
    }

    std::cout << "egl stream: " << eglstream << "\n";

    EGLint streamState;
    eglQueryStreamKHR(eglDpy, eglstream, EGL_STREAM_STATE_KHR,
                      &streamState);

    EGLNativeFileDescriptorKHR fd = EGL_NO_FILE_DESCRIPTOR_KHR;

    if (EGL_STREAM_STATE_CREATED_KHR == streamState) {
        fd = eglGetStreamFileDescriptorKHR(eglDpy, eglstream);
    }

    // EGL_NO_FILE_DESCRIPTOR_KHR (-1) signals failure; 0 would be a valid fd.
    if (fd == EGL_NO_FILE_DESCRIPTOR_KHR) {
        std::cout << "\t!! Unable to eglGetStreamFileDescriptorKHR (eglError: "
                  << eglGetError() << ")\n";
    }

    std::cout << "egl stream fd: " << fd << "\n";

    std::cout << "\nStart producer and enter fd value manually. Press [Enter] when "
                 "prompted by producer.\n";
    std::cin.get();

    return 0;
}

producer:

int main() {

    initEGLfunctions();

    std::cout << "Getting device ...\n";

    EGLBoolean eglStatus;
#define MAX_EGL_DEVICES 4
    EGLint numDevices = 0;
    EGLDeviceEXT devices[MAX_EGL_DEVICES];
    eglStatus = eglQueryDevicesEXT(MAX_EGL_DEVICES, devices, &numDevices);
    if (eglStatus != EGL_TRUE) {
        printf("Error querying EGL devices\n");
        exit(EXIT_FAILURE);
    }

    EGLAttrib cudaIndex;
    int cuda_device;
    int egl_device_id = 0;
    for (egl_device_id = 0; egl_device_id < numDevices; egl_device_id++) {
        eglStatus = eglQueryDeviceAttribEXT(devices[egl_device_id],
                                            EGL_CUDA_DEVICE_NV, &cudaIndex);
        if (eglStatus == EGL_TRUE) {
            cuda_device = cudaIndex;  // We select first EGL-CUDA Capable device.
            printf("Found EGL-CUDA Capable device with CUDA Device id = %d\n",
                   (int) cudaIndex);
            break;
        }
    }

    if (numDevices == 0) {
        printf("No EGL devices found.. Waiving\n");
        eglStatus = EGL_FALSE;
        exit(1);
    }

    std::cout << "Creating display ...\n";
    // Note: index with egl_device_id (position in the EGL device list),
    // not with the CUDA device index returned by the attribute query.
    EGLDisplay eglDpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT,
                                                 (void *) devices[egl_device_id], NULL);
    if (eglDpy == EGL_NO_DISPLAY) {
        printf("Could not get EGL display from device. \n");
        eglStatus = EGL_FALSE;
        exit(EXIT_FAILURE);
    }

    std::cout << "egl display: " << eglDpy << "\n";

    EGLint major = 0;
    EGLint minor = 0;

    eglInitialize(eglDpy, &major, &minor);

    EGLint EGLfd;

    std::cin >> EGLfd;

    std::cout << "egl stream fd: " << EGLfd << "\n";

    EGLStreamKHR eglstream =
        eglCreateStreamFromFileDescriptorKHR(eglDpy, EGLfd);

    if (eglstream == EGL_NO_STREAM_KHR) {
        std::cout << "Error: Invalid EGL stream (" << std::hex << std::showbase
                  << eglGetError() << ")\n";
    }

    std::cout << "egl stream: " << eglstream << "\n";

    std::cin.get();

    return 0;
}

OK, I guess I figured it out: the exchange mechanism mentioned in the Khronos reference is actually not just an example; it seems to be the only possible way. We had previously tried to send the fd via a more abstract, high-level IPC, which did not work either. The only solution that works for us right now is sending/receiving the fd via low-level Unix domain sockets, as stated in the extension specification.

Could someone from NVIDIA give us an idea of what the limiting factor is when exchanging the created file descriptor? It looks like a really hard constraint to be forced to use low-level IPC in order to get an fd exchanged correctly.

Hi,
We are checking whether we have verified this use case on L4T releases. In general, we verify the EGLStream producer and consumer in the same process, as the CUDA sample does.

Hi,
We have samples using the socket fd exchange in

/usr/src/nvidia/graphics_demos/

You may build the samples and run:

Consumer:
$ cd /usr/src/nvidia/graphics_demos/eglstreamcube/x11
$ ./eglstreamcube -socket /tmp/test &

Producers:
$ cd /usr/src/nvidia/graphics_demos/gears-basic/x11
$ ./gears -eglstreamsocket /tmp/test -1 &
$ cd /usr/src/nvidia/graphics_demos/ctree/x11
$ ./ctree -eglstreamsocket /tmp/test &

Would using sockets work for you?


Thanks for the example; it is already working for us, and we are totally fine with Unix sockets. I was just curious why we are not able to send the fd via IP-based IPC. It turns out this is a restriction imposed by Linux itself, since the kernel performs some work (duplicating the descriptor into the receiving process) when a file descriptor is sent via a Unix domain socket.

Thanks for your help. Since our question is answered, you can mark this problem as resolved.

I’ve been working on a program that requires access to an EGL stream across processes. I’m working on passing an EGLNativeFileDescriptorKHR via a Unix domain socket, but I’ve been having a problem getting the methods for generating the file descriptor to work.

I can see that the methods are declared in eglext.h, yet including the file isn’t enough. I can declare an EGLNativeFileDescriptorKHR, but trying to use the eglGetStreamFileDescriptorKHR method doesn’t work; the compiler complains that it was not declared. I can see its declaration nested between some #ifdefs in eglext.h.

If I add the following

#define EGL_EGLEXT_PROTOTYPES
#include <EGL/egl.h>
#include <EGL/eglext.h>

I can compile, but I get a linker error for using eglGetStreamFileDescriptorKHR, and I can’t seem to find anything indicating which linker flag I might be missing; there is no documentation I can find on it.

So I took a look at the example /usr/src/nvidia/graphics_demos/eglstreamcube and saw this block of code, which seems to be the secret to getting it to compile and link:

// Extensions used by this demo
#define EXTENSION_LIST(T) \
    T( PFNEGLQUERYSTREAMKHRPROC,           eglQueryStreamKHR ) \
    T( PFNEGLQUERYSTREAMU64KHRPROC,        eglQueryStreamu64KHR ) \
    T( PFNEGLSTREAMCONSUMERACQUIREKHRPROC, eglStreamConsumerAcquireKHR ) \
    T( PFNEGLSTREAMCONSUMERRELEASEKHRPROC, eglStreamConsumerReleaseKHR ) \
    T( PFNEGLCREATESTREAMKHRPROC,          eglCreateStreamKHR ) \
    T( PFNEGLDESTROYSTREAMKHRPROC,         eglDestroyStreamKHR ) \
    T( PFNEGLSTREAMCONSUMERGLTEXTUREEXTERNALKHRPROC, \
                        eglStreamConsumerGLTextureExternalKHR ) \
    T( PFNEGLGETSTREAMFILEDESCRIPTORKHRPROC, \
                        eglGetStreamFileDescriptorKHR )

#define EXTLST_DECL(tx, x) static tx x = NULL;
#define EXTLST_ENTRY(tx, x) { (extlst_fnptr_t *)&x, #x },

EXTENSION_LIST(EXTLST_DECL)
typedef void (*extlst_fnptr_t)(void);
static const struct {
    extlst_fnptr_t *fnptr;
    char const *name;
} extensionList[] = { EXTENSION_LIST(EXTLST_ENTRY) };

I don’t really understand what is going on here. Copying and pasting this block into the top of my source makes it compile and link, but the program immediately terminates. What am I missing? If I need to do something like the large block of code above, can someone please explain what is going on? Thank you!

Hi benjamin.stoneking,

Please help to open a new topic for your issue. Thanks