How to display real-time inference video in a GUI (flet, wxPython) with Python

I want to display the inference video produced by the DeepStream sample code (test1-usbcam) in a GUI (flet, wxPython), but it isn't working well.

Currently, I convert the frames of the camera stream being inferred into CPU (system) memory, force them to BGR and so on with a capsfilter, and finally pass them out through an appsink, where I convert them to a numpy array for display.
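The conversion chain described above might look like the following gst-launch-style description. This is only a sketch based on the description, not the poster's actual code; the element names, caps, and appsink properties are assumptions (on Jetson, nvvideoconvert typically outputs BGRx/RGBA in system memory, so a videoconvert step is commonly added to reach BGR):

```python
# Sketch of the assumed conversion chain: NVMM frames from the inference
# branch are copied to system memory by nvvideoconvert, constrained by a
# capsfilter, converted to BGR, and handed to appsink for numpy conversion.
PIPELINE_TAIL = (
    "nvvideoconvert ! "                          # NVMM -> system memory
    "video/x-raw,format=BGRx ! "                 # a format the HW converter produces
    "videoconvert ! video/x-raw,format=BGR ! "   # BGRx -> BGR for OpenCV/numpy
    "appsink name=sink emit-signals=true max-buffers=1 drop=true"
)

# In a real application this fragment would be built with Gst element
# objects (as in the DeepStream Python samples) or Gst.parse_launch, and
# the appsink's "new-sample" signal would pull frames out.
print(PIPELINE_TAIL)
```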

However, something in the conversion process is going wrong, and the resulting images come out solid green or gray.
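For what it's worth, solid green frames are a classic symptom of a YUV buffer that is all zeros (never written, or mapped from the wrong memory) being rendered as if it held valid data, and gray frames often mean only the luma (Y) plane is being read. This is a hedged illustration of why zeros look green, not a diagnosis of this specific pipeline; the conversion uses the standard BT.601 equations with the usual 128 chroma offset:

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """BT.601 YUV -> RGB with the conventional 128 chroma offset."""
    y, u, v = float(y), float(u) - 128.0, float(v) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return tuple(int(np.clip(c, 0, 255)) for c in (r, g, b))

# A correctly zeroed black frame has Y=0 and chroma at the neutral value 128:
print(yuv_to_rgb(0, 128, 128))  # -> (0, 0, 0), black
# An all-zero buffer interpreted as YUV comes out green, because the
# chroma terms are pulled 128 below neutral:
print(yuv_to_rgb(0, 0, 0))      # -> (0, 135, 0), the classic green frame
```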

If you have any good suggestions, I would appreciate your advice.
Thank you very much.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

• Hardware Platform (Jetson / GPU) → Jetson Orin Nano
• DeepStream Version → 6.4
• JetPack Version (valid for Jetson only) → 6.0DP
• TensorRT Version →
• Issue Type → questions

How did you do that? Can you show the pipeline?

I solved the problem by lowering the DeepStream version.
The pipeline is as below.


        print("Linking elements in the Pipeline \n")

        # Request a sink pad on the stream muxer for the first source.
        sinkpad = streammux.get_request_pad("sink_0")
        if not sinkpad:
            sys.stderr.write(" Unable to get the sink pad of streammux \n")
        # Static src pad of the capsfilter that feeds the muxer.
        srcpad = caps_vidconvsrc.get_static_pad("src")
        if not srcpad:
            sys.stderr.write(" Unable to get source pad of caps_vidconvsrc \n")
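On the appsink side, a frequent cause of garbled or tinted frames is ignoring the row stride when turning the mapped buffer into a numpy array, since hardware converters often pad each row. The helper below is a sketch with a hypothetical name; in a real callback the bytes would come from `sample.get_buffer().extract_dup(0, buf.get_size())` and the stride from the caps or the buffer's video meta. Here it is demonstrated with synthetic data:

```python
import numpy as np

def mapped_bytes_to_bgr(data: bytes, width: int, height: int,
                        stride: int) -> np.ndarray:
    """Interpret raw mapped BGR bytes as an HxWx3 frame, honoring row stride.

    `stride` is bytes per row. Hardware converters often pad rows to an
    alignment boundary; reshaping to width*3 directly then shears the image.
    """
    rows = np.frombuffer(data, dtype=np.uint8).reshape(height, stride)
    return rows[:, : width * 3].reshape(height, width, 3)

# Synthetic 2x2 BGR frame with 2 bytes of padding per row (stride = 8):
frame_bytes = bytes(
    [255, 0, 0,   0, 255, 0,     0, 0] +   # row 0: blue, green + padding
    [0, 0, 255,   255, 255, 255, 0, 0]     # row 1: red, white + padding
)
frame = mapped_bytes_to_bgr(frame_bytes, width=2, height=2, stride=8)
print(frame[0, 0])  # [255   0   0] -> a blue pixel in BGR order
```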