Extracting Frames by Using Probe in Graph Composer

• Hardware Platform (Jetson / GPU): NVIDIA RTX A5500
• DeepStream Image: 8.0-gc-triton-devel
• NVIDIA GPU Driver Version (valid for GPU only): 580.95.05
• Issue Type (questions, new requirements, bugs): bugs

I am currently using Graph Composer 5.1.0 in the DeepStream 8.0-gc-triton-devel container. I have attached a Probe Connector to a PyCodelet component as shown in the screenshot. My question is: how can I receive the extracted frames inside codelet.py? Is there any sample script for this? The only provided sample uses a receiver, and I am not able to access the data from it either.

Do you mean you want to transfer the frames out of the graph?

Exactly.
I want to stream the frames via WebRTC in the codelet.

Do you mean you want to implement a WebRTC extension?

Yes, but I also want to know how I can implement a codelet.py file that processes frames coming from the pipeline.
To be more specific: “How can I implement a probe to use data from a running pipeline in Graph Composer?” For instance, in the deepstream-test1 app from the deepstream_python_apps repo, there is a function called osd_sink_pad_buffer_probe. This function would be a great case to reproduce.
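For reference, here is a minimal sketch of that probe, trimmed from deepstream-test1; it assumes the pyds bindings and a plain GStreamer pipeline, which is exactly what I cannot reach from Graph Composer:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Batch metadata is attached to the buffer by the DeepStream plugins
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print("frame", frame_meta.frame_num, "objects", frame_meta.num_obj_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# In deepstream-test1 the probe is attached to the OSD sink pad:
# osdsinkpad = nvosd.get_static_pad("sink")
# osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
```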

The DeepStream extension library and header files are not available for you to develop DeepStream-related extensions.

It is not supported.

Okay, I understand. Then, am I able to access any data that flows through the pipeline using my codelet.py file?

What do you mean?

Since Python codelets are not included in the DeepStream 8.0 documentation, I am referencing some examples from the DeepStream 6.3 documentation. In that documentation, the ‘Python Codelet’ section says:

Python Codelets allow users to build parts of their application in Python. This also allows users to add custom implementation without creating a custom Extension.

According to this sentence, I should be able to run a script. Additionally, the ‘Accessing other Components’ section in the same documentation states that accessing components is possible from Python codelets:

Users can also access other components such as transmitter and receiver as follows:

So, considering these references, I am supposed to be able to run a script file and extract data (anything from the NvDsBatchMeta structure) at a specified point in the pipeline.
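Something like the following skeleton is what I would expect to be able to write, based on the 6.3 ‘Python Codelet’ section. The imports and method names here (CodeletAdapter, get_params, Receiver.get) are my assumption from that documentation and may not match what ships in the 8.0 container:

```python
# Illustrative only: API names are assumed from the DeepStream 6.3
# Graph Composer documentation, not verified in the 8.0 container.
from gxf.python_codelet import CodeletAdapter
from gxf.std import Receiver

class FrameTapCodelet(CodeletAdapter):
    def start(self):
        self.params = self.get_params()
        # Access another component (the receiver wired to the Probe Connector)
        self.rx = Receiver.get(self.context(), self.cid(), self.params["receiver"])

    def tick(self):
        msg = self.rx.receive()
        if msg is None:
            return
        # This is where I would like to reach the NvDsBatchMeta / frame data
        # carried by the message, which is the part that seems unsupported.

    def stop(self):
        pass
```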

@Fiona.Chen do you need further information?

The necessary libraries and header files are not available for you to build such a project. Accessing and extracting the frame data requires those libraries and header files.

Is there a way to apply a custom image processing script in Graph Composer?

For instance, can I apply a mask to each frame at a specified point in the designed pipeline by using a custom script file?
If that is not possible, is the only way to use frames in a custom script to send them via a protocol to another pipeline?

What do you mean by “apply a mask to each frame at a specified point in the designed pipeline”?

It does not have to be masking; it could be resizing, cropping, etc.
By masking I mean fundamental image operations such as grayscale conversion (color to grayscale).
However, the main motivation is to prove that any image processing method implemented in a Python script file is applicable in Graph Composer. Something as trivial as the sketch below is what I have in mind.
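To make it concrete, the whole script could be as simple as this (plain OpenCV, purely illustrative; the function name and sizes are just examples):

```python
import cv2

def process_frame(frame_bgr):
    # The kind of basic per-frame processing I want to run inside the graph
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # color -> grayscale
    resized = cv2.resize(gray, (640, 360))              # resize
    cropped = resized[0:180, 0:320]                      # crop a region
    return cropped
```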

Why do you need to do such image processing?

It is not applicable with the current Graph Composer.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.