How to create a GStreamer plugin to extract the ROI

Hi,
How to use ExtractFdFromNvBuffer and get_converted_mat to get frames?
How to leverage ${ds root}/sources/gst-plugins/gst-dsexample/gstdsexample.cpp as a plugin?

Env: JetPack 4.1.1, DeepStream 3.0 and Xavier


Hi,
You can use APIs in nvbuf_utils.h. FYR, some samples are at
https://devtalk.nvidia.com/default/topic/1045453/deepstream-sdk-on-jetson/how-to-draw-lanes-on-the-video/post/5305297/#5305297
https://devtalk.nvidia.com/default/topic/1045453/deepstream-sdk-on-jetson/how-to-draw-lanes-on-the-video/post/5315160/#5315160
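In outline, the pattern from those threads looks like this (a minimal sketch, assuming DeepStream 3.0 on Jetson with NVMM buffers; the function name inspect_nvmm_buffer is illustrative, not part of the SDK, and this requires linking against libnvbuf_utils):

```c
/* Sketch: extract the dmabuf fd from an NVMM-backed GstBuffer
 * (e.g. inside a pad probe or a transform function) and query
 * the hardware buffer parameters. Jetson/DeepStream 3.0 only. */
#include <gst/gst.h>
#include "nvbuf_utils.h"

static void
inspect_nvmm_buffer (GstBuffer * buf)
{
  GstMapInfo map;
  int dmabuf_fd = 0;
  NvBufferParams params;

  if (!gst_buffer_map (buf, &map, GST_MAP_READ))
    return;

  /* For memory:NVMM, the mapped data holds a hardware buffer
   * handle from which the dmabuf fd can be extracted. */
  if (ExtractFdFromNvBuffer ((void *) map.data, &dmabuf_fd) == 0 &&
      NvBufferGetParams (dmabuf_fd, &params) == 0) {
    /* Field names per nvbuf_utils.h; plane 0 is the luma plane for NV12. */
    g_print ("plane0: %ux%u pitch=%u\n",
        params.width[0], params.height[0], params.pitch[0]);
  }

  gst_buffer_unmap (buf, &map);
}
```

Note that the fd generally differs from frame to frame, because upstream elements cycle through a pool of hardware buffers, so extract it per buffer rather than caching a single fd.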

Hi DaneLLL,

Can this sample code be used in deepstream-app.c or deepstream-test1.c?
https://devtalk.nvidia.com/default/topic/1045453/deepstream-sdk-on-jetson/how-to-draw-lanes-on-the-video/post/5305297/#5305297
How do I get the fd for NvBufferGetParams?
Can each frame be retrieved through the same fd?

Thanks for your help.

Hi,
You can enable dsexample in deepstream-test by following

deepstream_sdk_on_jetson\sources\gst-plugins\gst-dsexample\README

For deepstream-test1, you need to modify the code to link in the dsexample plugin:

... ! nvinfer ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! dsexample ! nvosd ! ...

@DaneLLL

I got the following error messages:

~/deepstream_sdk_on_jetson/sources/apps/sample_apps/deepstream-test1$ make
cc -o deepstream-test1-app deepstream_test1_app.o `pkg-config --libs gstreamer-1.0`
deepstream_test1_app.o: In function `main':
deepstream_test1_app.c:(.text+0x9d8): undefined reference to `ExtractFdFromNvBuffer'
deepstream_test1_app.c:(.text+0x9e4): undefined reference to `NvBufferGetParams'
collect2: error: ld returned 1 exit status
Makefile:34: recipe for target 'deepstream-test1-app' failed
make: *** [deepstream-test1-app] Error 1

What steps do I need to take?

I have already modified the following files:

  1. According to gst-dsexample\README: append a [ds-example] section to dstest1_pgie_config.txt
  2. Add #include "/home/nvidia/deepstream_sdk_on_jetson/sources/includes/nvbuf_utils.h" to deepstream_test1_app.c
  3. Insert the following code into main() of deepstream_test1_app.c:
...
  /* Use convertor to convert from NV12 to RGBA as required by nvosd */
  nvvidconv = gst_element_factory_make ("nvvidconv", "nvvideo-converter");

  /* Create dsexample */
  dsexample = gst_element_factory_make ("dsexample", "gstdsexample");
...
  /* Set up the pipeline */
  /* we add all elements into the pipeline */
  gst_bin_add_many (GST_BIN (pipeline),
      source, h264parser, decoder, pgie,
      filter1, nvvidconv, filter2, dsexample, nvosd, sink, NULL);
...
  /* file-source -> h264-parser -> nvh264-decoder ->
   * nvinfer -> filter1 -> nvvidconv -> filter2 -> dsexample -> nvosd -> video-renderer */
  if (!gst_element_link_many (source, h264parser, decoder, pgie,
      filter1, nvvidconv, filter2, dsexample, nvosd, sink, NULL)) {
    g_printerr ("Elements could not be linked. Exiting.\n");
    return -1;
  }
...
  /* Wait till pipeline encounters an error or EOS */
  g_print ("Running...\n");

        g_signal_emit_by_name (sink, "pull-sample", &sample, NULL);
        caps = gst_sample_get_caps (sample);
        if (!caps)
        {
            printf ("could not get snapshot format\n");
        }
        gst_caps_get_structure (caps, 0);
        buffer = gst_sample_get_buffer (sample);
        gst_buffer_map (buffer, &map, GST_MAP_READ);

        ExtractFdFromNvBuffer ((void *) map.data, &dmabuf_fd);

        ret = NvBufferGetParams (dmabuf_fd, &parm);
        if (ret != 0) {
            printf ("**** error NvBufferGetParams()\n");
        }
...

Hi,
I'm not sure, but you should put the ExtractFdFromNvBuffer() code in gst_dsexample_transform_ip() and rebuild libgstnvdsexample.so, not in deepstream_test1_app.c. (The undefined references occur because deepstream-test1's Makefile links only gstreamer-1.0, not libnvbuf_utils.)

The modification in deepstream_test1_app.c is only for linking dsexample into the pipeline.
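To make the placement concrete, here is a hedged sketch of gst_dsexample_transform_ip() in sources/gst-plugins/gst-dsexample/gstdsexample.cpp; only the marked fd-extraction lines are additions, the rest mirrors the shape of the DeepStream 3.0 sample (details may differ slightly in your version):

```c
/* Sketch: fd extraction inside the dsexample in-place transform.
 * Buffers reaching this element are memory:NVMM (RGBA after nvvidconv). */
#include <gst/gst.h>
#include <gst/base/gstbasetransform.h>
#include "nvbuf_utils.h"
#include "gstdsexample.h"

static GstFlowReturn
gst_dsexample_transform_ip (GstBaseTransform * btrans, GstBuffer * inbuf)
{
  GstMapInfo in_map_info;
  int dmabuf_fd = 0;
  NvBufferParams params;

  if (!gst_buffer_map (inbuf, &in_map_info, GST_MAP_READ))
    return GST_FLOW_ERROR;

  /* ADDED: extract the dmabuf fd of the current frame and
   * query the hardware buffer parameters. */
  if (ExtractFdFromNvBuffer ((void *) in_map_info.data, &dmabuf_fd) != 0 ||
      NvBufferGetParams (dmabuf_fd, &params) != 0) {
    g_printerr ("error extracting fd / getting NvBuffer params\n");
  }

  /* ... existing per-frame processing of the sample goes here ... */

  gst_buffer_unmap (inbuf, &in_map_info);
  return GST_FLOW_OK;
}
```

After editing, rebuild with make in sources/gst-plugins/gst-dsexample and reinstall libgstnvdsexample.so. Make sure the plugin's Makefile links libnvbuf_utils (-lnvbuf_utils), otherwise you will hit the same undefined-reference errors at plugin link time.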

@DaneLLL

If our pipeline is file-source -> h264-parser -> nvh264-decoder ->
nvinfer -> filter1 -> nvvidconv -> filter2 -> dsexample -> nvosd -> video-renderer

Do you think this pipeline can be divided into two parts? (The first part would be processed on Xavier, then the remaining stream data and metadata forwarded to another server for the last part of the processing.)

If so, where is the best break point in the pipeline?

Hi thhsiao,
For uploading data to another server, you may refer to the deepstream-test4 sample. Please follow the steps below to run it:
a. Set up a server on a Ubuntu PC
1 On the Ubuntu PC, do

$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_360_d_smart_parking_application.git

2 Go to analytics_server_docker

$ cd deepstream_360_d_smart_parking_application/analytics_server_docker

3 Install Docker and Docker Compose (dependencies are described in README.md)

### Dependencies

The application requires recent versions of [Docker](https://docs.docker.com/install/linux/docker-ce/ubuntu/) and [Docker Compose](https://docs.docker.com/compose/install/#install-compose) to be installed in the machine.

4 Edit start.sh to set IP_ADDRESS to the IP address of the Ubuntu PC, and to set GOOGLE_MAP_API_KEY. If you do not have a Google Maps key, simply edit it to

export GOOGLE_MAP_API_KEY=

5 Run start.sh. This step takes a while and starts the server at the end.

$ ./start.sh

b. On your Xavier, build and run deepstream-test4
1 Follow NVIDIA_DeepStream_SDK_on_Jetson_References to install the software prerequisites and the DeepStream SDK
2 Follow README at

deepstream_sdk_on_jetson\sources\apps\sample_apps\deepstream-test4\README

3 Please note that you have to modify CONNECTION_STRING in deepstream_test4_app.cpp before building it. Replace 10.24.238.58 with the IP address of the Ubuntu PC.

#define CONNECTION_STRING "10.24.238.58;9092;metromind-start"