Add nvvideoconvert element in deepstream-app

Hi Guys,

I am trying to run deepstream-app on a Jetson Nano. Could anyone guide me on how to use the nvvideoconvert plugin after the tracker in this application? I wish to convert the NV12 frames to RGBA, but I am unable to locate where to add this element.
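
To be concrete, something like the following is what I have in mind. This is only a rough sketch; pipeline, tracker, and tiler are placeholder names for the elements deepstream-app already creates, not the actual variable names in the sources:

#include <gst/gst.h>

/* Sketch: insert nvvideoconvert plus a capsfilter after the tracker so the
 * downstream elements receive RGBA instead of NV12 */
GstElement *conv = gst_element_factory_make ("nvvideoconvert", "post-tracker-conv");
GstElement *cap = gst_element_factory_make ("capsfilter", "rgba-caps");
GstCaps *caps = gst_caps_from_string ("video/x-raw(memory:NVMM), format=(string)RGBA");
g_object_set (G_OBJECT (cap), "caps", caps, NULL);
gst_caps_unref (caps);
gst_bin_add_many (GST_BIN (pipeline), conv, cap, NULL);
/* Relink so the converter sits between the tracker and the tiler */
gst_element_link_many (tracker, conv, cap, tiler, NULL);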

Thanks.

Hi,
Here is the architecture of deepstream app:
https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_app_architecture.html

On Nano, secondary classifiers are not enabled. So you would like the format to be RGBA between nvtracker and nvmultistreamtiler?

Hi DaneLLL,

Thanks for the response. In deepstream-app I was able to deduce that a video convert element is being used in the pipeline; however, I was not able to find its configuration. Assuming that the output is in RGBA format, is it possible to directly access the output buffer somehow? My ultimate goal, which I have mentioned in other posts, is to get access to the raw frame and convert it to RGB/RGBA format so that I can send it to a server.
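
To make the goal concrete, something like the pad-probe sketch below is what I had in mind. rgba_src_probe and conv are placeholder names, and I am assuming the buffer carries an NvBufSurface, as DeepStream batched buffers do:

#include <gst/gst.h>
#include "nvbufsurface.h"

/* Sketch: read-only probe on the converter's src pad to reach the RGBA frames */
static GstPadProbeReturn
rgba_src_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map;
  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    NvBufSurface *surface = (NvBufSurface *) map.data;
    /* surface->surfaceList[0] describes the first frame in the batch */
    gst_buffer_unmap (buf, &map);
  }
  return GST_PAD_PROBE_OK;
}

/* Attached somewhere after pipeline construction: */
GstPad *srcpad = gst_element_get_static_pad (conv, "src");
gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER, rgba_src_probe, NULL, NULL);
gst_object_unref (srcpad);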

Thanks.

Hi,
For accessing the buffers, could you enable dsexample?

deepstream_sdk_v4.0_jetson/sources/gst-plugins/gst-dsexample/README

It can be enabled by adding below in config file:

[ds-example]
enable=1
processing-width=640
processing-height=480
full-frame=0
unique-id=15
gpu-id=0

Hi DaneLLL,

I tried enabling ds-example using the above configuration, and I added the library in the Makefile as well. To test, I printed a log message in the ds-example library, but I have not been successful in seeing the log messages. Please find my config file and Makefile below. Kindly point out where I should place the config, or whether I am making a mistake. Is there any documentation on the usage of this library?

Config file:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl


[tiled-display]
enable=0
rows=1
columns=1
width=640
height=480
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:///home/edgetensor/deepstream_sdk_v4.0_jetson/samples/streams/stripes_10.mp4
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=5
sync=1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_nano.txt

[tracker]
enable=1
tracker-width=480
tracker-height=272
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0

[ds-example]
enable=1
processing-width=640
processing-height=480
full-frame=0
unique-id=15
gpu-id=0


[tests]
file-loop=0

Makefile:

APP:= deepstream-app

#TARGET_DEVICE = $(shell gcc -dumpmachine | cut -f1 -d -)
TARGET_DEVICE = $(shell g++ -dumpmachine | cut -f1 -d -)

NVDS_VERSION:=4.0

LIB_INSTALL_DIR?=/opt/nvidia/deepstream/deepstream-$(NVDS_VERSION)/lib/

ifeq ($(TARGET_DEVICE),aarch64)
  CFLAGS:= -DPLATFORM_TEGRA
endif

SRCS:= $(wildcard *.c)
SRCS+= $(wildcard ../../apps-common/src/*.c)

INCS:= $(wildcard *.h)

PKGS:= gstreamer-1.0 gstreamer-video-1.0 x11

OBJS:= $(SRCS:.c=.o)

  
CFLAGS+= -I../../apps-common/includes -I../../../includes -I/usr/src/nvidia/tegra_multimedia_api/include/ -I/usr/local/cuda-10.0/targets/aarch64-linux/include/  -I../../../gst-plugins/gst-dsexample -DDS_VERSION_MINOR=0 -DDS_VERSION_MAJOR=4

LIBS+= -L$(LIB_INSTALL_DIR) -L/usr/local/cuda-10.0/targets/aarch64-linux/lib  -L../../../gst-plugins/gst-dsexample -ljpeg  -lnppicc -lnvdsgst_meta -lnvdsgst_dsexample -lnvbufsurface -lnvds_meta -lnvdsgst_helper -lnvds_utils -lm -lcurl \
       -lgstrtspserver-1.0 -Wl,-rpath,$(LIB_INSTALL_DIR)

CFLAGS+= `pkg-config --cflags $(PKGS)`

LIBS+= `pkg-config --libs $(PKGS)`

all: $(APP)

%.o: %.c $(INCS) Makefile
	$(CC) -c -o $@ $(CFLAGS) $<

$(APP): $(OBJS) Makefile
	$(CC) -o $(APP) $(OBJS) $(LIBS)

clean:
	rm -rf $(OBJS) $(APP)

Edited code in ds-example:

static GstFlowReturn
get_converted_mat (GstDsExample * dsexample, NvBufSurface *input_buf, gint idx,
    NvOSD_RectParams * crop_rect_params, gdouble & ratio, gint input_width,
    gint input_height)
{
  printf("\ndsexample: Entered get_converted_mat\n");
  ...
}

Kindly help me out. I wish to access the buffer, especially after the RGBA conversion, just as is being done in get_converted_mat.

Thanks.

Hi,

nvidia@nvidia-desktop:~/deepstream_sdk_v4.0_jetson/sources/gst-plugins/gst-dsexample$ sudo CUDA_VER=10.0 make install

Please run the above command. It is not required to modify the Makefile of deepstream-app.

Please check deepstream_sdk_v4.0_jetson/sources/gst-plugins/gst-dsexample/README.

Hi DaneLLL,

Thanks a lot, I am able to use the plugin now. However, when I try to dump the output after the RGBA-to-BGR conversion using the imwrite function of OpenCV, the output does not seem to match the input. Please see the code below. Additionally, I am attaching the input and output files for your reference.

static GstFlowReturn
get_converted_mat (GstDsExample * dsexample, NvBufSurface *input_buf, gint idx,
    NvOSD_RectParams * crop_rect_params, gdouble & ratio, gint input_width,
    gint input_height)
{
  printf("\ndsexample: Entered get_converted_mat\n");

  NvBufSurfTransform_Error err;
  NvBufSurfTransformConfigParams transform_config_params;
  NvBufSurfTransformParams transform_params;
  NvBufSurfTransformRect src_rect;
  NvBufSurfTransformRect dst_rect;
  NvBufSurface ip_surf;
  cv::Mat in_mat;
  ip_surf = *input_buf;

  ip_surf.numFilled = ip_surf.batchSize = 1;
  ip_surf.surfaceList = &(input_buf->surfaceList[idx]);

  gint src_left = GST_ROUND_UP_2(crop_rect_params->left);
  gint src_top = GST_ROUND_UP_2(crop_rect_params->top);
  gint src_width = GST_ROUND_DOWN_2(crop_rect_params->width);
  gint src_height = GST_ROUND_DOWN_2(crop_rect_params->height);

  // Maintain aspect ratio
  double hdest = dsexample->processing_width * src_height / (double) src_width;
  double wdest = dsexample->processing_height * src_width / (double) src_height;
  guint dest_width, dest_height;

  if (hdest <= dsexample->processing_height) {
    dest_width = dsexample->processing_width;
    dest_height = hdest;
  } else {
    dest_width = wdest;
    dest_height = dsexample->processing_height;
  }

  // Configure transform session parameters for the transformation
  transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
  transform_config_params.gpu_id = dsexample->gpu_id;
  transform_config_params.cuda_stream = dsexample->cuda_stream;

  // Set the transform session parameters for the conversions executed in this
  // thread.
  err = NvBufSurfTransformSetSessionParams (&transform_config_params);
  if (err != NvBufSurfTransformError_Success) {
    GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
        ("NvBufSurfTransformSetSessionParams failed with error %d", err), (NULL));
    goto error;
  }

  // Calculate scaling ratio while maintaining aspect ratio
  ratio = MIN (1.0 * dest_width/ src_width, 1.0 * dest_height / src_height);

  if ((crop_rect_params->width == 0) || (crop_rect_params->height == 0)) {
    GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
        ("%s:crop_rect_params dimensions are zero",__func__), (NULL));
    goto error;
  }

#ifdef __aarch64__
  if (ratio <= 1.0 / 16 || ratio >= 16.0) {
    // Currently cannot scale by ratio > 16 or < 1/16 for Jetson
    goto error;
  }
#endif
  // Set the transform ROIs for source and destination
  src_rect = {(guint)src_top, (guint)src_left, (guint)src_width, (guint)src_height};
  dst_rect = {0, 0, (guint)dest_width, (guint)dest_height};

  // Set the transform parameters
  transform_params.src_rect = &src_rect;
  transform_params.dst_rect = &dst_rect;
  transform_params.transform_flag =
    NVBUFSURF_TRANSFORM_FILTER | NVBUFSURF_TRANSFORM_CROP_SRC |
      NVBUFSURF_TRANSFORM_CROP_DST;
  transform_params.transform_filter = NvBufSurfTransformInter_Default;

  //Memset the memory
  NvBufSurfaceMemSet (dsexample->inter_buf, 0, 0, 0);

  GST_DEBUG_OBJECT (dsexample, "Scaling and converting input buffer\n");

  // Transformation scaling+format conversion if any.
  err = NvBufSurfTransform (&ip_surf, dsexample->inter_buf, &transform_params);
  if (err != NvBufSurfTransformError_Success) {
    GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
        ("NvBufSurfTransform failed with error %d while converting buffer", err),
        (NULL));
    goto error;
  }
  // Map the buffer so that it can be accessed by CPU
  if (NvBufSurfaceMap (dsexample->inter_buf, 0, 0, NVBUF_MAP_READ) != 0){
    goto error;
  }

  // Cache the mapped data for CPU access
  NvBufSurfaceSyncForCpu (dsexample->inter_buf, 0, 0);

  // Use openCV to remove padding and convert RGBA to BGR. Can be skipped if
  // algorithm can handle padded RGBA data.
  in_mat =
      cv::Mat (dsexample->processing_height, dsexample->processing_width,
      CV_8UC4, dsexample->inter_buf->surfaceList[0].mappedAddr.addr[0],
      dsexample->inter_buf->surfaceList[0].pitch);

  cv::cvtColor (in_mat, *dsexample->cvmat, CV_RGBA2BGR);
  cv::imwrite("test.jpg",*dsexample->cvmat);

  if (NvBufSurfaceUnMap (dsexample->inter_buf, 0, 0)){
    goto error;
  }

#ifdef __aarch64__
  // To use the converted buffer in CUDA, create an EGLImage and then use
  // CUDA-EGL interop APIs
  if (USE_EGLIMAGE) {
    if (NvBufSurfaceMapEglImage (dsexample->inter_buf, 0) !=0 ) {
      goto error;
    }

    // dsexample->inter_buf->surfaceList[0].mappedAddr.eglImage
    // Use interop APIs cuGraphicsEGLRegisterImage and
    // cuGraphicsResourceGetMappedEglFrame to access the buffer in CUDA

    // Destroy the EGLImage
    NvBufSurfaceUnMapEglImage (dsexample->inter_buf, 0);
  }
#endif

  /* We will first convert only the Region of Interest (the entire frame or the
   * object bounding box) to RGB and then scale the converted RGB frame to
   * processing resolution. */
  return GST_FLOW_OK;

error:
  return GST_FLOW_ERROR;
}
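
Could the mismatch simply be the aspect-ratio padding? As far as I can tell (this is an assumption on my part), the transform only fills the top-left dest_width x dest_height region of the processing_width x processing_height surface, and the rest is the memset padding. If so, cropping before writing might help; here is a sketch of what I could try inside get_converted_mat, reusing the dest_width/dest_height locals computed above:

  // Hypothetical variant: write only the filled region instead of the
  // whole padded surface
  cv::Mat roi = in_mat (cv::Rect (0, 0, dest_width, dest_height));
  cv::Mat bgr;
  cv::cvtColor (roi, bgr, CV_RGBA2BGR);
  cv::imwrite ("test_roi.jpg", bgr);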

Kindly help me out.

Thanks.

debug.zip (7.7 KB)

Let’s continue the discussion in RTSP camera access frame issue - DeepStream SDK - NVIDIA Developer Forums