Detected images are blurred even at 25FPS

@jasonpgf2a, the processing width and processing height are the same as the camera resolution, and my scale ratio in gstdsexample.cpp is 1.

   dest_width = src_width;
   dest_height = src_height;

Transformation params:

   transform_params.transform_flag =
     NVBUFSURF_TRANSFORM_FILTER | NVBUFSURF_TRANSFORM_CROP_SRC |
       NVBUFSURF_TRANSFORM_CROP_DST;
   transform_params.transform_filter = NvBufSurfTransformInter_Default; 
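
For reference, the rest of the transform setup follows the stock gstdsexample flow - roughly like this (variable names as in the stock plugin; the point is that the source crop and the destination rect have the same size, so nothing gets scaled):

   /* Crop rect taken straight from the detector bbox (rounded to even),
      destination rect at the same size, so there is no scaling. */
   NvBufSurfTransformRect src_rect = { (guint) src_top, (guint) src_left,
       (guint) src_width, (guint) src_height };
   NvBufSurfTransformRect dst_rect = { 0, 0, (guint) dest_width, (guint) dest_height };

   transform_params.src_rect = &src_rect;
   transform_params.dst_rect = &dst_rect;

   /* Copy the crop from the decoded frame into inter_buf (RGBA). */
   if (NvBufSurfTransform (&ip_surf, dsexample->inter_buf, &transform_params) !=
       NvBufSurfTransformError_Success) {
     GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
         ("NvBufSurfTransform failed"), (NULL));
   }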

Not sure if I have Gaussian blur turned on or off. Where can I find that specific option?

[ds-example]
enable=1
processing-width=1920
processing-height=1080
full-frame=0
unique-id=15
gpu-id=0

I’m not sure - maybe the detail is in here: https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide/deepstream_quick_start.html#

I just remember when I was toying with the dsexample code for my own purposes (motion detection) that it scales the frame when creating the cv::Mat and in some circumstances calls Gaussian blur.

Check the source for the detail - it's only a small file. ;-)
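
If it helps, from memory the DS5.0dp blur path does something roughly like this per detected object (kernel size and sigma are from memory, so treat them as illustrative rather than exact):

  /* DS5.0dp gstdsexample.cpp, roughly: blur each object's ROI in place
     when blur-objects is enabled. */
  cv::Rect crop (crop_rect_params->left, crop_rect_params->top,
      crop_rect_params->width, crop_rect_params->height);
  cv::GaussianBlur (in_mat (crop), in_mat (crop), cv::Size (15, 15), 4);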

Yes, I already checked that file and can't find anything related to Gaussian blur. Also, the scale is set to 1, so there is no scaling issue either. Let me go line by line and try to figure this out. Thank you for your response.

I’m looking at DS5.0dp - maybe the Gaussian blur is only in that version.

Yes, I will try to use the DS5.0 code instead of the old version and see if it works.

This is how I am trying to save the detected object:

  cv::Mat in_mat, out_mat;
  gint src_left = GST_ROUND_UP_2((unsigned int)crop_rect_params->left);
  gint src_top = GST_ROUND_UP_2((unsigned int)crop_rect_params->top);
  gint src_width = GST_ROUND_DOWN_2((unsigned int)crop_rect_params->width);
  gint src_height = GST_ROUND_DOWN_2((unsigned int)crop_rect_params->height);

  dest_width = src_width;
  dest_height = src_height;
  in_mat =
      cv::Mat (dest_height, dest_width,
      CV_8UC4, dsexample->inter_buf->surfaceList[0].mappedAddr.addr[0],
      dsexample->inter_buf->surfaceList[0].pitch);

#if (CV_MAJOR_VERSION >= 4)
  cv::cvtColor (in_mat, out_mat, cv::COLOR_RGBA2BGR);
#else
  cv::cvtColor (in_mat, out_mat, CV_RGBA2BGR);
#endif
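
The write itself is then just a cv::imwrite on out_mat, something like this (the file name and numbering here are only placeholders):

  /* Write the BGR crop to disk; the file name pattern is just a placeholder. */
  static guint obj_count = 0;
  std::string filename = "object_" + std::to_string (obj_count++) + ".jpg";
  cv::imwrite (filename, out_mat);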

Still unable to resolve this, any pointers?

Hopefully someone from NVIDIA can take a look, as I’ve run out of ideas… The last thing would be to test your config with full-frame=1 and see what happens. When full-frame is 0 I thought it just cropped out the image inside the bounding box.

A few months back, when DS5.0dp first came out, I was creating a motion detector using dsexample as a base, and when I dumped images to file they were full resolution, so I’m not sure what’s wrong with your test.

Hopefully someone from NVIDIA can take a look for you…

Yes, valid point. Maybe I can try with the full frame, crop the data separately, and see how it looks.

Yeah, I think what you’re doing at the moment is cropping out the bounding box and then blowing it up to 1920x1080.

I checked if that is the case, but it’s not: the aspect ratio of my detection boundary is maintained in the final image as well.

  gint src_left = GST_ROUND_UP_2((unsigned int)crop_rect_params->left);
  gint src_top = GST_ROUND_UP_2((unsigned int)crop_rect_params->top);
  gint src_width = GST_ROUND_DOWN_2((unsigned int)crop_rect_params->width);
  gint src_height = GST_ROUND_DOWN_2((unsigned int)crop_rect_params->height);

  /* Maintain aspect ratio */
  double hdest = dsexample->processing_width * src_height / (double) src_width;
  double wdest = dsexample->processing_height * src_width / (double) src_height;
  guint dest_width, dest_height, top, left;
  dest_width = src_width;
  dest_height = src_height;
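
For comparison, the stock plugin picks the destination size from hdest/wdest so the crop is scaled into the processing resolution while keeping its aspect ratio - roughly like this, from memory of gstdsexample.cpp, whereas I keep dest_width/dest_height pinned to the source crop size:

  /* Stock gstdsexample.cpp (from memory): scale the crop into the
     processing resolution, keeping aspect ratio. I bypass this and
     keep dest == src so nothing is scaled. */
  if (hdest <= dsexample->processing_height) {
    dest_width = dsexample->processing_width;
    dest_height = hdest;
  } else {
    dest_width = wdest;
    dest_height = dsexample->processing_height;
  }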

Turns out saving the entire frame and then cropping is bad for runtime performance, especially when you have 10 streams running in parallel. I tried that anyway and still got poor results.

@kayccc any pointers on this?

Is there anyone from Nvidia who can help me here?

@DaneLLL Any pointers?

Hi
There is a sample of saving to disk, FYR.

You can try to launch an RTSP server through test-mp4 and see if the issue is specific to your source.

I used the exact same reference. It’s not like the image is blurred every time, but 3 out of 5 times it is really blurred.

Hi,
It looks more like an issue where the h264 stream is not received completely and certain macroblocks cannot be correctly decoded. Suggest you save the h264 stream to a file:

rtspsrc ! rtph264depay ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=dump.h264

Then check the validity of the stream via ffmpeg or the JM decoder. You can also try deepstream-test1.

We don’t have similar issues reported on DeepStream SDK 5.0 DP, but this actually is not a stable release. You may wait for DeepStream SDK 5.0 GA.


How does one check the validity of the stream? What are the commands?

Hi,
You can use the JM decoder for this; you can download it from http://iphome.hhi.de/suehring/