Can deepstream handle higher resolutions than 1080p?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 5.0 GA

• Issue Type( questions, new requirements, bugs) QUESTION

I have a DeepStream app that uses smart record and works really well at 1080p with 4 streams at 30fps (haven’t tested higher).

If I use higher resolutions on my cameras but keep the streammux element set to 1280*1080, I often get glitchy recorded videos. Playback can be a bit jumpy and not smooth.

Could this be because it’s taking too long to resize each frame going into the streammuxer?
Since smart record saves buffers before decoding, I would have thought that if I wanted higher resolutions I could just increase the camera resolution but leave the streammuxer at 1280x1080, so that the rest of the pipeline worked on the smaller frames.
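For reference, the split I’m describing would look something like this in a deepstream-app style config (a sketch only - the source URI and exact values are placeholders, not my actual config):

```
# Illustrative deepstream-app style config (placeholder values):
# the camera delivers a high-resolution stream, but nvstreammux
# scales everything down so the rest of the pipeline works on smaller frames.

[source0]
type=4
uri=rtsp://camera-address/stream

[streammux]
width=1280
height=1080
batch-size=4
batched-push-timeout=40000
```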

Hi @jasonpgf2a,
Is it still 4 streams with the higher resolution cameras? What is the fps per stream at the higher resolution?
What happens if you set the resolution of streammux to be the same as the resolution of the camera, even though it is higher? I think this can save one scale from the higher camera resolution down to 1280 * 1080, but it will consume more memory for the cached frames.


I’ve tried a few combinations. I have 4 cameras but typically only use 3.

So at 1080p and 30fps, with the streammuxer also set to 1080p, it’s fine. Nice smooth videos are created with smart record, although I do get a slight pause on most of them at around 2 seconds into the clip. I have no idea what that is…

Anyway - when I go to higher resolutions, such as 2560x1440, but keep the streammux at 1080p and drop the fps down to 15, I find that recorded clips are a bit jumpy at the start but then smooth out after a couple of seconds.

I have tried matching the streammux values to 2560x1440 as well, and it doesn’t make a noticeable difference.

I’m now trying increasing the cameras to full resolution at 3072x2048 @ 15fps but leaving the streammuxer at 1080p, and it seems to be the same… Recorded videos look a little jumpy, as if frames are being dropped, though I’m not seeing any messages about it on the terminal. Is there a way I can send sample videos so you can see what I mean?

I have the pgie interval set to 4 for all these tests.

Could it be that the encoders can’t keep up? I see the jetson nano specs show this for the encoder:

500 MP/sec
1x 4K @ 60 (HEVC)
2x 4K @ 30 (HEVC)
4x 1080p @ 60 (HEVC)
8x 1080p @ 30 (HEVC)

So if I’m testing at 3072x2048, this is just under 4K (3840x2160) resolution, so it probably can’t handle more than 2 streams.

I’m testing at 15fps, which I thought would help. H.265.
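As a rough sanity check (assuming the 500 MP/sec spec simply counts raw pixels per second across all encode sessions), the arithmetic for my test case looks like this:

```python
# Back-of-envelope check of encoder load against the Nano's quoted 500 MP/sec.
width, height, fps, streams = 3072, 2048, 15, 3  # my current test setup

per_stream = width * height * fps / 1e6  # megapixels per second, one stream
total = per_stream * streams             # all three cameras together

print(f"{per_stream:.1f} MP/s per stream, {total:.1f} MP/s total")
# prints: 94.4 MP/s per stream, 283.1 MP/s total
```

So three streams at 3072x2048 @ 15fps should still sit well under the 500 MP/sec encoder budget, which suggests raw encoder throughput isn’t the bottleneck.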

The issue doesn’t seem to be the resnet10 model, as it doesn’t look like the GPU utilisation is high at all (interval set to 4).

Maybe it’s a file writing issue from smart record.
I typically see a slight pause (frame drop) in recorded videos, regardless of the resolution, at approximately the smart record “startTime”.

With 3072x2048, is the total pixel number over 500 MP/sec?

The issue doesn’t seem to be the resnet10 model, as it doesn’t look like the GPU utilisation is high at all (interval set to 4).

Agree with you, since it works before increasing camera resolution.

Maybe it’s a file writing issue from smart record.

Did you try changing the output as fakesink or dump the encoded video into RAM (/run/ folder)?
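For example, a standalone test pipeline along these lines (the element choice and RTSP URI are placeholders for your actual setup) would exercise decode and encode while discarding the output, taking file writing out of the picture:

```
gst-launch-1.0 rtspsrc location=rtsp://camera-address/stream ! \
  rtph265depay ! h265parse ! nvv4l2decoder ! \
  nvv4l2h265enc ! h265parse ! fakesink
```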

With the higher resolution camera, if there is no smart recording, can you still observe the issue?

I already write the files to a ram disk so that’s probably not the slow spot.

Could you share the GStreamer Pipeline Graph ?


Have attached a zipped svg file showing the pipeline graph with smart record elements added prior to the decoder. (24.2 KB)

Hi @jasonpgf2a,
Sorry for delay!

Checked the pipeline, I have two questions:

  1. As mentioned previously, this nvvideoconvert is not needed.

  2. It looks like the output resolution of nvstreammux is still 4K, as below - is this expected?

Thanks for the info on the unnecessary converter. Though isn’t it the case that it just acts like a no-op if no conversion is necessary, such that it would not slow the pipeline in any way?
I will remove it in any case.

Should I remove the queue before the converter as well?

For the streammux I have tested at the camera resolution (as shown in the pipeline graph) and also at 1920x1080. There is not much of a difference. I would say it’s very slightly better when the streammux matches the camera resolution.

Hi @mchi I have removed the converter and caps elements as suggested (since the capabilities were the same before and after) and have left the preceding queue in place (which is just after the decoder).

I’ve recorded a few test videos and the first 4 were perfect but then I see the pause again at the 3 second mark into the 5th recorded video. So I don’t think having the converter in there makes any difference.

Hi @jasonpgf2a,

I have removed the converter and caps elements as suggested


I’ve recorded a few test videos and the first 4 were perfect but then I see the pause again at the 3 second mark into the 5th recorded video

Could you increase ‘buffer-pool-size’ of nvstreammux and check again? Currently, it’s 4 by default.
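If you are using a deepstream-app style config, this is the buffer-pool-size key in the [streammux] group, e.g. (illustrative values):

```
[streammux]
width=1920
height=1080
batch-size=4
buffer-pool-size=8
```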
And, could you share the tegrastats log which can be captured by below command?

$ sudo tegrastats

before.txt (11.9 KB)

I already have the streammux buffer-pool-size property set to 16 as I saw that the deepstream-app code did this. Shall I try it higher/lower?

Tegrastats log attached. During this run I walked in front of the camera to trigger a detection which starts smart record. Here is a link to the file I’ve saved on dropbox - you can see the slight pause about 2 seconds into playing:

the recorded video is the output of the smart recording path, right?

Did you try some experiments, like increasing the inference interval, disabling the tracker, disabling DsAnalytics, and so on?


@mchi Correct. I’m using smart record to generate the video files (mp4 container).

When there is a detection (as determined in a metadata probe) I start smart record (and flag that smart record is active so that it doesn’t try to start new smart records while one is already recording).

I’ll test disabling DsAnalytics and the tracker and see if it makes any difference. I can’t see how it would, since these are in a different branch of the tee from smart record.
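For what it’s worth, the guard I mentioned is essentially this pattern (a simplified Python sketch; `start_recording` stands in for the actual call into the smart record API, which isn’t shown here):

```python
import threading

class SmartRecordGuard:
    """Ensures only one smart-record session runs at a time."""

    def __init__(self, start_recording):
        self._start = start_recording   # e.g. a wrapper around the real start call
        self._active = False
        self._lock = threading.Lock()   # probe callbacks may fire concurrently

    def on_detection(self):
        """Called from the metadata probe when an object is detected."""
        with self._lock:
            if self._active:
                return False            # a clip is already being recorded
            self._active = True
        self._start()
        return True

    def on_record_done(self):
        """Called from the smart-record 'done' callback to clear the flag."""
        with self._lock:
            self._active = False
```

While the flag is set, further detections are ignored; the done callback re-arms the guard.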

Hi @jasonpgf2a,
May I know if you have any findings?


Also, you could refer to the DeepStream SDK FAQ to measure the latency of each component and find the bottleneck of the pipeline.
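As described in the FAQ, the latency measurement is enabled via environment variables before running the app (the app invocation below is a placeholder for your own command):

```shell
# Enable DeepStream's built-in latency measurement (per the SDK FAQ).
export NVDS_ENABLE_LATENCY_MEASUREMENT=1
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1
# Then run the app as usual; per-component latency is printed periodically.
# ./deepstream-app -c <config>   # placeholder command
```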