LibArgus performance difference between JetPack 3.3 (L4T 28.2.1) and JetPack 4.2 (L4T 32.1)


I am developing video processing pipelines on a Jetson TX2 using the native libArgus provided by the NVIDIA JetPack, and I have a few questions regarding libArgus performance.
I am hoping someone can provide some insight on these matters.

  1. One of the performance evaluations is to produce 4 streams from a single 4K camera source (using an IMX274 through a Leopard Imaging 3-port adapter board as a test).
    The 4 (consumer) stream settings are:
  • 4K src => LibArgus (downscaled to 1080p, YUV420M) => H.264 encoded
  • 4K src => LibArgus (downscaled to 720p, YUV420M) => H.264 encoded
  • 4K src => LibArgus (cropped to 1080p, YUV420M) => H.264 encoded
  • 4K src => LibArgus (cropped to 1080p and downscaled to 720p, YUV420M) => H.264 encoded
    I am assuming the downscaling and color space conversion are performed in the TX2 ISP fed by the camera core.
  • Using JetPack 3.3 (L4T 28.2.1), I was able to achieve 30 fps at the libArgus output and also at the H.264 encoder capture plane, so the aggregate performance is 120 fps across the 4 streams given the settings above.
    Using JetPack 4.2 (L4T 32.1), I was NOT able to achieve the same performance: I can only reach an aggregate of about 99 fps across all 4 streams at the libArgus output.
    Was this reduction in performance due to an ISP software change, or to some other reason?
    Is there any way I can increase performance on JetPack 4.2?
    In both cases, the frame output of libArgus is copied into a DMA buffer using the IImageNativeBuffer interface.
  2. I have also noticed that the following .so files are present in both JetPack 3.3 and JetPack 4.2:
    a. JetPack 3.3:,, and
    b. JetPack 4.2:,,
    c. Using the socketclient version (typical for GStreamer plugins and non-native applications) invokes nvargus-daemon (as a server?)
  • In JetPack 3.3, the default .so to link against for a native libArgus application is the native one, and everything works well.
  • In JetPack 4.2, the default .so to link against for a native libArgus application has changed to the socketclient version instead of the native one, and this incurs additional CPU load and an fps reduction in my video processing experiments.
    When I switch back to linking the native .so for my experiment, nvargus-daemon is no longer invoked, but there seems to be a more apparent memory leak; is this to be expected?
    I would like to understand the primary reason behind switching from the native .so to the socketclient version. Was this primarily to address/reduce the memory leak issue?
  3. The JetPack 4.2 release notes acknowledge the memory leak issue.
  • Is there an ETA (such as the next release) or a planned development path to address this?
  • Is this memory leak in addition to the one mentioned in L4T 28.3, or is it the same one?

Best regards,

Could you boost to performance mode and check whether there is still a performance gap?

Hi, ShaneCCC,

I believe I am already using the maximum settings available.
Both the JetPack 3.3 and JetPack 4.2 tests were performed under the same conditions, shown below.

NV Power Mode: MAXN

SOC family:tegra186 Machine:quill
Online CPUs: 0-5
CPU Cluster Switching: Disabled
cpu0: Governor=performance MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu1: Governor=performance MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu2: Governor=performance MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu3: Governor=performance MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu4: Governor=performance MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu5: Governor=performance MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
GPU MinFreq=1300500000 MaxFreq=1300500000 CurrentFreq=1300500000
EMC MinFreq=40800000 MaxFreq=1866000000 CurrentFreq=1866000000 FreqOverride=1
Fan: speed=255

Best Regards,

Hi @jying,

I would be curious whether you have any additional findings, and whether the situation has improved in the latest JetPack?

Cheers & thanks!