TX2 Ethernet port can’t handle high incoming traffic: GigE stream issue

I am using a GigE camera with my Jetson TX2. The camera is connected via the Ethernet port, which is configured for 1000 Mb/s. The video stream format is raw Bayer at 14 fps.

I have observed frame losses at frame rates above 12 fps. This is because the RX Ethernet buffers cannot handle the incoming traffic in time. The following network parameters are already set to their maximums; this has improved capacity, but not yet to the required level.
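For reference, this is the kind of receive-buffer tuning I mean (interface name and exact values are illustrative assumptions, not my actual settings):

```shell
# Enlarge the kernel's socket receive buffers so bursts of GigE Vision
# packets are queued rather than dropped (values are illustrative)
sysctl -w net.core.rmem_max=26214400
sysctl -w net.core.rmem_default=26214400

# Allow more packets to queue on the backlog before the stack drops them
sysctl -w net.core.netdev_max_backlog=5000

# Raise the NIC's own RX ring toward its advertised maximum
ethtool -g eth0            # show supported/current ring sizes
ethtool -G eth0 rx 4096    # set the RX ring (if the driver allows it)
```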


I don’t know if it is important, but I could see that the Denver cores are in cpu-idle mode. Is that somehow connected to the buffer loss issue?

How can I achieve the full capacity and avoid RX losses?

I’m using a JetPack 4.2.1 based Yocto image for flashing.


While investigating this issue, I set different nvpmodel modes and ran the camera again. These are my observations:

  1. The camera ran without packet losses in nvpmodel modes 1 and 3.
  2. The remaining modes failed to handle the incoming data from the camera.
  3. In modes 1 and 3 the Denver CPUs were inactive. My conclusion is that RX buffer losses occur when both the Denver and A57 CPUs are active (modes 0 and 2).
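For anyone reproducing the observations above, the mode switching can be sketched with nvpmodel (the mode-to-core mapping below follows the TX2 defaults; treat it as an assumption and check `nvpmodel -q` on your board):

```shell
# Query the current power model
nvpmodel -q

# Mode 1 (MAXQ): Denver cores offline, only the A57 cluster active
nvpmodel -m 1

# Mode 0 (MAXN): all 6 cores (2 Denver + 4 A57) online
nvpmodel -m 0

# Verify which cores are actually online after switching
cat /sys/devices/system/cpu/online
```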

hello anishmonachan7,

it’s surprising that you found this issue related to the Denver cores being enabled.
May I know who your sensor vendor is?
Please also refer to Jetson Partner Supported Cameras. Are you working with cameras supported by the Jetson Camera Partners on the Jetson platform?

Hi JerryChang,

Thanks for your reply.

I’m using IDS uEye camera sensors. These are not in the list of cameras supported by NVIDIA.

I have already been using these sensors with older versions of JetPack, e.g. JetPack 3.2.1, and I had no buffer losses with those versions.

Now I’m testing on the new JetPack release: Linux4Tegra R32.2, JetPack 4.2.1.

Hello JerryChang,
I’ve been waiting to receive some updates on this issue.
Have you been able to have a look again?

Hi anishmonachan7,

please see this post


Hey Bibek,

Thanks for your reply. I have followed all the methods given in those posts.
I’ve set processor affinity; I have even tried pinning the task to a single processor and to individual threads.
I don’t think this approach will solve my RX buffer loss issue.

A single processor or thread might not be able to handle gigabit incoming traffic without core switching/sharing. Incoming traffic handling got even worse when I pinned the task to the Denver cores, and performance stayed the same when I pinned it to the A57 cores. When I set affinity to all CPUs, the issue reappears.
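For completeness, the pinning experiments above look roughly like this (the application name, IRQ number, and the core numbering of the TX2 clusters are assumptions for illustration):

```shell
# Pin the receiving application to the A57 cluster
# (on TX2, cores 0 and 3-5 are commonly the A57s; verify on your board)
taskset -c 0,3,4,5 ./camera_app

# Find the Ethernet controller's IRQ line
grep eth0 /proc/interrupts

# Steer that IRQ to a single A57 core (replace 123 with the real IRQ;
# the mask 1 means CPU0 only)
echo 1 > /proc/irq/123/smp_affinity
```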

The issue in my case is very clear: my workload needs both the Denver and A57 clusters, all 6 cores, to handle the incoming traffic on the Ethernet interface, yet there is RX buffer loss whenever the Denver cores are activated.

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.

How have you concluded that you need 6 cores, and not 4 or 8?
How about increasing the payload size and reducing the interrupt count?
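The suggestion above would translate into something like the following. Whether the TX2 MAC, the switch path, and the camera all support jumbo frames and coalescing is an assumption to verify:

```shell
# Increase the payload per packet with jumbo frames, so fewer packets
# (and fewer interrupts) are needed for the same video bandwidth
ip link set dev eth0 mtu 9000

# Reduce the interrupt count via coalescing, if the driver supports it:
# wait up to 100 us or 64 frames before raising an RX interrupt
ethtool -C eth0 rx-usecs 100 rx-frames 64

# The camera's GigE Vision packet size must be raised to match the MTU
# (done in the vendor SDK, e.g. the uEye configuration tool)
```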