Multiple GigE cameras

Hello. I have a task to use 3 GigE cameras (they are supposed to take about 7 pictures per second - not a continuous video stream!) with a Jetson TX1. Has anyone tried that? I know that the TX1 has only one gigabit Ethernet port. Should I use some switch? I’ve also read that the TK1 has a dual Ethernet mini-PCIe card option, so there can be up to 3 gigabit Ethernet ports. Is there such an option on the TX1 too? Or maybe this idea is ridiculous and should be abandoned?

Thanks for help,

Hopefully someone with multiple gigabit cameras will also answer; these are just thoughts on the topic and on other carrier boards.

I can’t give a definitive answer, but basically an Ethernet switch shares bandwidth, so if the cameras actually need gigabit, then a switch would not work (each camera would be fighting with the others if they operate at the same time). You’d need three ports where each port has its own controller. The mini-PCIe slot of the TK1 has only a single lane, and I doubt you’ll find any mini-PCIe NICs with two or more ports and independent controllers (the PCIe revision would also have to be at least revision 2, and most mini-PCIe hardware is only revision 1).

The TX1 and TX2 have four lanes for PCIe on the development board. The carrier you choose (dev. kit is just one) may change what PCIe is available. Perhaps someone familiar with different carriers would comment.

PCIe revision 1 is basically a practical throughput of around 2 gigabit per lane, and so three cameras on gen. 1 (rev. 1 == gen. 1 == v1) would not work. Also, NICs with three ports and actual controllers on each port (versus being a switch) mostly don’t exist (mostly you get port counts of one, two, or four).

You might use the built-in gigabit port for one camera, along with a two-port NIC (so long as it has independent controllers), but you’d have to either guarantee the NIC works at rev. 2 speeds (a rev. 2 NIC can throttle back to rev. 1 if it doesn’t like signal quality) or get one with at least two lanes. Two lanes, even at rev. 1 speeds, will always provide around four gigabit/sec, so any PCIe two-port or four-port NIC with two or more lanes could work if it is designed with independent controllers. A rev. 2 lane has a practical max throughput of about 4 Gb/s, so a four-port NIC would require either two lanes or a single guaranteed rev. 2 lane. A PCIe x4 (four-lane) card would be good for low latency on a four-port NIC since each port would effectively have its own lane and would not come even close to saturating the PCIe side’s bandwidth.
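For a rough feel of the numbers, the aggregate camera data rate can be estimated with simple arithmetic. This is only a sketch with hypothetical sensor numbers (2048x2048 mono, 8-bit pixels are assumptions; substitute your camera’s actual resolution and pixel depth):

```shell
# Rough aggregate bandwidth estimate (hypothetical sensor: adjust to yours).
BYTES_PER_FRAME=$((2048 * 2048 * 1))           # 2048x2048, 1 byte/pixel
BITS_PER_SEC=$((BYTES_PER_FRAME * 8 * 7 * 3))  # 8 bits/byte, 7 fps, 3 cameras
echo "Aggregate: ${BITS_PER_SEC} bits/s"       # compare against ~2 Gb/s per gen. 1 lane
```

Note that even if the average rate looks modest at 7 fps, GigE Vision cameras typically burst each frame at full line rate, which is why the per-port controller reasoning above still matters.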

Not all NICs work well on the Jetson’s arm64 architecture. I wouldn’t pick one up unless you can verify it works with native Linux drivers on current kernel versions. Drivers available from a manufacturer are typically only for desktop PCs (those which are simply a kernel config and build typically work on any architecture). Some newer NICs may need drivers not yet available in current kernel versions.
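One quick way to check driver support on the Jetson itself (a sketch; `r8169` is the Realtek module mentioned below, so substitute whatever driver your candidate NIC uses):

```shell
# Check which kernel driver (if any) claims each Ethernet controller:
lspci -k | grep -B 2 -A 2 -i ethernet
# Confirm a given module actually ships with the running kernel:
modinfo r8169 2>/dev/null | grep -E '^(filename|description):'
```

If `lspci -k` shows no “Kernel driver in use” line for the card, the running kernel has no driver bound to it.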

PCIe NICs can consume significant power; make sure you can live with the power consumption, physical size, and weight. The dev. carrier board does not have any kind of chassis, so you’d need to guarantee rigid mounting (especially on a moving vehicle which is bouncing around). Each NIC may also have temperature limitations, so if this is outdoors consider that.

The driver used with the integrated NIC is Realtek’s. If you get a NIC using that same driver, then you won’t have to worry about drivers. I am not recommending this NIC, but if I were to experiment, then I might start with this:

This is a lower-cost NIC using gen. 2, but the listing doesn’t say whether it is one lane or two, nor whether it has independent controllers for each port. I suspect it does have independent controllers, simply because otherwise it’d be nothing more than a single NIC with a switch. If you could research a NIC such as this and confirm that each port has its own gigabit controller which won’t compete with the next port, then it might be a good starting point (perhaps the lower cost comes from not having independent controllers). Since this NIC is gen. 2 capable, it might be single lane, which would be good enough provided it actually operates at gen. 2 (note that “capable” is not the same as “will always run at gen. 2 speeds”). If it turns out to be two lanes or more, then it wouldn’t matter if it clocks back to gen. 1 (though gen. 2 would still give better performance and latency).
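Once a card is installed, the negotiated lane width and speed can be verified rather than guessed. A sketch (the bus address `01:00.0` is hypothetical; find yours with a plain `lspci` first):

```shell
# Inspect the PCIe link a NIC actually trained at.
# LnkCap = what the card can do, LnkSta = what it negotiated.
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
# e.g. "LnkSta: Speed 5GT/s, Width x1" means gen. 2 at one lane.
```

If `LnkSta` reports `2.5GT/s` on a gen. 2 capable card, the link has throttled back to gen. 1 and the bandwidth math above changes accordingly.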

PS. I have a problem with connecting one camera to the Jetson. When I plug it into the Ethernet port, no connection is shown and VimbaViewer says there is a “transportation layer error”. When I run VimbaViewer with “sudo -E” it detects the camera, but I can only run CONFIG MODE. I tried to change the MTU for eth0 and probably broke something. Tested with normal internet access (from router via Ethernet) and no information about the connection is shown (however I can open the web). I added:

auto eth0
iface eth0 inet static
mtu 7750

using “sudo gedit /etc/network/interfaces”. Previously this file was empty, only including interfaces.d (which is also empty). I got some errors about gedit metadata, but the file was saved. Now when I type “sudo ifconfig eth0”, an inet address is shown, but no camera is detected at this IP.

EDIT: I somehow managed to fix it. Now I can open the camera stream, but it has really low fps (9 fps, while it should be 30). jetson_clocks was launched. I also used:

sudo sysctl -w net.core.rmem_max=33554432
sudo sysctl -w net.core.wmem_max=33554432
sudo sysctl -w net.core.rmem_default=33554432
sudo sysctl -w net.core.wmem_default=33554432

Should I add some lines to the sysctl.conf file? If yes - something went wrong. I set MTU to 7750 (as mentioned in:

I have no knowledge of your camera, and much of this depends on the camera. To see your network setup, run “ifconfig” (did your router set the interface to the correct subnet?). Probably “eth0” is the one you are interested in, unless you have a second network card (WiFi can count…if you are using WiFi all bets are off since this can destroy performance). If you have multiple cameras, then you need to be sure each is set up correctly to not interfere with the others. I have no idea if your cameras have static IP addresses, if they use DHCP, and so on.
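Regarding your sysctl question: those `sysctl -w` settings are lost on reboot unless persisted. A sketch, reusing the exact values you tested:

```shell
# Append the tested buffer sizes to /etc/sysctl.conf so they survive reboots:
sudo tee -a /etc/sysctl.conf <<'EOF'
net.core.rmem_max=33554432
net.core.wmem_max=33554432
net.core.rmem_default=33554432
net.core.wmem_default=33554432
EOF
sudo sysctl -p   # reload the file without rebooting
```

Adding lines there should not by itself break networking; your connection problem was more likely the MTU change in /etc/network/interfaces.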

Low framerate is probably less related to Ethernet throughput than to other issues. Granted, if you have a certain framerate from one GigE camera and add a second camera’s stream on the same GigE network, then data throughput would suffer a lot. On the other hand, the first GigE camera (without a second camera) should be expected to run at its full speed (what is the max you get from a single camera?). Jumbo frames can help, but that depends on whether the camera itself supports them, and on a number of other details. Jumbo frames won’t help if there are too many cameras for the interface or if the cameras don’t support them.

Before running any kind of test on performance you probably should make sure you are running in max performance mode:

sudo /home/ubuntu/

I was testing with jetson_clocks running (the fan was spinning at max speed). Everything I did, I did with a single camera. Got about 9 fps (while from the manufacturer’s website I understood it should be 30 fps). Could a Cat 5e Ethernet cable be the problem?

Cat 5e is standard for gigabit (up to 100 meters). I doubt the Ethernet cable is responsible for performance issues that severe.

“ifconfig” would still be useful, especially if all cameras are attached. It would be good to see if there are errors, overruns, collisions, or dropped packets. Some configuration issues (which are not hardware issues) could conceivably drop performance dramatically.
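Those counters can also be read directly from sysfs, which avoids parsing ifconfig output by eye. A sketch (set `IFACE` to whichever interface the camera is on):

```shell
# Dump per-interface error/drop counters straight from the kernel:
IFACE=eth0   # change to the interface under test
for c in rx_errors rx_dropped rx_over_errors rx_fifo_errors tx_errors collisions; do
  printf '%-16s %s\n' "$c" "$(cat /sys/class/net/$IFACE/statistics/$c)"
done
```

Rising `rx_dropped` or `rx_over_errors` while the camera streams would point at receive-buffer or interrupt-handling problems rather than the cable.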

Sorry for the late reply. I meant to copy-paste the ifconfig results for you, but I managed to work it out on my own. For posterity: add the sysctl settings to the config file and attach a USB mouse - it works, no idea why. Probably something about clocks. Thanks for your help @linuxdev