Jetson Nano k8s cluster with deepstream-l4t app pod, success!

I wanted to share a success story: a 3-way Jetson Nano k8s cluster running the deepstream-l4t image, with a pod deployment that consumes RTSP streams from four Ubiquiti G3 Flex cameras and outputs a tiled RTSP stream after object detection and tracking.

kubectl commands and vlc screenshot:
DIN rail organization v2:

This is running k8s 1.18.1 (latest) with MetalLB for load balancing and Flannel for networking (I gave up on Calico after a day of headaches). I'm using the deepstream-l4t image since it includes deepstream-app and was easy to get working as a hello-world kind of thing.
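In case it helps anyone: MetalLB in layer 2 mode only needs a ConfigMap giving it a pool of LAN addresses to hand out to LoadBalancer services. A minimal sketch (the address range here is made up; use free addresses on your own subnet):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # example range, not my real one
```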

The Jetson Nanos do spontaneously reboot, presumably from heat, since pointing a box fan at them seems to help. The heatsink should probably come with a fan by default. Without tuning (TODO), k8s doesn't detect this as a NotReady condition on the node, though AWS and other cloud platforms detect such a condition in seconds. I'm using readiness and liveness probes to try to reschedule the pod when this happens.
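Worth noting: probes only restart the container or pull it out of the Service endpoints; rescheduling off a dead node still waits on the node controller's eviction timeout. The probes themselves look roughly like this on the deepstream container (assuming deepstream-app serves its output RTSP stream on port 8554, its default; adjust to your config):

```yaml
# Pod template fragment: TCP checks against the RTSP output port.
livenessProbe:
  tcpSocket:
    port: 8554
  initialDelaySeconds: 60   # give the pipeline time to come up
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  tcpSocket:
    port: 8554
  initialDelaySeconds: 30
  periodSeconds: 5
```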

I have to say, Metropolis and DeepStream are amazing. I wish I could buy some of the newer carrier boards with the dual camera interfaces; those were released a couple of weeks after I bought this hardware.

Anyway, ask me anything! :)


What settings are you using for the camera sources? Are you using one of the DeepStream test apps? I use G3 Flex cameras as well but can't get them to behave perfectly with DeepStream; the stream has stuttering/sync issues no matter what properties I set on the various elements. Are you connecting directly to the cameras or through the NVR's rebroadcast streams? In my case I'm using the rebroadcast streams, since I want the NVR to keep recording no matter what.

Please also see the list of Jetson partner-supported cameras for a broad portfolio of options.

I'm using the 640x360 RTSP streams from the Ubiquiti CloudKey Gen2 NVR app. I am seeing some tearing and stuttering, typically when an object is detected (or it's at least most noticeable then).
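For reference, the source sections in my deepstream-app config look roughly like this, one [sourceN] group per camera (the host and path here are placeholders for the NVR's rebroadcast URL, not real values):

```ini
[source0]
enable=1
type=4                  # 4 = RTSP source in deepstream-app configs
uri=rtsp://<nvr-host>:7447/<stream-id>
latency=200             # jitter buffer in ms; worth experimenting with
```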

When I first got the Jetsons, I played around with one of them and took a deep dive into deepstream-app. I had found a set of tunables that eliminated this, but I'm having to dig back through my notes and retest in the k8s context. As I recall, though, it was important to dial in the batch size, enable the live-source flag, and make sure the object tracker picked up the object so detection wasn't running on every frame. I may also have enabled frame skipping on the object detector.
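From memory, the relevant knobs in the deepstream-app config were along these lines (values are illustrative, not my exact working set):

```ini
[streammux]
live-source=1               # tell the muxer the inputs are live RTSP feeds
batch-size=4                # one frame per camera per batch
batched-push-timeout=40000  # push a partial batch after 40 ms

[primary-gie]
enable=1
interval=4                  # run detection on every 5th frame only

[tracker]
enable=1                    # tracker carries object IDs across skipped frames
tracker-width=480
tracker-height=272
```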

I just got k8s working with this, and now have a deployment that launches the pod on a different node if one fails, plus VLC with "loop all" enabled to reload the stream if something happens (node failure or config update). I still need to factor the config out of the image into a ConfigMap or some such, but I'll see if I can find the magic combination of settings again. :)
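The ConfigMap refactor I'm picturing is roughly this (names and mount path are placeholders): put the deepstream-app config in a ConfigMap and mount it over the baked-in copy, so config changes don't require rebuilding the image.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: deepstream-config
data:
  app_config.txt: |
    # full deepstream-app config file would go here
---
# Relevant fragment of the Deployment's pod template:
spec:
  containers:
  - name: deepstream
    # image/command omitted; point deepstream-app -c at the mounted file
    volumeMounts:
    - name: ds-config
      mountPath: /config
  volumes:
  - name: ds-config
    configMap:
      name: deepstream-config
```

Then editing the ConfigMap and restarting the pod picks up changes without an image rebuild.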


Thanks. What I am seeing looks more like keyframes being missed by the decoder, leading to objects occasionally leaving trails. It mostly works after tweaking settings, but every few seconds or so there's a glitch. It works fine with many other sources, just not the NVR.

I'm using multiple full-HD streams, so reducing the resolution might help as well. They are scaled down before they hit the inference elements, but maybe it's the decoder struggling. It seems fine with other streams of the same resolution, though, so I don't know. Also, my NVR is x86, not the ARM box Ubiquiti sells.

I have experimented with all the settings you mentioned, with the exception of adding a tracker and changing the camera resolutions. I'll probably try a tracker next, even though the problem seems to be earlier in the pipeline. Thanks for the description of your working setup; it's very useful.