Surveillance System Use Case

That sounds like exactly the right direction for me. I’ve been a C/C++ developer for about 16 years, so it is right in my wheelhouse. Great to hear that I should be able to use the Nano. Do you happen to have a link to that example with 8 sources?

You should feel right at home then. The sample in question is installed with DeepStream itself:

You’ll have to install the Debian package manually until it’s added to the online repositories.

The example I’m referring to is a config for the reference deepstream-app.

You’ll find the config at …

/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

… once it’s installed.

You can also just use the reference app to do what you want, as the config file setup makes it quite flexible. It’s also open source, so you can modify it to your liking or use the plugins to write your own app (there are examples of this as well). Likewise, many (but not all) of the GStreamer plugins themselves are open source (various licenses).
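
To give a feel for that flexibility, the config is a plain INI-style file with one group per pipeline element. A minimal sketch (the group names are the real deepstream-app ones; the values here are just illustrative, not copied from the shipped file):

[tiled-display]
enable=1
rows=2
columns=2

[source0]
enable=1
# type=4 selects an RTSP source in the reference app
type=4
uri=rtsp://192.168.1.10:554/stream1

[sink0]
enable=1
# type=2 is the on-screen EGL renderer
type=2

[primary-gie]
enable=1
config-file=config_infer_primary.txt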

I just realized I gave you the link to a Nano example. While that will work, the Xavier (and recent GTX/RTX/Tesla) is capable of int8 precision. There is a 30-source example for Xavier in the same folder:

.../source30_1080p_dec_infer-resnet_tiled_display_int8.txt

Not sure if the NX can run that one, since it doesn’t have the same amount of memory the regular Xavier does, so it may drop some frames. You may have to modify the config in that case (e.g. add a tracker and an interval to the inference element, or just remove some sources).
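
For reference, that sort of edit looks roughly like this in the app config (key names are from deepstream-app; the tracker library path varies by DeepStream version, so treat it as a placeholder):

[primary-gie]
enable=1
# interval=4 skips inference on 4 out of every 5 frames;
# the tracker carries the objects across the skipped frames
interval=4
config-file=config_infer_primary.txt

[tracker]
enable=1
tracker-width=640
tracker-height=384
# exact library name/path depends on your DeepStream version
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_mot_klt.so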

DeepStream also supports the DLA accelerator of the Xavier, so you can try using that instead of the GPU to do inference (or potentially both at the same time). There are examples of much of this in the samples, and the reference manual for the plugins themselves is here.
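
As a sketch, pointing an inference engine at the DLA is a small change in the nvinfer config (the file the app config points at via config-file=). enable-dla and use-dla-core are the relevant gst-nvinfer keys; the values here are illustrative:

[property]
# run this engine on DLA core 0 instead of the GPU; a second
# engine could target use-dla-core=1 or stay on the GPU
enable-dla=1
use-dla-core=0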

If I can get away with the Nano I will. For $100 it’s a no-brainer to try it and at least get familiar with things. We’ve got a Xavier AGX at work that I might be able to do some testing on too, if I can wrestle it away from my coworker.

I don’t have experience with resnet; I’m a big fan of yolov3 and am now playing with yolov4.

You could handle 6 cameras sufficiently well on the Nano if you used something like a 640x480 stream at around 6 frames per second, provided you ensured that you had a source of MJPEG video from outside the Nano. Currently I use VLC to transcode the RTSP MJPEG to HTTP MJPEG. That works on the Nano with 4 cameras, but each VLC process takes around 50 MB of RAM and 4 is all it can handle. In that case I would choose yolov3 at 416x416 resolution in the cfg file.

Yesterday I moved the VLC transcoding from a Nano onto a Raspberry Pi, freeing up 200 MB of RAM, just sufficient to run the full 608x608 yolov4 model. It does push the detection time up from 0.7 s to 0.9 s, but that’s not really a big deal. Again, however, for 6 cameras I would offload the VLC transcoding to another machine and then run yolov4 at 416x416 resolution. I noticed that I got more false positives on people with yolov4 at 416x416 than with yolov3 at 416x416. On the full 608x608 models, both yolov3 and yolov4 are a little more resilient to false positives.
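
In case it’s useful, the VLC relay can be a one-liner along these lines. The camera URL and port are placeholders, and I’d double-check the exact sout chain against your VLC version; since the source is already MJPEG it’s really a remux to HTTP rather than a transcode:

cvlc rtsp://camera1.local/stream --sout '#standard{access=http,mux=mpjpeg,dst=:8081/cam1.mjpg}'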

I’ll be trying an NX shortly. The Xavier AGX was sweet at 0.165 s per detection with the full yolov4 608x608 model.

The analytics required for home security are pretty basic. Usually it’s: if a person is in an area where they shouldn’t be, then let me know.

@mdegans thanks again for the suggestion. Got the Nano in hand and wow, right out of the box the DeepStream SDK works nearly flawlessly on 4 streams to detect cars/people, even at a much higher resolution/framerate than I expected (2688x1520 at 15 fps across 4 cameras). This is all just with the sample app. For those following along, I copied the

/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

he linked and just added my four sources as RTSP feeds. Very cool!
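
For reference, the edit amounts to swapping the file-based source group(s) in that config for RTSP ones, something like the following (the URI is a placeholder for your own camera):

[source0]
enable=1
# type=4 = RTSP in the reference app's config
type=4
uri=rtsp://user:pass@192.168.1.21:554/ch01

# ...and likewise [source1] through [source3]; disable any leftover
# [sourceN] groups from the original 8-source setup with enable=0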

You’re welcome. Yeah, you can do a lot with a single Nano, and the DeepStream examples are pretty well optimized, so they’re good places to start from. If you run into any issues or have any questions, there is a DeepStream board on this forum where you can communicate directly with the developers.

@cclaunch Have you been able to deploy yolov5 in DeepStream? I am facing this issue. It would be really helpful if you could help in any way.

I have not gotten there yet. That is on my radar to try, though.

More is not always better. For example, yolov4 has a higher mAP score than yolov3, but that is over 80 categories, whereas for surveillance it’s primarily the score for just people (or maybe also cars, which always match well) that matters. I’ve found that yolov4 has a tendency to want to see more objects than yolov3. This tendency is easy to see if you run it on a few images and compare with yolov3: it will correctly see more small flower pots, for example. It’s conceivable that by correctly seeing more small objects, and objects that are not people, you gain a higher overall accuracy while the accuracy for just people goes down due to increased false positives.

I’ve found in my long-term experiments that yolov3 at 608x608 appears to be slightly less likely to fire false positives on people than the newer yolov4 at 608x608. It’s one thing just looking at a running video and saying wow, that looks great, but if you are going to be woken up in the middle of the night then resilience to false positives is really important. I was testing yolov4 on a few beta sites but switched back to yolov3 after getting too many false positives on people.

But OK, “surveillance use case” can have many meanings. Maybe the intended meaning for this thread revolves around people watching screens, in which case it’s a lot less critical.

@KimHendrikse I agree, and actually am just running the stock resnet10.caffemodel_b8_gpu0_fp16.engine included with DeepStream for the Nano and turning off the Roadsign class. I currently have it operating on 4 full-resolution streams at 15 fps with very few false positives. I’m working on tweaking the DeepStream sample app to my liking so I can feed triggers from it to Blue Iris. I’ll also use the output stream to a display as my primary means of monitoring. Works great so far!
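
A minimal Python sketch of the kind of trigger I have in mind, assuming Blue Iris’s HTTP admin URL (host, port, credentials, and camera short name are placeholders; check the Blue Iris manual for the exact parameters):

# Hypothetical sketch: fire a Blue Iris trigger over its HTTP admin
# interface when the DeepStream app reports a detection.
from urllib.request import urlopen
from urllib.parse import urlencode

BI_HOST = "192.168.1.5:81"   # Blue Iris web server address (assumption)
BI_USER = "admin"            # placeholder credentials
BI_PASS = "secret"

def trigger_camera(short_name: str) -> None:
    """Ask Blue Iris to trigger recording on one camera."""
    query = urlencode({"camera": short_name, "user": BI_USER, "pw": BI_PASS})
    # 'trigger' is a bare flag in the admin URL, so append it by hand
    url = f"http://{BI_HOST}/admin?{query}&trigger"
    with urlopen(url, timeout=5) as resp:
        resp.read()  # body isn't needed; just complete the request

if __name__ == "__main__":
    trigger_camera("frontdoor")  # camera short name is a placeholder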

I see you mentioned LoRa. Is this a HAT on the Xavier being utilized as a gateway, or a completely separate system based on a LoRa device?

No. I have a LoRaWAN gateway based on a Raspberry Pi and a Swiss board, one of the earliest how-tos on making a gateway. But my home security system takes event inputs both from AI (yolov3 running on darknet) and from modified PIR sensors that I’ve converted into LoRaWAN-based PIR sensors.

I’m actually ramping up to making an official release of my security system that installs easily on Jetson devices. Later I will also publish how-tos on adding LoRa devices as inputs. This is easy on the Jetson boards, as it’s simply an MQTT subscription, which runs fine there.

Kim

Thanks. My issue is getting something LoRa working on the Xavier as a HAT… any ideas/pointers are welcome.

But what is this something? A LoRa device or a LoRa gateway? If it’s a gateway, I don’t understand why you would use a Jetson device for this, as there’s no need for the GPU; if it’s a device, then it’s a really big device. LoRa devices are usually small battery-powered devices.

So to me the only use case for a Jetson single-board computer in combination with LoRa is as a processing board for some LoRa-generated events. In that case it’s clear: you use one of the MQTT clients, such as the Python one. That’s easy and makes sense as well. But to test it you still need the LoRa device and a gateway close enough to receive the generated packets.
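
To make that concrete, here’s a minimal sketch of such a subscription using the common paho-mqtt Python client (broker address and topic are placeholders for whatever your gateway or network server publishes; callbacks use the paho-mqtt 1.x style):

# Minimal sketch: subscribe to LoRaWAN sensor events over MQTT
import paho.mqtt.client as mqtt

BROKER = "192.168.1.2"    # MQTT broker address (assumption)
TOPIC = "sensors/pir/#"   # topic the PIR events arrive on (assumption)

def on_connect(client, userdata, flags, rc):
    # subscribe on (re)connect so the subscription survives reconnects
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # each sensor event arrives as a message payload; act on it here
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()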

I don’t understand the use case well enough to be able to offer much effective help, I think.

Kim

So the only use case that makes sense to me in the context of a Jetson board doesn’t need any additional hardware, such as a HAT.

You probably need to explain more about what you are trying to do.

Kim

While you make a good point, and I concur that it should be a separate device, there is a need for an integrated system that does more than just act as a LoRa gateway and process data: other apps, data processing, and widgety things. So basically an Xavier/LoRa-capable device.

Ah, OK. It wasn’t clear to me that you were talking about a LoRaWAN gateway. Indeed, that could be quite interesting. It could make a base station for AI-based security as well as a relay for LoRa devices. In my case, that could make a device that could see people with AI as well as receive nearby LoRaWAN-enabled PIR sensors, IR beams, smoke detectors, or potentially other devices. Nice.

Check this out:

This is an interface to a Swiss-made LoRaWAN gateway. In principle the HAT is simply a linkage between the SPI ports, as I understand it, so the code that works on the Raspberry Pi should be quite portable to the embedded Jetson device, I expect.

Kim

“Swiss-made LoRaWAN transceiver board” might be more accurately stated; it becomes a gateway with the software on the host.