Newbie confused by Jetson software packages

Hello all. We are evaluating moving from AWS Kinesis/SageMaker to an edge-based system for video object recognition. I’ve been going through the tutorial “Real-time object detection in 10 lines of Python code with Jetson Nano”, which is awesome, but I don’t really get how this code uses NVIDIA SDKs vs. custom repo code for the “Hello AI World” sample. For example, the sample references an API called detectNet. Is this part of an SDK or just custom sample code? Is jetson.inference SDK code or sample code? Is this at all related to DeepStream?

Ultimately, what I want to do is connect to multiple IP cameras, run inference on TensorFlow models, and push detection results up to AWS S3 as captured video clips with bounding boxes. Will the Hello AI World approach work for this, or do I need to use something else?

Hi @brking, the Hello AI World code is essentially a wrapper around TensorRT and CUDA that makes them easier to use and get started with. It doesn’t use DeepStream, but you can use DeepStream separately for higher performance (for example, if you have multiple camera streams to process).
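To make the wrapper relationship concrete, here is a rough sketch of what the tutorial’s detection loop looks like using the jetson.inference / jetson.utils Python bindings. Treat the specifics as assumptions: the `"csi://0"` camera URI and the `"ssd-mobilenet-v2"` model name are placeholders from the repo’s examples, and the imports only work on a Jetson where jetson-inference has been built and installed (they are not pip-installable on a PC), which is why they live inside the function here.

```python
def run_detection():
    # Available on a Jetson after building/installing the jetson-inference
    # repo; these modules will not import on a desktop machine.
    import jetson.inference
    import jetson.utils

    # detectNet loads a detection model and runs it through TensorRT under
    # the hood; "ssd-mobilenet-v2" is one of the repo's pretrained networks.
    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

    # "csi://0" assumes a CSI camera; an IP camera would use an RTSP URI.
    camera = jetson.utils.videoSource("csi://0")
    display = jetson.utils.videoOutput("display://0")

    while display.IsStreaming():
        img = camera.Capture()
        detections = net.Detect(img)  # TensorRT inference + box overlay
        display.Render(img)
```

On a Jetson you would simply call `run_detection()`; the point is that the application code never touches TensorRT or CUDA directly.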

If you were to dig into the implementation details of Hello AI World, the imageNet/detectNet/segNet classes are all derived from the base tensorNet class, which is where the TensorRT code is.
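The actual classes are C++ (see the tensorNet.h link below), but the shape of the hierarchy can be sketched in Python pseudostructure. The class names tensorNet and detectNet are real; the method bodies here are illustrative placeholders, not the library’s implementation.

```python
class tensorNet:
    """Base class: owns engine loading and execution (TensorRT in the real code)."""
    def LoadNetwork(self, model_path):
        # The real C++ code builds or deserializes a TensorRT engine here.
        self.model_path = model_path
        return True

class detectNet(tensorNet):
    """Derived class: adds detection-specific pre- and post-processing."""
    def Detect(self, image):
        # Real code: pre-process image -> run the base class's TensorRT
        # engine -> parse the output tensors into bounding boxes.
        return []
```

imageNet and segNet follow the same pattern, differing only in their pre/post-processing, which is why all the TensorRT-specific code lives in the base class.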

https://github.com/dusty-nv/jetson-inference/blob/2fb798e3e4895b51ce7315826297cf321f4bd577/c/tensorNet.h#L205

For the use case you are ultimately going for, with multiple IP cameras, TensorFlow models, and AWS integration, I would recommend looking into DeepStream, as it is purpose-built for that kind of edge-to-cloud, multi-stream video analytics system.
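For a feel of what that looks like, here is a hedged sketch of a deepstream-app configuration pulling in two RTSP cameras and batching them through one inference engine. The camera URIs and the inference config filename are placeholders; check the DeepStream reference app documentation for the full set of keys.

```ini
# Sketch of a deepstream-app config (URIs and filenames are placeholders)
[source0]
enable=1
type=4                 # 4 = RTSP source
uri=rtsp://camera1.local/stream
num-sources=1

[source1]
enable=1
type=4
uri=rtsp://camera2.local/stream
num-sources=1

[streammux]
batch-size=2           # batch both streams for a single TensorRT pass

[primary-gie]
enable=1
config-file=config_infer_primary.txt   # nvinfer model/precision settings

[sink0]
enable=1
type=1                 # fakesink for testing; a message-broker sink can
                       # publish detection metadata to the cloud instead
```

The message-broker sink is what you would look at for the AWS side of your pipeline: it emits detection metadata to an external endpoint rather than rendering it locally.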

Thanks, this is very helpful. Does DeepStream also provide wrappers around TensorRT that I would use instead of the detectNet code from the Hello AI World sample, or would the same detection method be used with DeepStream? In other words, is DeepStream complementary to that method, or is it a whole different way of doing inference?

Yes, DeepStream also uses TensorRT under the covers, so you don’t need to program TensorRT yourself if you are using DeepStream.