Hello all. We are evaluating moving from AWS Kinesis/SageMaker to an edge-based system for video object recognition. I've been working through the tutorial "Real-time object detection in 10 lines of Python code with Jetson Nano," which is great, but I don't quite understand how the "Hello AI World" sample splits between NVIDIA SDK code and custom repo code. For example, the sample references an API called detectNet. Is that part of an SDK, or is it just custom sample code? Is jetson.inference SDK code or sample code? And is any of this related to DeepStream?
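For context, the core of the tutorial I'm following looks roughly like this (paraphrased from memory, so exact module and argument names may be slightly off for the current jetson-inference release):

```python
# Rough sketch of the tutorial's detection loop (from memory; this only
# runs on a Jetson device with the jetson-inference library installed).
import jetson.inference
import jetson.utils

# Load a pretrained SSD-Mobilenet model with a detection threshold
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

camera = jetson.utils.videoSource("csi://0")       # CSI camera on the Nano
display = jetson.utils.videoOutput("display://0")  # local display window

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)  # runs inference and overlays bounding boxes
    display.Render(img)
```

This is the code whose provenance I'm asking about: is detectNet wrapping something like TensorRT under the hood, or is it all repo-local sample code?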
Ultimately, what I want to do is connect to multiple IP cameras, run inference with TensorFlow models, and push detection results up to AWS S3 as captured video clips with bounding boxes. Will the Hello AI World approach work for this, or do I need something else?