AWS setup for Jetson devices

Hi,

Currently, I am using a Jetson Nano for video recording as well as for inference. Since inference is taking a long time, I want to upgrade the device and test. Is it possible to set up a similar/same HW setup in AWS? That way, I can check which Jetson device suits my purpose, and purchase accordingly.

Thanks
Karun

Moving to Jetson Nano forum

Hi,

AFAIK, AWS only offers desktop-class GPUs.
So you may not be able to find an integrated GPU like the Jetson's on AWS.

We do have some performance data for the different types of Jetson.
Hope this gives you some idea:

Inference only: https://developer.nvidia.com/embedded/jetson-benchmarks
Multimedia with inference: Performance — DeepStream 6.1.1 Release documentation

Thanks.

Thanks @AastaLLL for the quick response. I am currently using the Nano for pose estimation, and the inference logic takes around 1 second per frame at 1280x720 resolution.
Though I am recording at 120 FPS, I only want to process 30 FPS for pose estimation. One option I have is to go with the TX2 or Xavier. The other option is to add logic for delayed processing. Is it possible to have two queues while recording video: one would dump the video to a physical location, and the other would push all frames to a queue, which would eventually be consumed by the inference logic?
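
Something like the sketch below is what I have in mind, written here in plain OpenCV/Python just to illustrate the idea. `run_pose_estimation()` is a placeholder for my model, and the file names and queue size are made up:

```python
import queue
import threading

import cv2

frame_queue = queue.Queue(maxsize=256)  # bounded, so memory use stays capped

def record_and_enqueue(source):
    """Path 1 writes every frame to disk; path 2 feeds the queue at ~30 FPS."""
    cap = cv2.VideoCapture(source)
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter("recording.mp4", fourcc, 120.0, (1280, 720))
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)            # path 1: every frame is recorded
        if frame_idx % 4 == 0:         # path 2: 120 FPS in -> ~30 FPS out
            try:
                frame_queue.put_nowait(frame)
            except queue.Full:
                pass                   # drop rather than stall the recording
        frame_idx += 1
    writer.release()
    cap.release()

def inference_worker():
    while True:
        frame = frame_queue.get()
        # run_pose_estimation(frame)   # placeholder for the pose model
        frame_queue.task_done()

threading.Thread(target=inference_worker, daemon=True).start()
record_and_enqueue("input.mp4")  # a camera index or pipeline string also works
frame_queue.join()               # let the queued frames finish processing
```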

Hi,

Please check our DeepStream SDK:

There is a mechanism so that you don't need to run inference on every frame, but only at a pre-defined interval.
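
For example, the nvinfer element exposes an `interval` property (also settable as `interval=` in its config file), which skips inference for that many consecutive frames. A minimal Python sketch, where the config file name is just a placeholder:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Run the model only on every 4th frame; the tracker can fill in between.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "pgie_config.txt")  # placeholder path
pgie.set_property("interval", 3)  # consecutive frames to skip inference on
```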
Does this meet your requirement?

Thanks.

Hi,
I need to process all frames, but I am not looking for real-time image analysis. I am looking for a solution where saving the video happens in one pipeline, and another pipeline sends frames to some kind of queue. The queued images would then be consumed on the other end by the inference logic that does the image analysis.

The other option is to generate the video in .mkv or .ts format. In this case, can I have another program that consumes the still-being-written video file for inference?
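
For instance, here is a rough sketch of what I mean by consuming the in-progress file. I picked .ts because MPEG-TS stays readable while it is still being appended, and the reopen/seek strategy here is only illustrative:

```python
import time

import cv2

VIDEO_PATH = "recording.ts"  # MPEG-TS needs no finalized header to be read

cap = cv2.VideoCapture(VIDEO_PATH)
while True:
    ok, frame = cap.read()
    if not ok:
        # Caught up with the writer: remember the position, wait for more
        # data, then reopen and seek back (seeking in TS may be approximate).
        pos = cap.get(cv2.CAP_PROP_POS_FRAMES)
        cap.release()
        time.sleep(0.5)
        cap = cv2.VideoCapture(VIDEO_PATH)
        cap.set(cv2.CAP_PROP_POS_FRAMES, pos)
        continue
    # run_pose_estimation(frame)  # placeholder for the pose model
```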

Thanks

Hi @AastaLLL,

To elaborate, is it possible to use a Kafka message queue as a repository for frames, with the same queue consumed on the inference side?
In the diagram, can you please help me with how to approach the points mentioned below?

  1. How can I have two queues so that the output of one queue records the video and the second pushes the frames to the Python program or to a Kafka broker using DeepStream? (A rough sketch of what I have in mind follows these points.)
  2. Do you see any issue if I start consuming image data from the message queue to process it, in terms of loss of data or performance?
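
For point 1, this is a rough sketch of what I picture on both sides of the broker. The kafka-python package, the broker address, the topic name "frames", and `run_pose_estimation()` are all assumptions for illustration; frames are JPEG-encoded so each message stays well under Kafka's default 1 MB message limit:

```python
import cv2
import numpy as np
from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"  # assumed broker address
TOPIC = "frames"           # made-up topic name

# --- recording side: publish each frame to the broker ---
producer = KafkaProducer(bootstrap_servers=BROKER)

def publish_frame(frame):
    ok, jpg = cv2.imencode(".jpg", frame)  # compress to keep messages small
    if ok:
        producer.send(TOPIC, jpg.tobytes())

# --- inference side: consume frames from the broker ---
def consume_frames():
    consumer = KafkaConsumer(TOPIC,
                             bootstrap_servers=BROKER,
                             auto_offset_reset="earliest")
    for msg in consumer:
        frame = cv2.imdecode(np.frombuffer(msg.value, np.uint8),
                             cv2.IMREAD_COLOR)
        # run_pose_estimation(frame)  # placeholder for the pose model
```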

Hi,

It seems that your use case is more similar to our smart video recording example.
Could you check if this example can meet your requirement?

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Smart_video.html#smart-video-record

Thanks.
