In the attached image you can see what I need to do (and if I can't do it, I should leave my job).
Given the conditions written in the picture, I don't know which way to go and I am confused.
How do I use EGL? Should I use it at all, or is there another way?
Is there a code example for any part of the image under these conditions?
Should I remove the X server as well?
We have the DeepStream SDK for running deep learning inference. Please look at:
You can install the packages through SDKManager.
1_ With DeepStream, is the video frame buffer always in GPU memory, or is it possible to transfer it to CPU memory and do some processing on it?
2_ Is it possible to record audio with DeepStream?
3_ Is DeepStream based on the L4T Multimedia APIs? (https://docs.nvidia.com/jetson/l4t-multimedia/index.html)
4_ Is it possible to capture and record 8x 1080p60 streams through the DeepStream APIs?
Sorry, one more question:
The TX2 can encode eight 1080p30 streams. Is this done by the CUDA cores, or is there a coprocessor with a hardware codec for it?
For more information, please check the documents:
You can map the buffer to CPU memory through the NvBufSurface APIs.
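A minimal sketch of that mapping inside a GStreamer pad-probe callback (assuming the DeepStream header `nvbufsurface.h` is available; attaching the probe and the CPU-side processing are left as placeholders):

```c
/* Sketch: map a DeepStream video buffer to CPU memory in a pad probe.
 * Requires the DeepStream SDK headers and a Jetson/dGPU runtime. */
#include <gst/gst.h>
#include "nvbufsurface.h"

static GstPadProbeReturn
buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map;

  if (!gst_buffer_map (buf, &map, GST_MAP_READ))
    return GST_PAD_PROBE_OK;

  NvBufSurface *surf = (NvBufSurface *) map.data;

  /* Map frame 0, all planes (-1), for CPU read access. */
  if (NvBufSurfaceMap (surf, 0, -1, NVBUF_MAP_READ) == 0) {
    /* On Jetson, sync the hardware buffer into the CPU cache first. */
    NvBufSurfaceSyncForCpu (surf, 0, -1);

    guint8 *pixels = (guint8 *) surf->surfaceList[0].mappedAddr.addr[0];
    /* ... CPU-side processing on `pixels` goes here ... */

    NvBufSurfaceUnMap (surf, 0, -1);
  }

  gst_buffer_unmap (buf, &map);
  return GST_PAD_PROBE_OK;
}
```

On Jetson the `NvBufSurfaceSyncForCpu` call matters because the buffer lives in hardware (NVMM) memory; on desktop GPUs the same API family applies without it.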
It is possible, but it is not in deepstream-app by default. You can look at the GStreamer documentation and integrate the audio part yourself.
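For reference, a hedged sketch of a plain GStreamer pipeline that records audio on Jetson (the ALSA device `hw:1`, the `voaacenc` encoder, and the output name are assumptions; any installed AAC encoder plugin would do):

```
gst-launch-1.0 -e alsasrc device=hw:1 ! audioconvert ! voaacenc ! qtmux ! filesink location=audio.mp4
```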
No. It uses the NvBufSurface APIs, which work on both desktop GPUs and Jetson platforms.
The L4T Multimedia APIs are specific to Jetson platforms.
No. It is more stable at 30 fps.
There is a dedicated hardware engine for this: NVENC for encoding (NVDEC is the corresponding decode engine). The CUDA cores are not involved.
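As an illustration, the hardware encoder can be exercised from a plain GStreamer pipeline on Jetson (element names are from L4T's gst-nvvideo4linux2 plugin; the test source, resolution, and output path are placeholders):

```
gst-launch-1.0 videotestsrc num-buffers=300 ! \
  'video/x-raw,width=1920,height=1080,framerate=30/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM)' ! \
  nvv4l2h264enc ! h264parse ! qtmux ! filesink location=test.mp4
```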
Can you tell me which approach is better, the L4T Multimedia APIs or the DeepStream APIs?
What are the differences and advantages of each?
If your object-tracking algorithm is a deep learning model such as ResNet or YOLO, we would suggest using the DeepStream SDK.
Thanks a lot
What is the latency for a tracker algorithm like YOLO? For example, is a latency of about 2 frames achievable for a 1080p60 stream? Is there a report on these issues?
We have a benchmark for the Nano:
We don't have data for the TX2; you may follow the instructions to run it on a TX2.
When running deepstream-app, there is a reference config with 12 sources, FYR.
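For orientation, a source group in a deepstream-app config file looks roughly like this (the URI and values below are placeholders; each additional [sourceN] group adds one input stream):

```
[source0]
enable=1
# type 3 = URI source
type=3
uri=file:///path/to/stream0.mp4
gpu-id=0
```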