Hello Jetson team and developers,
Our problem: to encode (H.264) and transmit, in real time, images that are processed by NPP and stored in device memory (allocated by cudaMalloc()) on a Xavier.
An approach we tried: copying each image from device to host and streaming it over RTSP with FFmpeg. This works, but it is too slow.
Approaches to investigate:
A. Using GStreamer RTSP directly on device memory, taking advantage of the hardware encoder.
B. RTSP using the DeepStream SDK (we guess the performance will be similar to that of A; is this guess correct?)
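For approach A, a pipeline along these lines is the usual starting point. This is only a sketch: `test-launch` is the example binary shipped with gst-rtsp-server, and `nvvidconv` / `nvv4l2h264enc` are the Jetson accelerated GStreamer plugins (on older JetPack releases the encoder may be `omxh264enc` instead). It uses `videotestsrc` as a stand-in source so the hardware H.264 path can be verified before wiring in appsrc:

```shell
# Sketch: verify the hardware-encoded RTSP path on a Jetson.
# videotestsrc stands in for the real (appsrc-fed) source.
./test-launch "videotestsrc is-live=true \
  ! video/x-raw,format=NV12,width=1280,height=720,framerate=30/1 \
  ! nvvidconv ! video/x-raw(memory:NVMM),format=NV12 \
  ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96"
```

The `video/x-raw(memory:NVMM)` caps keep the frames in NVMM device memory from `nvvidconv` through the hardware encoder, which is what avoids the device-to-host copy that made the FFmpeg approach slow.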
For approach A, I found an example at
However, it transmits a video file driven by g_main_loop. Are there any examples that use a for-loop to process the video frame by frame?
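One common pattern for frame-by-frame feeding is to push buffers into an `appsrc` from your own loop (or producer thread) via `gst_app_src_push_buffer()`, which takes ownership of the buffer and is safe to call from outside the GStreamer main loop. Below is a minimal sketch, not a definitive implementation: the pipeline string, frame size, frame count, and the `fill_frame()` producer are assumptions for illustration. It uses the software `x264enc` and a file sink so it is self-contained; on a Xavier you would swap in `nvvidconv`/`nvv4l2h264enc` and an RTSP server as the downstream elements:

```c
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
#include <string.h>

#define WIDTH  640
#define HEIGHT 480
#define FRAMES 100

/* Hypothetical producer: in the real system this would copy (or map) the
 * NPP output from device memory into the buffer. */
static void fill_frame(guint8 *data, gsize size, int i) {
    memset(data, i % 255, (size_t)size);
}

int main(int argc, char **argv) {
    gst_init(&argc, &argv);

    /* Sketch pipeline: appsrc feeding a software encoder into a file.
     * On Xavier, replace videoconvert/x264enc with nvvidconv/nvv4l2h264enc
     * and the filesink with an RTSP server. */
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "appsrc name=src is-live=true format=time "
        "caps=video/x-raw,format=I420,width=640,height=480,framerate=30/1 "
        "! videoconvert ! x264enc ! mp4mux ! filesink location=out.mp4",
        &err);
    if (!pipeline) { g_printerr("parse error: %s\n", err->message); return 1; }

    GstElement *appsrc = gst_bin_get_by_name(GST_BIN(pipeline), "src");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    gsize frame_size = WIDTH * HEIGHT * 3 / 2;   /* I420 layout */
    for (int i = 0; i < FRAMES; i++) {
        GstBuffer *buf = gst_buffer_new_allocate(NULL, frame_size, NULL);
        GstMapInfo map;
        gst_buffer_map(buf, &map, GST_MAP_WRITE);
        fill_frame(map.data, map.size, i);
        gst_buffer_unmap(buf, &map);

        /* appsrc in "time" format needs timestamps on each buffer. */
        GST_BUFFER_PTS(buf)      = gst_util_uint64_scale(i, GST_SECOND, 30);
        GST_BUFFER_DURATION(buf) = gst_util_uint64_scale(1, GST_SECOND, 30);

        /* Takes ownership of buf; may be called from a producer thread. */
        if (gst_app_src_push_buffer(GST_APP_SRC(appsrc), buf) != GST_FLOW_OK)
            break;
    }

    gst_app_src_end_of_stream(GST_APP_SRC(appsrc));
    /* Wait for EOS so the muxer finalizes the file before shutdown. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(appsrc);
    gst_object_unref(pipeline);
    return 0;
}
```

Because the loop owns the pacing, the same `gst_app_src_push_buffer()` call can be made from whichever thread produces the frames, with no g_main_loop needed on the feeding side.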
I also found a discussion on how to access CUDA memory directly through a GstBuffer here
To be clear, I do not want to change my current architecture to fit under the GStreamer hood; I only want to send a frame over RTSP whenever it is produced by another thread.