Hi,
We ran a lot of tests; the most significant ones are the following.
Test 1)
We use only 1 RTSP source camera at 15 FPS.
We used the following setting to reduce the GPU load a little:
-) “interval=1” for Primary-Gie
So in the end the primary-gie processes about 7–8 real FPS.
We are using the YoloV3 network architecture because it has good detection accuracy.
GPU load was about 75%.
We set up the following chain, where instead of sending the stream to an external RTSP client we write it to a video file:
1-VideoCamera(Rtsp) -> | GstRtspBin -> GstDecodeBin -> Primary_Gie_Bin -> Tracking_Bin -> OSD_Bin -> Sink_to_file(recording)
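For reference, a rough gst-launch equivalent of this chain (an untested sketch, not our actual bin-based app code: the `<...>` placeholders, resolution, and the assumption of an H.264 camera are ours to fill in) would look like:

```
gst-launch-1.0 \
  rtspsrc location=rtsp://<camera-ip>/<path> ! rtph264depay ! h264parse ! nvv4l2decoder \
  ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 live-source=1 \
  ! nvinfer config-file-path=<pgie-config.txt> \
  ! nvtracker ll-lib-file=<tracker-lib.so> \
  ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=test1.mp4
```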
The results look good. In the attachment you find the file
“Car_cam__AI_track__correct.PNG”
which shows a video frame captured after some seconds of persons moving in our scenario.
On this frame the persons are detected and the frame is not corrupted.
So we can say that, when dumping to a local file, inference/detection and tracking do not cause frame corruption; now we are checking whether the same holds when sending over the local network.
Test 2)
Using another TX2 card, at the same time as Test 1, we made an identical video recording from an RTSP client on the local network, but using 2 source cameras at 15 FPS each, so 30 FPS in total.
We used the following interval setting, because we need to run 2 cameras and this is the only way to do it:
-) “interval=3” for Primary-Gie
So in the end the primary-gie processes about 7–8 real FPS, because we infer on 1 frame out of every 4.
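The 7–8 FPS figure follows from how the GIE `interval` property works: inference runs on one frame and then skips `interval` frames. A quick sanity check for both tests (plain Python; the helper name is ours):

```python
# The GIE runs inference on 1 frame, then skips `interval` frames,
# so only 1 frame in every (interval + 1) is actually inferred.
def effective_infer_fps(source_fps: float, interval: int) -> float:
    """Frames per second that actually reach inference."""
    return source_fps / (interval + 1)

# Test 1: one 15 FPS camera, interval=1 -> infer on every 2nd frame
print(effective_infer_fps(15, 1))  # 7.5
# Test 2: two 15 FPS cameras (30 FPS total), interval=3 -> every 4th frame
print(effective_infer_fps(30, 3))  # 7.5
```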
2-VideoCamera(Rtsp) -> | GstRtspBin -> GstDecodeBin -> Primary_Gie_Bin -> Tracking_Bin -> OSD_Bin -> SinkBin(RtspServer) | -> RtspClient(recording)
And in this case we got the same old corruption, shown in the following video frame:
“Car_cam__AI_track__corrupted.PNG”
So the difference vs. the previous local-file chain is that here we have 2 cameras with more FPS, producing more GPU load (about 99%), and we have GstRtspServer, the RTSP client and the local network. So now we are checking the following variables to isolate the problem:
-) substituting the RTSP client with VLC
-) reducing the source camera frame rate, to reduce the TX2's high GPU load and frame latency
-) verifying there is no latency on the network.
QUESTION-1: the TX2 CPU is at about 30%, but the GPU load is very high, up to 99%. Could it be that, when the TX2 is under heavy GPU load, frames are so delayed by inference/tracking that they are lost by the client?
If so, is there some GstRtspServer setting (buffer or other) that could help manage the frame latency?
Consider that we already have:
-) “batch-size=4” for streammuxer and Primary-Gie
-) “interval=3” for Primary-Gie
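In the deepstream-app style config, these two settings live in the [streammux] and [primary-gie] groups; a minimal fragment with the values stated above (the rest of our config is omitted here):

```
[streammux]
batch-size=4

[primary-gie]
interval=3
batch-size=4
```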
QUESTION-2: could it be that, with interval=3 instead of interval=1, the primary-gie now drops some frames?
Thank you,
M.