I am using this timestamp:
frame_timestamp = time.time() * 1000000000
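As a side note, multiplying time.time() by 1e9 yields a float, and double precision cannot represent the current Unix time at full nanosecond resolution. On Python 3.7+, time.time_ns() returns an integer nanosecond timestamp directly; a minimal sketch:

import time

# time.time() returns seconds as a float; converting to nanoseconds by
# multiplying loses precision in the lowest digits.
frame_timestamp = int(time.time() * 1_000_000_000)

# Python 3.7+ offers an integer nanosecond clock directly:
frame_timestamp = time.time_ns()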
Did you replace the timestamp inside the GstBuffer and batch meta with the local time?
Where (in which pad of which element in your pipeline) did you probe the timestamp?
What is the original FPS of the RTSP streams? If you are using IP cameras, you can get the FPS from the camera’s own setup software.
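For reference, a minimal pad-probe sketch (an illustration, not code from this thread) that reads the buffer PTS and the per-frame timestamps carried in the batch meta; the element and pad names are assumptions:

import pyds
from gi.repository import Gst

def timestamp_probe(pad, info, user_data):
    buf = info.get_buffer()
    print("buffer PTS (ns):", buf.pts)  # pipeline timestamp of the batched buffer

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buf))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # buf_pts and ntp_timestamp are set upstream and travel with the frame
        print(frame_meta.source_id, frame_meta.buf_pts, frame_meta.ntp_timestamp)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# e.g. attach to the nvstreammux src pad:
# streammux.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, timestamp_probe, None)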
while l_frame is not None:
    try:
        # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
        # The casting is done by pyds.NvDsFrameMeta.cast()
        # The casting also keeps ownership of the underlying memory
        # in the C code, so the Python garbage collector will leave
        # it alone.
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
    except StopIteration:
        break
    frame_copy = None
    source_id = frame_meta.source_id
    frame_number = frame_meta.frame_num
    batch_id = frame_meta.batch_id
    num_rects = frame_meta.num_obj_meta
    frame_timestamp = time.time() * 1000000000  # the timestamp is added here only
    wheel_detections, nuts_detections = [], []
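If the goal is a per-frame capture timestamp rather than the wall-clock time at which this loop happens to run, one option (a sketch, assuming the sources set PTS correctly) is to read the timestamps already carried in the frame meta:

# These are set upstream and travel with the frame, so they reflect capture
# time rather than processing time:
frame_pts_ns = frame_meta.buf_pts        # buffer PTS copied into the frame meta
frame_ntp_ns = frame_meta.ntp_timestamp  # NTP time; needs attach-sys-ts or RTCP-based
                                         # NTP timestamping configured on nvstreammux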
Both cameras are running at 15 FPS.
Where (in which pad of which element in your pipeline) did you probe the timestamp?
I haven’t added anything extra to the pipeline; I am just printing the timestamp along with the detections inside the while l_frame is not None loop.
I am not concerned with the timestamp. The only issue is that I am not getting real-time frames from the pipeline after 10 to 12 hours, and the two cameras are not synchronized with each other.
Please make sure all your processing - detection (PGIE), your customized SGIE, your customized postprocessing, … - can be done within 1/15 of a second. Otherwise, the frames can never be handled in real time.
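To check whether that budget holds, a simple timing sketch (an illustration; the processing step is a placeholder) inside the per-frame handling code:

import time

FRAME_BUDGET_S = 1.0 / 15  # per-frame budget for 15 FPS input

start = time.perf_counter()
# ... run PGIE/SGIE post-processing and any custom code for one frame ...
elapsed = time.perf_counter() - start
if elapsed > FRAME_BUDGET_S:
    print(f"frame took {elapsed * 1000:.1f} ms, over the {FRAME_BUDGET_S * 1000:.1f} ms budget")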
nvstreammux already provides a “sync-inputs” parameter to synchronize live streams in a batch by PTS (see Gst-nvstreammux in the DeepStream 6.4 documentation). The frames in the same batch will not have a big time difference. Please make sure your cameras output correct timestamps (PTS).
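A minimal sketch of setting the property from Python (the exact property set differs between the legacy and the new nvstreammux, so the values here are assumptions to be checked against the documentation for your DeepStream version):

from gi.repository import Gst

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("batch-size", 2)           # two RTSP sources
streammux.set_property("live-source", 1)          # legacy mux: treat inputs as live
streammux.set_property("sync-inputs", 1)          # synchronize batched frames by PTS
streammux.set_property("max-latency", 100000000)  # allowed jitter in ns (assumed value)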
The pipeline can process 8 frames per second for both cameras. After lowering the camera FPS to 8, the pipeline runs smoothly without any sort of delay.
But if the pipeline can only process 8 frames per second while the camera FPS is 15 or 20, delay will build up over time. So isn’t there a way for the pipeline to pick the latest frame every time instead of the frames stored in the buffer?
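One common GStreamer-level approach (a suggestion, not something proposed in this thread) is to place a leaky queue in front of the heavy elements, so stale frames are dropped instead of accumulating:

from gi.repository import Gst

# A queue that holds at most one buffer and leaks downstream, i.e. drops the
# oldest buffer when full, so the inference elements always see a recent frame:
q = Gst.ElementFactory.make("queue", "drop-queue")
q.set_property("max-size-buffers", 1)
q.set_property("max-size-bytes", 0)  # disable the byte limit
q.set_property("max-size-time", 0)   # disable the time limit
q.set_property("leaky", 2)           # 2 = leaky downstream (drop oldest)

For live RTSP sources, rtspsrc’s drop-on-latency property is another knob worth looking at.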
Please make sure all your processing - detection (PGIE), your customized SGIE, your customized postprocessing, … - can be done within 1/15 of a second. Otherwise, the frames can never be handled in real time.
I have removed all the post-processors, and even a simple pipeline with only PGIE and SGIE still runs at 8 FPS.
What is the CPU and GPU load when you run at 8 FPS?
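For reference, one way to sample the load from Python (a sketch assuming the third-party psutil and pynvml packages and a discrete GPU; on Jetson, tegrastats is the usual tool):

import psutil
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# Sample CPU utilization over one second, then read the current GPU utilization:
cpu_percent = psutil.cpu_percent(interval=1.0)
util = pynvml.nvmlDeviceGetUtilizationRates(gpu)
print(f"CPU: {cpu_percent:.0f}%  GPU: {util.gpu}%  GPU mem: {util.memory}%")
pynvml.nvmlShutdown()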
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks