3 cameras @ 60 FPS 1600x1300 flicker issue

Hi,

I don't know if this is the right forum for it, but I'll try and see if anyone can help.
I have now made a program that can seemingly provide stable 60 FPS recording with three cameras.
But with the shared memory I use, and the thread lock on read/write for the threads, I get
flicker on the screen and in the images recorded to disk. It looks like only half of the information is there when the images are written.

I have attached the source code I have made so far.
Any ideas on how to rewrite this so that I keep the speed with three cameras and get rid of the flicker on screen, and
especially the errors in the images? Would a thread-safe queue solve it?
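
For reference, what I mean by a queue is roughly this (an untested sketch, names are illustrative): one small bounded queue.Queue per camera, where the capture thread drops the oldest frame instead of blocking, so the 60 FPS loop never stalls.

import queue

frame_q = queue.Queue(maxsize=2)   # one per camera; queue.Queue is itself thread safe

def camera_put(frame):
    # Single producer per queue: drop the stale frame rather than block the capture loop.
    try:
        frame_q.put_nowait(frame)
    except queue.Full:
        try:
            frame_q.get_nowait()   # discard the oldest frame
        except queue.Empty:
            pass                   # the viewer drained it first
        frame_q.put_nowait(frame)

def viewer_get():
    # Blocks briefly until the camera publishes the next frame.
    return frame_q.get(timeout=1.0)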

Link to source (gdrive): Source Code

Is it a USB camera?
Try boosting the system:

sudo nvpmodel -m [MaxN]
sudo jetson_clocks
sudo su
echo 1 > /sys/kernel/debug/bpmp/debug/clk/emc/mrq_rate_locked
cat /sys/kernel/debug/bpmp/debug/clk/emc/max_rate | tee /sys/kernel/debug/bpmp/debug/clk/emc/rate

echo userspace > /sys/devices/13e10000.host1x/15340000.vic/devfreq/15340000.vic/governor

Hi,

Thanks for answering. Sorry for not providing all the information; it's very clear in my head :)
Yes, it's USB 3.1. I set the power mode to 2 (15W, 6 cores) on the NX, and I ran jetson_clocks and the echo commands.
Seemingly no effect.

I have linked an image to show the problem.
What I still believe is that there is a problem with reading from and writing to my memory from the camera threads and the one visualization thread.
Image result.

I instantiate my memory array, MemoryStack, in my main program.
Each element is a class object from Memories called _shared_memory. (Don't mind my nomenclature for private members etc.; it's a bit confusing.)

MemoryStack = [
    Memories._shared_memory(),
    Memories._shared_memory(),
    Memories._shared_memory()]

Each camera uses this after reading with cv2.imread():

self.SharedMemory.put(newFrame, self.ind)
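
For context, each camera thread essentially runs a loop like this (a simplified sketch, not my exact code; presumably the frame should really come from cv2.VideoCapture.read(), since cv2.imread() reads files from disk):

def run(self):
    # Grab frames as fast as the camera delivers them and publish the newest one.
    while self.running:
        ok, newFrame = self.capture.read()   # self.capture: a cv2.VideoCapture
        if not ok:
            continue
        self.ind += 1                        # running frame index
        self.SharedMemory.put(newFrame, self.ind)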

The Visualization does this:

with self.Lock:
    image1, index1 = self.memory[0].get()
    image2, index2 = self.memory[1].get()
    image3, index3 = self.memory[2].get()

if int(self.properties.numberofcameras) == 1:
    fullImage = image1
elif int(self.properties.numberofcameras) == 2:
    fullImage = cv2.hconcat([image1, image2])
elif int(self.properties.numberofcameras) == 3:
    fullImage = cv2.hconcat([image1, image2, image3])
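
(One detail: each slot starts out as None, so the viewer can reach hconcat before every camera has published a first frame. A small guard, sketched here, avoids a crash in that window:)

# Skip this refresh until all cameras have delivered a first frame.
if image1 is None or image2 is None or image3 is None:
    return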

This is my memory class; the lock comes into play in the get and put methods. I think the lock I take in my visualization is redundant, since the get method already locks.
So every camera uses put and get. BUT I don't use a thread lock on the list in the main program.
So perhaps that is the problem: the camera threads read from MemoryStack, which is not itself
thread safe, even though every instance inside MemoryStack is protected with a lock when calling get/put.

import threading

class _shared_memory(object):
    # Single-slot buffer: one camera thread writes, the viewer reads.
    def __init__(self):
        self.tLock = threading.Lock()
        self._s_memory, self._index = None, None

    @property
    def _variable1(self):
        return self._s_memory

    @property
    def _variable2(self):
        return self._index

    def get(self):
        # Return the latest frame and its index atomically.
        with self.tLock:
            return self._variable1, self._variable2

    def put(self, value, index):
        # Overwrite the slot with the newest frame (note: no copy is taken).
        with self.tLock:
            self._s_memory = value
            self._index = index
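
One thing I notice while pasting this: put() stores only a reference to newFrame, never a copy. If the capture side reuses that buffer for the next frame, the viewer can read a half-overwritten image even though the locking is correct, which would look exactly like having only half the information. A defensive variant (untested sketch) copies on put:

import numpy as np

def put(self, value, index):
    # Store our own copy so later writes into the camera's frame buffer
    # cannot tear the image the viewer thread is reading.
    with self.tLock:
        self._s_memory = np.copy(value)
        self._index = index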

Hope I make sense here… Perhaps the source code I linked to can shed better light.

Sorry, I'm not familiar with shared_memory. However, did you try with a single camera, or two?
Or confirm with gst-launch-1.0 v4l2src …

I have tried with 1 and 2 cameras and did not see the flicker. I then tried with 3 cameras and it appeared immediately. That rather indicates that the system can manage the threading with 2 cameras, but three seems to create timing issues.

I will try with gst-launch. My recollection is that it works, but I will try again just to make sure the problem is not with something other than my code.

So: I have tried with gst-launch and that works perfectly. Very nice, fast images with no lag.
I have also changed the code so that, instead of one list of Memories, I pass individual
memories of class _shared_memory.

Since each memory then has its own threading.Lock instance, I assumed it would work for the viewer to read each image in sequence. But it does not.

I have created:

memory1 = Memories._shared_memory()
memory2 = Memories._shared_memory()
memory3 = Memories._shared_memory()

I pass these (by reference, I think) to camera threads 1, 2 and 3, and also to the viewer thread.
That means each camera thread writes to its own independent memory, and the visualizer picks up the memories one by one, in sequence, to form one large image to show on screen.

Same result: flicker and errors.

It could be that a memory copy causes the problem.
I would suggest finding another way for your case.

I have been trying to find another way now.
I am looking into creating one combined GStreamer pipeline using the compositor. I have the pipeline working to xvoverlaysink, and it seems to work for appsink too. This would mean that at a later stage I have to cut the image in three when performing inference. I have also read that this approach is not recommended, but I don't know why.
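
Roughly like this (a sketch reconstructed from memory, so treat the caps and sink-pad geometry as illustrative; nvcompositor places each NVMM RGBA input on the output canvas via its sink-pad properties, and I show nvoverlaysink as the display sink here):

gst-launch-1.0 nvcompositor name=comp \
    sink_0::xpos=0 sink_1::xpos=1600 sink_2::xpos=3200 \
  ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvvidconv ! nvoverlaysink \
  v4l2src device=/dev/video0 ! 'video/x-raw, width=(int)1600, height=(int)1300, format=(string)GRAY8, framerate=(fraction)60/1' \
    ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! comp.sink_0 \
  v4l2src device=/dev/video1 ! 'video/x-raw, width=(int)1600, height=(int)1300, format=(string)GRAY8, framerate=(fraction)60/1' \
    ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! comp.sink_1 \
  v4l2src device=/dev/video2 ! 'video/x-raw, width=(int)1600, height=(int)1300, format=(string)GRAY8, framerate=(fraction)60/1' \
    ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! comp.sink_2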

Do you have a suggestion for another way, other than DeepStream?
Is the Gst.ElementFactory approach better? I'm uncertain whether you can get the image using cv2.imread().

Could you give the link to where it is not recommended?

Perhaps they were talking in general, and the approach of having one big pipeline with 3 cameras would work.

Please describe your use case clearly; that would make it much easier to give advice.

My use case:

I noticed that this category is for the Jetson Orin NX. That is not my platform; mine is the Jetson Xavier NX. Maybe move this topic?
I have 3 cameras, 1600x1300 @ 60 fps each (2 MP, from e-con Systems), that I would like to stream to screen and to disk. All frames should be dumped to the NVMe drive and presented to the user on screen (like a regular video recorder, you might say).
The second option is to take each frame from each camera and run object detection using MobileNet on TensorFlow 1.15.

Option 1: only save all images to disk.
I also need to pause and resume the video feed from all three cameras, to be able to set proper focus.
I have attached a rough thread diagram.

Option 2: include AI.

Maybe check whether the MMAPI has sample code to reference.

That would be highly appreciated!

This is the GStreamer pipeline I use:
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, width=(int)1600, height=(int)1300, format=(string)GRAY8, framerate=(fraction)60/1' ! nvvidconv \
  ! 'video/x-raw(memory:NVMM), width=(int)1600, height=(int)1300, format=I420' ! tee name=t \
  t. ! queue ! nvjpegenc ! multifilesink location='/home/user/nvme/Images/Image01_1%09d.jpg' \
  t. ! queue ! nvvidconv ! 'video/x-raw, format=BGRx' ! videoconvert ! 'video/x-raw, format=BGR, width=(int)800, height=(int)650' ! videoconvert ! appsink

/nvme/ is a symlink to the NVMe mount.

Hello @DaneLLL
I found this: Video scaling with GStreamer element 'videoscale' and 'nvdrmvideosink set-mode=1' - #4 by bruno.kempf
I was thinking that my pipeline string above is overly complicated. I want to save at 1600x1300 but present it on screen at a smaller scale. But just to ensure a proper stream to disk, I was thinking I could optimize the pipeline.
I got this to work:

gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 ! 'video/x-raw(memory:NVMM), framerate=(fraction)60/1' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -ev

This produces 60 fps for one camera, of course. But this time I am in NVMM, so I would assume the rest of the pipeline is faster and does not copy the image so much?!

But when trying with GREY I get a green, distorted image (see the image at the end of this reply):

gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 ! 'video/x-raw(memory:NVMM), width=(int)1600, height=(int)1300, framerate=(fraction)60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=(int)1600, height=(int)1300, format=I420' ! nvvidconv ! 'video/x-raw, format=BGRx' ! videoconvert ! 'video/x-raw, format=BGR, width=(int)800, height=(int)650' ! videoconvert ! xvimagesink -e

I should be able to skip a few steps, but I don't know how. The 12_camera_v4l2_cuda sample works perfectly:

./camera_v4l2_cuda -d /dev/video0 -s 1600x1300 -f GREY -c -r 60

[INFO] (NvEglRenderer.cpp:110) Setting Screen width 1600 height 1300
----------- Element = renderer0 -----------
Total Profiling time = 1.94972
Average FPS = 59.4958
Total units processed = 117
Num. of late units = 116

Any idea how to get that into gst-launch? Also, how do I translate that into a Python cv2 pipeline?
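
For the Python side, what I have in mind is something like this (an untested sketch based on the working v4l2src pipeline above, with the tee/disk branch left out):

import cv2

# GRAY8 1600x1300@60 in; a small BGR image out to OpenCV via appsink.
pipeline = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw, width=(int)1600, height=(int)1300, "
    "format=(string)GRAY8, framerate=(fraction)60/1 ! "
    "nvvidconv ! video/x-raw(memory:NVMM), format=I420 ! "
    "nvvidconv ! video/x-raw, width=(int)800, height=(int)650, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! "
    "appsink drop=true max-buffers=2"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("cam0", frame)
    if cv2.waitKey(1) == 27:   # Esc quits
        break
cap.release()
cv2.destroyAllWindows()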

Hi,
GRAY format is not supported in nvv4l2camerasrc. Please use the v4l2src plugin.

:(
I was under the impression that it was possible, since the C++ sample seemed to do the conversion.
Crap…
Any idea how to speed up the pipeline above? I just need to save to disk FAST and show something on screen.

Do you think that a queue in front of the tee branch could solve it?
I just can't imagine why there is this interference when reading the images and storing them with a queue lock. Perhaps the camera buffer is not locked and is somehow being accessed while the camera is writing and the thread is reading.

… Never mind. I already had a queue after the tee.
