jetson.utils.videoSource using /dev/video0 returns no frame data when called in a Thread

OO_test.py (2.0 KB)

See the attached self-contained test code. Running this on a Nano returns black frames, yet running the example code from NVIDIA on the same Nano works like a charm, and those are basically the same API calls. For some reason, when running from a class implementing a Thread object, the API returns no data inside the frame. Also, I had to use Capture() to start the stream; calling Open() never succeeds (see the code, it will make sense).
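In case it helps reproduce the issue, the failing pattern boils down to something like this (a minimal sketch only; the class, attribute names, and main block are illustrative, not the actual OO_test.py):

    import threading
    import jetson.utils

    class CameraHandler(threading.Thread):
        # sketch of the failing pattern: all camera work happens inside the thread
        def __init__(self, camera_number):
            super().__init__(daemon=True)
            self.cn = camera_number

        def run(self):
            # resources are created inside the thread
            cap = jetson.utils.videoSource("/dev/video" + str(self.cn))
            display = jetson.utils.videoOutput("display://0")
            # Open() never succeeds here; the stream only starts once Capture() is called
            while display.IsStreaming():
                img = cap.Capture()      # returns, but the frame contains no data (black)
                display.Render(img)

    if __name__ == "__main__":
        handler = CameraHandler(0)
        handler.start()
        handler.join()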

Hi @andrea_Faction, I’ve not tried using these objects inside Python threads, as the videoSource interface already uses threading internally (inside its C++ implementation), so I’m not sure additional threading from the Python side is needed. Does the same program flow work without threading?
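For reference, the single-threaded flow from the stock examples looks roughly like this (paraphrased, not the exact sample code):

    import jetson.utils

    # everything stays on the main thread, as in the stock jetson-inference samples
    cap = jetson.utils.videoSource("/dev/video0")
    display = jetson.utils.videoOutput("display://0")

    while display.IsStreaming():
        img = cap.Capture()
        display.Render(img)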

A couple things to try:

  1. As a test, inside Run() comment out the other stuff besides display.Render(). Is the display still black?
				self.display.Render(img)

				# detections = self.net.Detect(img)
				# do something with the detections
				# convert to jpeg and ...
				# jetson.utils.cudaDeviceSynchronize()
				# image = jetson.utils.cudaToNumpy(img)
				# encoded = cv2.imencode(".jpg", image)[1].tostring()
  2. Starting the stream with Capture() in another thread shouldn’t be necessary, as the stream will automatically start the first time you call Capture(). So skip start_streaming() and just do that inside Run(), and try creating the resources in there too. Something like this:
    def run(self):
        camera_URI = "/dev/video" + str(self.cn)
        self.cap = jetson.utils.videoSource(camera_URI)
        self.display = jetson.utils.videoOutput("display://0")  # 'my_video.mp4' for file

        while True:
            img = self.cap.Capture()
            self.display.Render(img)

Thank you @dusty_nv. Neither made a difference, so what I did was start the streams for the two cameras from the parent Python application (an Ubuntu service I start at boot) that spawns the other processes (threads), and that worked. I have modified my multithreaded camera handler class to do the processing on trigger, as it “receives” a new frame from the parent. Obviously this is not a proper OO design; I wanted to fully encapsulate what a camera handler object does, since I have a number of other functionalities alongside it that are fully encapsulated. Any chance we can get to the bottom of this? The above works, but frankly it is not clean. Please and thank you.
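For anyone reading later, the workaround looks roughly like this (a sketch only; the queue-based hand-off and class names are illustrative, the actual service is more involved):

    import queue
    import threading
    import jetson.utils

    class CameraHandler(threading.Thread):
        # processes frames handed over by the parent instead of capturing them itself
        def __init__(self):
            super().__init__(daemon=True)
            self.frames = queue.Queue(maxsize=1)   # the "trigger": a new frame arriving

        def submit(self, img):
            # drop the frame if the handler is still busy with the previous one
            try:
                self.frames.put_nowait(img)
            except queue.Full:
                pass

        def run(self):
            while True:
                img = self.frames.get()
                # ... detection / JPEG encoding / etc. happens here ...

    # parent application: the streams are opened and captured on the main thread
    cameras = [jetson.utils.videoSource("/dev/video0"),
               jetson.utils.videoSource("/dev/video1")]
    handlers = [CameraHandler() for _ in cameras]
    for h in handlers:
        h.start()

    while True:
        for cam, handler in zip(cameras, handlers):
            handler.submit(cam.Capture())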

Hi @andrea_Faction, glad that you found a way to get it working. I don’t plan to really dig into Python threading of the objects since it is an uncommon use-case, sorry about that. Such multithreaded applications would probably be a better fit for C++, which already incorporates the threading.

Fair enough, I can agree to that line of thought. Thank you.