I’m using the jetson-utils library to capture frames from a CSI camera. When I run the frame-capture function in a separate Python Process, I get a “nvmapmemcachemaint” error. The output is a single green frame, and no new frames are fetched after that.
I don’t think there’s anything wrong with the camera since the same function works fine if put on the main process. Is it something to do with CUDA? Any advice would be very useful.
You may be running into the difference between a new process and a new thread here.
In Python, a new thread has access to all the resources of the main thread, but a spawned process is separate from the main process and can’t access its resources; it can only access the resources that are passed to it when it is spawned.
I would consider whether you need a new process for your use case or whether threading will fill your needs.
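A minimal sketch of the difference (the dict here is just a stand-in for whatever resource you care about):

```python
# A thread sees the parent's objects directly, while a spawned process
# only works on a copy of what is passed to it.
import threading
import multiprocessing as mp

shared = {"frames": 0}

def worker(state):
    state["frames"] += 1   # in a thread this mutates the parent's dict;
                           # in a separate process it mutates a private copy

if __name__ == "__main__":
    t = threading.Thread(target=worker, args=(shared,))
    t.start(); t.join()
    print(shared["frames"])   # 1 -- the thread shared the same object

    p = mp.Process(target=worker, args=(shared,))
    p.start(); p.join()
    print(shared["frames"])   # still 1 -- the process worked on its own copy
```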
You have an interesting point, but I have a long-running CPU-bound task running on my main process, and I want the camera feed to display and update faster than that task would allow. Python threads are limited by the GIL and don’t speed things up when the work is CPU-bound. That’s why I’m using Process.
Maybe I should somehow create the camera object within the new process before executing? I’ll try that and threading.
@ShaneCCC Thank you for the code, but I found that the standard GStreamer camera pipeline causes noticeable lag. I tracked it down to videoconvert to BGR causing the high CPU processing times. That’s why I’m using jetson_utils to grab the data from the GPU and process it from there.
This is what I would try first. However, jetson.utils uses CUDA memory, which is not cross-process, so you may run into issues with that (perhaps that is already the issue).
Also, jetson.utils.gstCamera and jetson.utils.videoSource are already multithreaded inside their C++ implementations, so I don’t think you will gain much by trying to thread them again from the Python side. Instead you can pass timeout=0 to the Capture()/CaptureRGBA() call; it will return immediately instead of blocking, and it raises an exception if no frame is ready, so you would need to catch that.
Also, it’s recommended to update your code to use jetson.utils.videoSource instead, as using jetson.utils.gstCamera directly is deprecated.
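Something along these lines (the csi://0 device string is just an example, and whether a missed frame shows up as an exception or a None return depends on your jetson-utils version):

```python
import jetson_utils

camera = jetson_utils.videoSource("csi://0")       # example device string
display = jetson_utils.videoOutput("display://0")

while display.IsStreaming():
    try:
        img = camera.Capture(timeout=0)   # return immediately instead of blocking
    except Exception:
        continue                          # no frame ready yet -- do other work and retry
    if img is None:                       # newer builds return None instead of raising
        continue
    display.Render(img)
```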
I tracked it down to videoconvert to BGR causing the high CPU processing times.
From all my googling of bottlenecks in camera pipelines, this seems to be a common issue: using videoconvert to convert BGRx to BGR. Converting from BGRx to BGR is only a case of slicing off one colour channel. Is it better to grab the frame in BGRx and then slice off the extra channel in code, to free GStreamer to process the next frame? At the Python level that would be image_without_alpha = image[:,:,:3].
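For example, something like this, assuming the frame comes back as an H x W x 4 numpy array in BGRx order:

```python
import numpy as np

# stand-in for a BGRx frame handed back by the pipeline
frame = np.zeros((720, 1280, 4), dtype=np.uint8)

# drop the padding/alpha channel -- this is a view, so no pixel data is copied
image_without_alpha = frame[:, :, :3]

# if downstream code needs its own contiguous buffer, copy explicitly
bgr = np.ascontiguousarray(image_without_alpha)
```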
I looked back at the code I posted, and I already create the camera inside the function of the new process.
I attempted to move the “import jetson_utils” inside the function, but I’m still facing the same problem.
Next time I have the NX, I’ll update to videoSource or try set_start_method('spawn') before the Process to see if it helps (ref). If not, I will just have to let it run in the main process and work around it.
@Out_of_the_BOTS I personally found a significant reduction in camera-to-display latency when grabbing straight from nvvideoconvert rather than going through OpenCV’s GStreamer capture.
My previous history is with robotics, mainly on microcontrollers; in such a constrained environment you tend to try to save every machine instruction and byte of RAM possible.
It seems that Nvidia has mainly supported the Sony IMX range of camera sensors, which only output raw Bayer data, so everyone is using GStreamer to convert the Bayer to BGRx and then converting that to BGR. With embedded devices we tend to use OmniVision sensors (like the original version of the RPi cam), since the camera sensor can output both raw data and, using its onboard processing, RGB data. Having the camera already do this processing for you saves the processor that is reading the sensor from having to do it. I did see that RidgeRun created a Jetson Nano Linux driver for the OmniVision OV5647 sensor, as this is the original RPi camera v1 sensor and what all the aftermarket RPi camera modules use. See Ov5647 Camera Module For Raspberry Pi 3b 4b 3b+ Adjustable Focus 120 130 160 Degree 3.6mm Hd 5 Million Pixel Night Vision - Integrated Circuits - AliExpress. If you can get that sensor up and running then you won’t have any of that processing to do on the Jetson.
@Out_of_the_BOTS RGB/BGR data will typically be bigger than Bayer, so there are bandwidth savings in keeping the original Bayer encoding. The issue is when CPU-only GStreamer elements get used for the colorspace conversion. In jetson.inference/jetson.utils, these conversions all happen in hardware or in CUDA on the GPU with minimal memory transfers, so the processing times and latency are negligible.
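For reference, a rough sketch of doing the conversion in CUDA via the cudaAllocMapped/cudaConvertColor Python bindings (the device string is just an example, and exact function availability depends on your jetson-utils version):

```python
import jetson_utils

camera = jetson_utils.videoSource("csi://0")        # example device string
frame = camera.Capture()                            # cudaImage in GPU/shared memory (e.g. rgb8)

# allocate a destination image in the target format and convert on the GPU
bgr = jetson_utils.cudaAllocMapped(width=frame.width, height=frame.height, format='bgr8')
jetson_utils.cudaConvertColor(frame, bgr)           # runs as a CUDA kernel, not on the CPU
jetson_utils.cudaDeviceSynchronize()                # wait for the kernel before using 'bgr'
```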
I have used both the IMX and the OV sensors for computer vision on the RPi, as well as OV sensors on microcontrollers. The OV sensors can do both RAW and RGB modes, and some of them can do JPEG compression on the sensor.
The other thing I found better with the OV camera was that it can do smaller image resolutions at higher frame rates. The OV has a register that lets you control pixel skipping, sending only every second, third or fourth pixel. I used this with a competition line-following robot on an RPi: as frame rates go up, the robot can go faster while staying on the line. It is my understanding that Boston Dynamics’ Spot runs its computer vision at 90 FPS. When I updated the camera from RPi cam V1 to RPi cam V2, my robot couldn’t go as fast because I couldn’t get the same high frame rates from the V2 cam. The V2 cam also bins/crops more than the V1 cam, so you both get a smaller FOV and are using less of the sensor.
Are there any plans for Nvidia to add support for the OV cameras, as they are a more commonly used sensor for computer vision? RPi has support for the OV sensors, so I am not sure whether the driver work they have done could be ported to Jetson easily.
Are there any plans to support any of the OmniVision sensors? They design for the embedded market and tend to have features more aligned with embedded needs, and modules using OV sensors are generally much more available than the IMX range. See https://www.ovt.com/
If support for one of the OV sensors is added, then most of the hard work is done for adding any other sensor in the OV range, as generally they all have most of their registers set up the same way.
Yes, Raspberry Pi no longer uses the OV5647 sensor (although they still support it) and has switched to the IMX219, but this doesn’t mean the manufacturer has stopped making their best-selling sensor, or that every other user has stopped using it just because RPi did. Please have a short read of the features of this sensor: see OV5645 | OMNIVISION
I think I fixed it. Taking the original code I posted before, if you call mp.set_start_method('spawn') before anything else launches, it works perfectly fine.
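Roughly, the structure that works for me looks like this (the capture loop is a simplified stand-in for my original function, and csi://0 is just an example device string):

```python
import multiprocessing as mp

def camera_loop():
    # the camera is still created inside the child process, as in my original code
    import jetson_utils
    camera = jetson_utils.videoSource("csi://0")
    display = jetson_utils.videoOutput("display://0")
    while display.IsStreaming():
        img = camera.Capture()
        if img is not None:
            display.Render(img)

if __name__ == "__main__":
    # must be called before any Process is created, so the child starts as a fresh
    # interpreter instead of a fork of a process that may have already touched CUDA
    mp.set_start_method('spawn')
    p = mp.Process(target=camera_loop)
    p.start()
    # ... long-running CPU-bound work stays here in the main process ...
    p.join()
```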