Hi,
I managed to do my first experiments. I connected two cameras to the Jetson Nano, both 1600x1300 at 60 fps. I use OpenCV and wrote a C++ loop that just reads frames into a buffer, but it doesn't do anything with that buffer yet.
When running this and checking the system monitor, I see that 3 of the 4 CPUs are about 70% busy! But I'm not doing anything with the frames! The CPUs should be mostly idle: only data is being moved, and that is probably done with DMA.
I still need to write the code that will analyse these frames, but it seems I will not have enough processing power left for that.
The GPU is busy as well (fluctuating between 0 and 80%)! Why?
Finally, 80% of the 4 GB of RAM is used! Yes, I have big images, but one image is at most 1600x1300x2 bytes, thus about 4 MB. I assume only 1 or 2 pictures are stored per channel (otherwise the latency would be very bad).
Is the conclusion that a Jetson Nano is way too weak to handle this?
I was planning to switch to Jetson Orin Nano later, but then with 6 cameras!
I hope I’m doing something wrong and this can be improved a lot.
Obviously the ARM cores of the Nano (derived from the TX1) are not as efficient as recent desktop CPU cores.
So for this pixel rate you should avoid CPU processing as much as possible. The Jetson has dedicated hardware for video scaling, format conversion and encoding/decoding, and the GPU for image processing.
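As a rough illustration of moving per-frame work onto the GPU (this assumes an OpenCV build with the CUDA modules, which the stock JetPack OpenCV is not, and uses a simple threshold as a placeholder for real processing):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/cudaarithm.hpp>

// Placeholder per-frame processing done on the GPU instead of the CPU.
void processOnGpu(const cv::Mat& grayFrame)
{
    cv::cuda::GpuMat d_src, d_dst;
    d_src.upload(grayFrame);                                        // copy the frame to GPU memory
    cv::cuda::threshold(d_src, d_dst, 128, 255, cv::THRESH_BINARY); // runs on the GPU
    cv::Mat result;
    d_dst.download(result);                                         // copy the result back if the CPU needs it
}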
It would help if you explained your case in more detail:
What kind of cameras (CSI or USB), and which formats they provide
Share the code you're using with OpenCV (opening the cameras, reading from them, displaying). Also share the output of the OpenCV function getBuildInformation(); see the snippet after this list.
For better advice, also tell us what kind of processing you intend to do from source to sink.
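A minimal sketch for dumping that build information; it just prints the OpenCV build configuration, including which videoio backends (GStreamer, V4L2, FFMPEG) are enabled:

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // Print compiler flags, enabled modules and videoio backends of the installed OpenCV.
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}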
The thing is that I'm not doing any processing at all! The CPUs should be idle.
This is the OpenCV code:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    VideoCapture cap1;
    Mat CameraFrame1;
    VideoCapture cap2;
    Mat CameraFrame2;

    cap1.open(0);
    cap2.open(1);

    // Check whether the selected cameras were opened successfully.
    if (!cap1.isOpened())
    {
        cout << "***Could not initialize capturing...***\n";
        return -1;
    }
    if (!cap2.isOpened())
    {
        cout << "***Could not initialize capturing...***\n";
        return -1;
    }

    // Loop infinitely to fetch frames from the cameras.
    int frame = 0;
    for (;;)
    {
        cap1 >> CameraFrame1;
        cap2 >> CameraFrame2;

        // Check whether the received frames are valid.
        if (CameraFrame1.empty())
            break;
        if (CameraFrame2.empty())
            break;

        cout << "frame" << frame << endl;
        frame++;

        // Wait for the Escape key to exit the loop
        // (note: waitKey only delivers key events when a HighGUI window exists).
        char keypressed = (char)waitKey(10);
        if (keypressed == 27)
            break;
    }

    cap1.release();
    cap2.release();
    return 0;
}
What I need to do:
We have a C++ algorithm to detect dots and extract pose information from them. It runs on an x86 CPU (2.5 GHz) within 1 ms per frame (on one thread). So I assume that on the Jetson Nano I should be able to do that within 16 ms.
My biggest concern right now is that the CPUs are loaded to 70% while they are doing nothing.
This might use the V4L2 or FFMPEG backend and may result in CPU load. Since your OpenCV build also supports GStreamer, you may give that a try. You can check the modes provided by your cameras with:
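Presumably something like this, assuming V4L2 devices at /dev/video0 and /dev/video1 (v4l2-ctl is part of the v4l-utils package):

v4l2-ctl -d /dev/video0 --list-formats-ext
v4l2-ctl -d /dev/video1 --list-formats-ext

If they are V4L2/USB cameras, a GStreamer capture for OpenCV could look roughly like the sketch below. The caps (width, height, framerate, GRAY8) are assumptions and must match what v4l2-ctl reports, and delivering GRAY8 to appsink assumes a reasonably recent OpenCV GStreamer backend:

// Open camera 0 through GStreamer instead of the default V4L2/FFMPEG backend.
cv::VideoCapture cap1(
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,width=1600,height=1300,framerate=60/1 ! "
    "videoconvert ! video/x-raw,format=GRAY8 ! "
    "appsink drop=true max-buffers=2",
    cv::CAP_GSTREAMER);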
I did find out that there is an unwanted conversion from grayscale to RGB going on, and I am not able to prevent it by setting the format.
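For reference, the OpenCV calls involved would be something like the sketch below; whether the backend honours these properties is exactly the open question, and the GREY FOURCC is an assumption about the camera's native format:

// Request the camera's native 8-bit grayscale format and disable the automatic
// conversion to BGR; support for both properties depends on the capture backend.
cap1.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('G', 'R', 'E', 'Y'));
cap1.set(cv::CAP_PROP_CONVERT_RGB, 0);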
But I guess I should make a new thread for this problem.