Building OpenCV from source on Jetson TX2

I’m building OpenCV from source so it runs on the GPU instead of the CPU, by enabling CUDA, OpenGL, etc. Is this recommended or not? Does it require skipping the OpenCV install when flashing JetPack?

Also, how can I install the jetson-inference library so I can use it?

Hi M_okashaa,

We recently tried to build OpenCV from source to enable CUDA acceleration on the TX2, but we couldn’t finish the build because it ran out of memory.

We tried the same on Xavier; the compilation finished, but we didn’t get good results running OpenCV on the GPU. CPU usage did improve, but the framerate of our application was almost the same as when running it on the CPU.

You can install OpenCV, and possibly jetson-inference (I’m not sure), from SDK Manager, in the Jetson components section.

Greivin F.

Yeah. I have one of the more popular OpenCV build scripts for Tegra on GitHub, but I wouldn’t recommend actually using it for much of anything.

Many tests fail if you run them, and performance is poor even on the GPU code paths. I highly recommend taking a look at NVIDIA’s alternatives instead. OpenCV just wasn’t designed to take advantage of some of the things you can do with Tegra.

Edit: oh, and you can install jetson-inference from here:

Hello @greivin.fallas and @mdegans
First, thanks for your replies; they’re appreciated.
I will consider them, but can you please recommend any alternatives? The purpose of my project is to monitor the status of the driver in a car. The pipeline is to capture a picture from the live stream, perform face detection, and feed the faces to a deep CNN for classification.

I’m using Python, with OpenCV for video capture and face detection and TensorFlow for the classification. The real-time performance wasn’t good; there was a 6-second delay.

Any recommendations will be appreciated.

Monitoring the driver status actually seems like one of the few good applications for this kind of tech. Such a smart dashcam could actually save lives by raising an alarm and calling emergency services if it isn’t silenced. People fall asleep or have heart attacks all the time.

I would recommend DeepStream. It’s highly optimized and can do what you want. Only downside is a steep learning curve, but the folks in the DeepStream forum are very helpful and the examples are good. I can recommend some training materials if you wish.

I will take DeepStream into consideration and will also rely on the DeepStream forums, but it would be appreciated if you could recommend any learning material that can help with this project.

Even if you don’t know C, I would highly recommend starting with the GStreamer tutorials in C.

https://gstreamer.freedesktop.org/documentation/tutorials/index.html

My suggestion would be to retype the tutorials so they stick in your memory. Even if you don’t know C, and even if the framework isn’t friendly, you will learn. Basic use of a C compiler is explained in the first tutorial, if you don’t already know how to compile a .c program.
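For reference, the first tutorial’s build step is a one-liner where pkg-config supplies the compiler and linker flags (this assumes the GStreamer 1.0 development packages are installed):

```shell
# Compile the first GStreamer tutorial; pkg-config expands to the
# include paths and libraries needed for gstreamer-1.0.
gcc basic-tutorial-1.c -o basic-tutorial-1 \
    $(pkg-config --cflags --libs gstreamer-1.0)
```

The same pattern works for every tutorial in the series, just swap the source file name.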

The reason I suggest learning it in C is that GStreamer is written in C and looks like C in most languages, including Python. It will behave in ways you won’t expect, and which will only make sense if you understand what’s going on under the hood (e.g. with GLib’s MainLoop). Once you understand that, it is probably easier and safer to write it in another language.

When you are done with the GStreamer tutorials, DeepStream can be installed on Tegra (since JetPack 4.3) with sudo apt install deepstream-4.0. All the sample code can be found under /opt/nvidia/deepstream/....

If it sounds like a lot of work, it is, but if you want your thing to run fast even on an actual potato, accept no alternatives.

Hi,

I don’t have experience with DeepStream, but as far as I know it would help you run inference faster.

I don’t know of any alternative for face cropping similar to OpenCV, and I’m not sure that is the element introducing the long delay. For inference with TensorFlow, however, we have an open-source project that uses the CPU/GPU to run inference on video using GStreamer; it may be an alternative for the inference stage. I don’t have latency benchmarks, but I’m sure it’s quicker than 6 seconds using GPU processing.

GstInference: https://developer.ridgerun.com/wiki/index.php?title=GstInference/Introduction

Greivin F.

So, in DeepStream, one way to do it would be to have a primary inference engine (nvinfer) do the face detection. That attaches metadata about where the faces are to each buffer. The buffer and its attached metadata are sent downstream in the pipeline to a secondary inference engine (also nvinfer, but with a different config), which in your case would do the alertness classification on that part of the buffer. No cropping is required. The actual buffer isn’t modified at all (unless you want it to be) as it passes down the pipeline. You will probably also want a tracker in there somewhere to improve performance, so inference doesn’t need to run on every frame.
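As a rough sketch of the detect-then-classify cascade described above (the config file names here are hypothetical placeholders, and element properties may differ between DeepStream releases), a command-line pipeline would look something like:

```shell
# Hypothetical cascaded DeepStream pipeline: the first nvinfer (primary)
# detects faces and attaches metadata, nvtracker keeps object IDs across
# frames so inference can be skipped, and the second nvinfer (secondary)
# classifies alertness on each detected face region.
gst-launch-1.0 \
  nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=face_detect_config.txt ! \
  nvtracker ll-lib-file=libnvds_mot_klt.so ! \
  nvinfer config-file-path=alertness_classify_config.txt ! \
  nvvideoconvert ! nvdsosd ! nvoverlaysink
```

The real work lives in the two nvinfer config files, which point at your detector and classifier models; the DeepStream samples include working configs you can start from.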

NVIDIA also includes various message brokers you can use to send data to a database instance, or to some other software that raises an alert in case of driver drowsiness. The benefit of doing this with Tegra is that it all runs at the edge; it doesn’t have to rely on cloud-based services or a subscription, so a data connection (other than maybe Wi-Fi to the dashcam) is not required. GStreamer already supports various network sources, so connecting it to a Wi-Fi-connected dashcam shouldn’t be a big deal.

Hi M_okasha!

I have built OpenCV from source on a TX2 board with CUDA support. Please follow the next link for details:

Some of the problems regarding space on the device are due to building the examples and tests, which are not needed in many cases.
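To illustrate, a minimal CUDA-enabled configure step that skips examples, tests, and docs would look roughly like this (flag values are common choices, not taken from the linked guide; adjust them for your setup):

```shell
# Configure OpenCV for a CUDA build, run from a build/ directory
# inside the OpenCV source tree. Disabling examples, tests and docs
# cuts build time and disk usage considerably.
cmake \
  -D CMAKE_BUILD_TYPE=Release \
  -D WITH_CUDA=ON \
  -D CUDA_ARCH_BIN="6.2" \
  -D WITH_OPENGL=ON \
  -D BUILD_EXAMPLES=OFF \
  -D BUILD_TESTS=OFF \
  -D BUILD_PERF_TESTS=OFF \
  -D BUILD_DOCS=OFF \
  ..
```

CUDA_ARCH_BIN="6.2" is the TX2’s compute capability; restricting the build to one architecture also speeds up compilation.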

In case of doubts, just let us know!

Regards!

I’d recommend having a swapfile mounted somewhere if you try to build OpenCV. You might also wish to turn off X11 temporarily with sudo systemctl isolate multi-user.target. I am able to build OpenCV without swap on Xavier, but that machine has 16 GB of RAM. It will fail for sure on a Nano without swap. I am not sure about the TX2.
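For reference, a temporary swapfile can be set up along these lines (the path and 8 GB size are just an example; pick what fits your storage):

```shell
# Create and enable an 8 GB swapfile (example path and size).
sudo fallocate -l 8G /var/swapfile
sudo chmod 600 /var/swapfile   # swap must not be world-readable
sudo mkswap /var/swapfile
sudo swapon /var/swapfile

# Verify that the swap is active before starting the build.
free -h
```

Add an entry to /etc/fstab if you want it to persist across reboots, or swapoff and delete the file once the build is done.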