Hello,
I am using the NVIDIA "10 lines of code" tutorial to build my application. This is the link for the tutorial.
I need to feed the frames produced in this tutorial into OpenCV functions. To be more specific, this is the line in the tutorial that produces the frame:
img = camera.Capture()
But OpenCV functions throw an error when given the img object. How do I fix this problem?
I need to use OpenCV for further processing of the frames.
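For reference, here is a minimal sketch of the conversion that is usually needed, assuming the jetson-utils Python API (cudaToNumpy and cudaDeviceSynchronize); the camera URI and the Canny call are placeholders for your own source and processing:

import cv2
import jetson.utils

camera = jetson.utils.videoSource("/dev/video0")   # placeholder; use the same source as the tutorial

img = camera.Capture()                             # cudaImage living in GPU/shared memory
jetson.utils.cudaDeviceSynchronize()               # make sure the GPU is done writing the frame

frame = jetson.utils.cudaToNumpy(img)              # map the CUDA image to a NumPy array (RGB order)
frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)     # OpenCV routines expect BGR order

edges = cv2.Canny(frame, 100, 200)                 # any CPU OpenCV function works on frame now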
Thanks so much for the timely reply.
I am developing a large program. Is there a reference I can use for all the commands needed to pass data from CUDA into OpenCV, or the other way around?
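As an illustration of the other direction (a hedged sketch, assuming jetson-utils' cudaFromNumpy; the file name is hypothetical):

import cv2
import jetson.utils

bgr = cv2.imread("frame.jpg")                      # hypothetical OpenCV image on the CPU (BGR)
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)         # back to the RGB order CUDA-side code expects

cuda_img = jetson.utils.cudaFromNumpy(rgb)         # copies the NumPy array into CUDA memory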
I am developing a program that can measure car speeds accurately in real time.
I need to use sparse optical flow, corner detection, edge detection, and other OpenCV functions in real time.
I decided to use NVIDIA Jetson hardware and software.
I want to use your "10 lines of code" tutorial as the starting point, apply OpenCV functions, and do the processing as fast as possible. Am I following a meaningful path? How can the processing be made as fast as possible on the Jetson platform?
Sure, I would recommend looking into the CUDA-accelerated version of OpenCV for those algorithms, or checking out NVIDIA VPI (Vision Programming Interface), which is fast and has Python bindings.
For OpenCV + CUDA, you can use the latest l4t-ml container (for JetPack 4.6) which has it pre-built, or this script to build it yourself:
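As a rough illustration of what those GPU-accelerated calls look like from Python (a sketch, assuming an OpenCV build with the cv2.cuda module enabled; the random frames are stand-ins for consecutive grayscale camera frames):

import cv2
import numpy as np

# Two consecutive grayscale frames (dummy data; in practice these come from your camera)
prev_gray = np.random.randint(0, 255, (720, 1280), dtype=np.uint8)
next_gray = np.random.randint(0, 255, (720, 1280), dtype=np.uint8)

# Upload the frames to GPU memory
prev_gpu = cv2.cuda_GpuMat()
next_gpu = cv2.cuda_GpuMat()
prev_gpu.upload(prev_gray)
next_gpu.upload(next_gray)

# Corner detection on the GPU
detector = cv2.cuda.createGoodFeaturesToTrackDetector(cv2.CV_8UC1, 1000, 0.01, 10)
pts_gpu = detector.detect(prev_gpu)

# Sparse pyramidal Lucas-Kanade optical flow on the GPU
lk = cv2.cuda.SparsePyrLKOpticalFlow_create()
next_pts_gpu, status_gpu, err_gpu = lk.calc(prev_gpu, next_gpu, pts_gpu, None)

# Canny edge detection on the GPU, then download the result to the CPU
canny = cv2.cuda.createCannyEdgeDetector(100, 200)
edges = canny.detect(next_gpu).download()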
Thank you for your response. Where can I find the sources for CUDA-accelerated OpenCV and NVIDIA VPI? I will be using OpenCV heavily, so I need documentation for the commands.
I need to load the resnet10.caffemodel model into the detectnet-camera.py Python file (located at /home/nx2/jetson-inference/build/aarch64/bin). The model comes with the DeepStream package, has four classes (car, bicycle, person, roadsign), and is a good-quality network for detecting cars and people. The goal is to feed the frame and the detected car boxes into OpenCV functions. But it gives me a TensorRT error. This is my command-line invocation and the error. How do I fix this problem?
The VPI documentation is here: https://docs.nvidia.com/vpi/index.html
The OpenCV documentation is maintained by OpenCV; try searching for "OpenCV CUDA Python".
You need to specify the correct layer names for your model using --input_blob and --output_blob.
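For example, a hypothetical invocation might look like the one below. The flag names vary between jetson-inference versions, and detection models typically take separate coverage and bounding-box output layers rather than a single --output_blob, so check detectnet-camera.py --help on your install; conv2d_bbox is my assumption for the bounding-box layer name:

python3 detectnet-camera.py \
    --prototxt=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.prototxt \
    --model=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel \
    --labels=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/labels.txt \
    --input_blob=input_1 \
    --output_cvg=conv2d_cov/Sigmoid \
    --output_bbox=conv2d_bbox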
I changed the names of the input and output layers using the resnet10.prototxt file. The name of the very first layer is input_1 and the name of the very last layer is "conv2d_cov/Sigmoid". The network sometimes runs and sometimes fails. When it runs, nothing is detected.
The labels and the network files are located on the Jetson at:
/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector
Why is nothing detected? How can I solve this problem?
Many thanks.
I haven't tried this model before - you would want to check that the pre/post-processing matches what is done in imageNet.cpp. For example, the mean pixel subtraction coefficients and any normalization that is supposed to be applied.
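As a rough sketch of what that pre-processing check involves (the mean values and scale factor below are placeholders, not the real coefficients for this model; they must match what the network was trained with):

import numpy as np

MEAN = np.array([0.0, 0.0, 0.0], dtype=np.float32)  # placeholder per-channel means
SCALE = 1.0 / 255.0                                  # placeholder normalization factor

def preprocess(frame_rgb):
    # frame_rgb: HxWx3 uint8 image -> CHW float32 tensor for the network
    x = frame_rgb.astype(np.float32)
    x = (x - MEAN) * SCALE           # mean pixel subtraction + normalization
    return x.transpose(2, 0, 1)      # HWC -> CHW, the layout Caffe models expect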
I don't know how to do this, or whether it will work or not.
DeepStream has Python bindings. I need to use a USB camera, and I found the deepstream-test1-usbcam app in Python.
Is there a way to get every frame in this app and apply OpenCV functions to it? Please advise.
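One common pattern is to attach a pad probe and pull each frame out as a NumPy array via pyds. A hedged sketch (assuming the pyds bindings; note that pyds.get_nvds_buf_surface needs the buffer in RGBA format, which usually means inserting an nvvideoconvert plus capsfilter before the probe point):

import numpy as np
import cv2
import pyds
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def buffer_probe(pad, info, u_data):
    # Pad probe that maps each DeepStream frame into host memory as a NumPy array
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Map the NvBufSurface for this frame (RGBA) and copy it so we own the memory
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame = np.array(n_frame, copy=True, order='C')
        bgr = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGR)

        # ... run any OpenCV processing on bgr here ...

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK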