Feeding frames from the "10 lines of code" tutorial into OpenCV

Hello,
I am using NVIDIA's "10 lines of code" tutorial to build my application. This is the link for the tutorial.

I need to feed the frame provided in this tutorial into OpenCV functions. More specifically, this is the line in the tutorial that produces the frame:
img = camera.Capture()

But OpenCV functions give an error when I pass them the img object. How do I fix this problem?
I need to use opencv for further processing of the frames.

Thanks,
Farough

Hi @farough, please call jetson.utils.cudaToNumpy(img) on the CUDA image before passing it to OpenCV functions.
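
For example, a minimal sketch assuming the videoSource API that the tutorial uses (the device path is a placeholder, and the RGB-to-BGR conversion assumes the default rgb8 capture format):

import cv2
import jetson.utils

camera = jetson.utils.videoSource("/dev/video0")  # placeholder device

img = camera.Capture()                        # CUDA image on the GPU
jetson.utils.cudaDeviceSynchronize()          # wait for the GPU before accessing from the CPU
array = jetson.utils.cudaToNumpy(img)         # numpy view of the CUDA memory
bgr = cv2.cvtColor(array, cv2.COLOR_RGB2BGR)  # jetson.utils images are RGB, OpenCV expects BGR
edges = cv2.Canny(bgr, 100, 200)              # now any OpenCV function works on the frame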

Thanks so much for the timely reply.
I am developing a large program. Is there a resource that covers all the commands I need for passing data from CUDA into OpenCV, and the inverse?

You can refer to these sections of the docs and samples:
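
For the inverse direction (an OpenCV/NumPy result back into CUDA memory), here is a minimal sketch using jetson.utils.cudaFromNumpy; the array below is just a placeholder for your processed frame:

import numpy as np
import jetson.utils

# result of some OpenCV processing (placeholder contents)
array = np.zeros((720, 1280, 3), dtype=np.uint8)

# copy the numpy array back into CUDA memory for jetson-inference/jetson-utils
cuda_img = jetson.utils.cudaFromNumpy(array)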

Thank you. The shared resources are very helpful!

I am developing a program that measures car speeds accurately in real time.
I need to use sparse optical flow, corner detection, edge detection, and other OpenCV functions in real time.
I decided to use NVIDIA Jetson hardware and software.
I want to use your "10 lines of code" tutorial as the starting point, use OpenCV functions, and do the processing as fast as possible. Am I following a meaningful path? How can the processing be made as fast as possible on the Jetson platform?

Thanks so much for the help.

Sure, I would recommend looking into the CUDA-accelerated version of OpenCV for those algorithms, or checking out NVIDIA VPI (Vision Programming Interface), which is fast and has Python bindings.
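
As a rough sketch of what that pipeline can look like with the cv2.cuda module (this assumes OpenCV was built with CUDA support; the frames, thresholds, and parameters below are placeholders):

import cv2
import numpy as np

# two consecutive grayscale frames (placeholder contents)
prev = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
curr = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)

# upload the frames to the GPU
gpu_prev = cv2.cuda_GpuMat()
gpu_prev.upload(prev)
gpu_curr = cv2.cuda_GpuMat()
gpu_curr.upload(curr)

# corner detection on the GPU
detector = cv2.cuda.createGoodFeaturesToTrackDetector(cv2.CV_8UC1, 200, 0.01, 10)
gpu_pts = detector.detect(gpu_prev)

# sparse pyramidal Lucas-Kanade optical flow on the GPU
lk = cv2.cuda.SparsePyrLKOpticalFlow_create()
gpu_next_pts, gpu_status, gpu_err = lk.calc(gpu_prev, gpu_curr, gpu_pts, None)

# Canny edge detection on the GPU
canny = cv2.cuda.createCannyEdgeDetector(50, 150)
gpu_edges = canny.detect(gpu_curr)

# download results only when you need them on the CPU
next_pts = gpu_next_pts.download()
edges = gpu_edges.download()

Keeping the frames in GpuMat between steps avoids CPU-GPU copies, which is usually where the real-time budget goes.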

For OpenCV + CUDA, you can use the latest l4t-ml container (for JetPack 4.6) which has it pre-built, or this script to build it yourself:

Thank you for your response. Where can I find resources for CUDA-accelerated OpenCV and NVIDIA VPI? I will be using OpenCV heavily, so I need documentation for these commands.

I need to load the resnet10.caffemodel model from the DeepStream package (it has 4 classes: car, bicycle, person, roadsign, and is a good-quality network for detecting cars and people) into the detectnet-camera.py Python script (located at /home/nx2/jetson-inference/build/aarch64/bin), so that I can feed the frames and the detected car boxes into OpenCV functions. But it gives me a TensorRT error. These are my command line and the error output. How do I fix this problem?

python3 detectnet-console.py --camera=/dev/video0 --model=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel --prototxt=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.prototxt --class_labels=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/labels.txt

detectNet -- loading detection network model from:
-- prototxt /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.prototxt
-- model /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- mean_pixel 0.000000
-- mean_binary NULL
-- class_labels /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/labels.txt
-- threshold 0.500000
-- batch_size 1

[TRT] TensorRT version 7.1.3
[TRT] loading NVIDIA plugins

[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16, INT8
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel.1.1.7103.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.prototxt /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel
[TRT] failed to retrieve tensor for Output "coverage"
Segmentation fault (core dumped)

Thanks so much

The VPI documentation is here: https://docs.nvidia.com/vpi/index.html
The OpenCV documentation is maintained by the OpenCV project; try searching for "OpenCV CUDA Python".

You need to specify the correct layer names for your model using the --input_blob and --output_blob arguments.
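
For example, a hedged sketch of what the command could look like once the layer names are read out of resnet10.prototxt (the flags follow the parameter names printed in the log above; the conv2d_bbox output name comes from the DeepStream detector config and is an assumption here):

python3 detectnet-console.py --camera=/dev/video0 --model=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel --prototxt=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.prototxt --class_labels=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/labels.txt --input_blob=input_1 --output_cvg=conv2d_cov/Sigmoid --output_bbox=conv2d_bbox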

I changed the names of the input and output layers using the resnet10.prototxt file. The name of the very first layer is "input_1" and the name of the very last layer is "conv2d_cov/Sigmoid". The network sometimes runs and sometimes fails. When it runs, nothing is detected.
The labels and the network files are located on the Jetson at:
/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector

Why is nothing detected? How can I solve this problem?
Many thanks.

I haven't tried this model before, but you would want to check that the pre/post-processing matches what is done in imageNet.cpp - for example, the mean pixel subtraction coefficients and any normalization that is supposed to be applied.
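
As a rough illustration of the kind of mismatch to look for (the 1/255 scale factor is taken from the DeepStream config for this detector and is an assumption here, as is the input resolution):

import numpy as np

# placeholder input frame in CHW layout, pixel values 0-255
frame = np.zeros((3, 368, 640), dtype=np.float32)

# jetson-inference detectNet style: per-channel mean subtraction (mean_pixel is 0.0 in the log above)
jetson_input = frame - 0.0

# DeepStream style for this model: scale pixels into [0, 1] (net-scale-factor = 1/255, an assumption)
deepstream_input = frame * (1.0 / 255.0)

# if the preprocessing disagrees, the network sees inputs in the wrong range and detects nothing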

I don't know how to do this, or whether it will work.
DeepStream has Python bindings, and I need to use a USB camera. I found the deepstream-test1-usbcam app in Python.
Is there a way I can get every frame in this app and apply OpenCV functions to it? Please advise.

Thanks so much

There should be, but please open a new topic about it as I’m not an expert on DeepStream. You may want to try the DeepStream SDK forum.