Memory allocation problem with cudaFromNumpy when used with OpenCV

HW : Jetson NX devkit
purpose : run object detection example code with remote camera (need to use openCV)

Here is my code, modified from the example code.


import cv2
import jetson.inference
import jetson.utils
import numpy as np

#camera = cv2.VideoCapture('http://10.42.0.80:8090')  # remote cam url
camera = cv2.VideoCapture(0)
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# grab one frame and read its dimensions
_, img = camera.read()
width = img.shape[1]
height = img.shape[0]
print("\n debug 1\n")
input_img = jetson.utils.cudaFromNumpy(img)
print("\n debug 2\n")
detections = net.Detect(input_img, width, height)
print("detections : ", detections)


In my opinion, the problem is in this line:

input_img = jetson.utils.cudaFromNumpy(img)

It produces the error below:


Cuda Error in allocateContextResources: 700 (an illegal memory access was encountered)

If the "jetson.utils.cudaFromNumpy" call runs correctly,
the script should print the following result:

t3

I think the error occurred because the CUDA memory could not
be freed, due to an unknown problem.

How can I solve this problem?

Full error output of the code above:


debug 1

jetson.utils -- cudaFromNumpy() ndarray dim 0 = 480
jetson.utils -- cudaFromNumpy() ndarray dim 1 = 640
jetson.utils -- cudaFromNumpy() ndarray dim 2 = 3

debug 2

[TRT] …/rtSafe/cuda/caskConvolutionRunner.cpp (317) - Cuda Error in allocateContextResources: 700 (an illegal memory access was encountered)
[TRT] FAILED_EXECUTION: std::exception
[TRT] detectNet::Detect() -- failed to execute TensorRT context
Traceback (most recent call last):
File "test3.py", line 16, in
detections = net.Detect(input_img,width,height)
Exception: jetson.inference -- detectNet.Detect() encountered an error classifying the image
PyTensorNet_Dealloc()
[cuda] cudaFreeHost(mDetectionSets[0])
[cuda] an illegal memory access was encountered (error 700) (hex 0x2BC)
[cuda] /home/user/jetson-inference/c/detectNet.cpp:66
[cuda] cudaFreeHost(mClassColors[0])
[cuda] an illegal memory access was encountered (error 700) (hex 0x2BC)
[cuda] /home/user/jetson-inference/c/detectNet.cpp:74
gridAnchorPlugin.cpp, 162, 700
gridAnchorPlugin.cpp, 163, 700
gridAnchorPlugin.cpp, 162, 700
gridAnchorPlugin.cpp, 163, 700
gridAnchorPlugin.cpp, 162, 700
gridAnchorPlugin.cpp, 163, 700
gridAnchorPlugin.cpp, 162, 700
gridAnchorPlugin.cpp, 163, 700
gridAnchorPlugin.cpp, 162, 700
gridAnchorPlugin.cpp, 163, 700
gridAnchorPlugin.cpp, 162, 700
gridAnchorPlugin.cpp, 163, 700
gridAnchorPlugin.cpp, 166, 700
gridAnchorPlugin.cpp, 167, 700
gridAnchorPlugin.cpp, 168, 700
[TRT] …/rtExt/cuda/cudaFusedConvActRunner.cpp (90) - Cuda Error in destroyFilterTexture: 700 (an illegal memory access was encountered)
[TRT] INTERNAL_ERROR: std::exception
[TRT] …/rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 700 (an illegal memory access was encountered)
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
Aborted (core dumped)


I just solved it myself : )

An image obtained from OpenCV needs two conversion steps
before it can be used with a jetson-inference net.

1st

img = cv2.cvtColor(img_from_cam, cv2.COLOR_BGR2RGBA)

2nd

cuda_img = jetson.utils.cudaFromNumpy(img)
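
For the record, the first step's BGR-to-RGBA channel reordering can be illustrated with plain NumPy (a minimal sketch of what cv2.COLOR_BGR2RGBA does to the pixel layout; in a real script you should of course use cv2.cvtColor itself):

```python
import numpy as np

# A tiny 1x1 "image" in OpenCV's BGR channel order: blue=10, green=20, red=30.
bgr = np.array([[[10, 20, 30]]], dtype=np.uint8)

# Reverse the channel axis (B,G,R -> R,G,B), then append a fully opaque
# alpha channel, mimicking the layout cv2.COLOR_BGR2RGBA produces.
rgb = bgr[..., ::-1]
alpha = np.full(bgr.shape[:2] + (1,), 255, dtype=np.uint8)
rgba = np.concatenate([rgb, alpha], axis=-1)

print(rgba[0, 0].tolist())  # [30, 20, 10, 255]
```

Passing this 4-channel RGBA array to cudaFromNumpy gives the net the format it expects, instead of the raw 3-channel BGR frame that triggered the illegal memory access.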

Reference:

From the Nano thread, it looks like using cv2 is a workaround until jetson-inference supports video streams directly, not just files on disk.

More options might exist if cv2 were compiled with CUDA support, but I'm not sure.
