Did anyone find a solution to this? I'm trying hard to make things work as described, but I haven't been able to yet.
This seems to work: jetson.utils.cudaToNumpy not working as expected · Issue #356 · dusty-nv/jetson-inference · GitHub
No, it doesn't. It still throws the error.
Not related to PyCapsule, but with respect to the initial problem, you may try this patch. You would crop your region of interest with:
nvarguscamerasrc ! nvvidconv top=ROIymin bottom=ROIymax left=ROIxmin right=ROIxmax ! video/x-raw, format=BGRx, width=640, height=480
Any update on this issue?
I have a buffer of type PyCapsule. I tried to read it using:
jetson.utils.cudaToNumpy(buffer)
but got this error:
Exception: jetson.utils – cudaToNumpy() failed to get input CUDA pointer from first arg (should be cudaImage or cudaMemory)
How can I read this buffer?
It's been a while since I last looked at this topic, but if I remember correctly, you can access each property of the PyCapsule object with something like myimage = myCapsule.image. Check: jetson-inference/detectnet.py at master · dusty-nv/jetson-inference · GitHub
Earlier this year, the code was updated so that you can access cudaImage capsule objects directly from Python or via numpy:
https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-image.md#image-capsules-in-python
The capsule object needs to have been allocated by jetson.inference/jetson.utils. Otherwise, you can convert a numpy array into one of these GPU capsules using jetson.utils.cudaFromNumpy()
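To illustrate the point above, here is a minimal sketch of the numpy round trip. It assumes you can first get your externally allocated buffer into a numpy array (the shape, dtype, and 640x480 RGBA layout below are just placeholder assumptions); the jetson.utils calls themselves only run on a Jetson with jetson-inference installed, so the import is guarded.

```python
import numpy as np

# Placeholder frame standing in for data copied out of an external buffer.
# cudaToNumpy()/cudaFromNumpy() only accept capsules allocated by
# jetson.utils/jetson.inference, so a foreign PyCapsule must first be
# materialized as a numpy array, then re-wrapped with cudaFromNumpy().
frame = np.zeros((480, 640, 4), dtype=np.float32)  # assumed RGBA float32 layout

try:
    import jetson.utils  # only available on a Jetson with jetson-inference

    cuda_img = jetson.utils.cudaFromNumpy(frame)   # host array -> cudaImage capsule
    back = jetson.utils.cudaToNumpy(cuda_img)      # cudaImage capsule -> numpy array
    print(back.shape)                              # same shape as the source array
except ImportError:
    print("jetson.utils not available on this machine")
```

Once you have a cudaImage capsule created this way, it can be passed to the detectnet/imagenet APIs directly, as described in the aux-image.md doc linked above.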