I’m doing feature extraction on 640x480 images with OpenCV CUDA on a Jetson TX2.
It takes ~600 MB of memory as soon as I initialize GPU usage with this code:
cv::Mat cpu_img = cv::Mat::zeros(480, 640, CV_8UC1); // rows, cols for a 640x480 image
cv::cuda::GpuMat gpu_img;
gpu_img.upload(cpu_img);
Moreover, running the GPU code above in two threads takes ~600 MB x 2 of memory.
I don’t think this task needs so much memory.
So, my question is:
- How can I limit GPU memory usage to a value that is just enough for the task?
- Can multiple GPU threads share the same memory?
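
For reference, here is a minimal sketch of what I mean by "two GPU threads" (my real code does feature extraction after the upload; the thread layout and function names here are just illustrative, and both threads run inside the same process):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>
#include <thread>

// Each worker uploads one 640x480 grayscale frame to the GPU.
// (Feature extraction would follow the upload in the real code.)
void gpu_worker() {
    cv::Mat cpu_img = cv::Mat::zeros(480, 640, CV_8UC1); // rows, cols
    cv::cuda::GpuMat gpu_img;
    gpu_img.upload(cpu_img); // first CUDA call in a process initializes the CUDA context
    // ... feature extraction on gpu_img ...
}

int main() {
    std::thread t1(gpu_worker);
    std::thread t2(gpu_worker);
    t1.join();
    t2.join();
    return 0;
}
```

In this same-process layout I would expect both threads to share one CUDA context, so I'm surprised to see the memory footprint double.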
Thank you!