We are currently trying to determine whether it is feasible to build a GPU-based endoscope. The sensing side is a 1080p 60 fps CMOS MIPI CSI-2 camera. We have to run a set of custom image processing applications for image enhancement and push the result to an HDMI interface for display. This whole chain of operations needs to complete in under one frame of latency, i.e., less than 16.6 ms. To get an idea of the time taken by the image processing algorithms, we ran an implementation written with the OpenCV C++ libraries on an Intel i7-4510 quad-core processor at 2 GHz running Ubuntu 14.04. The algorithm took around 300 milliseconds per image on average. Given below are the timing details of the different stages of the algorithm for one iteration.
Luminance time: 0.0195284 s
Log luminance time: 0.00767123 s
Bilateral filter time: 0.245126 s
Antilog / base image time: 0.0022362 s
Detail image time: 0.00572883 s
Enhancement time: 0.00536 s
Combined image time: 0.0041545 s
Coefficient computation time: 0.00538347 s
Output generation time: 0.0200575 s
Total time: 0.315247 s
We were looking at the Jetson TX1 for this application, mainly because it supports the camera and display interfaces we require. Before we go ahead, buy the eval board, and try this, I wanted to check whether it is indeed a feasible option.
I am a newbie with GPUs, and we usually go for an FPGA-based implementation in such cases. Looking at the capabilities of these newer GPUs, I was wondering why not use them instead.
Any help will be highly appreciated.