I have a simple question. I totally understand how the GPU processes the image, but what I don’t understand is what handles the pixel mapping. Meaning, what library or function handles the distribution of the pixels on the screen?
Pixels are simply data stored in GPU memory. Dedicated hardware reads out this data and converts it into video signals (digital these days, analog in the past) that are delivered to a display device.
Thank you for the reply! But what I’m looking for is deeper than this. For instance, if you open an image using Windows Photo Viewer or any app that uses the GPU, the image goes through multiple processes before it gets displayed on the screen. In my view, it first goes through the GDI API, which is Windows’ graphics API, then to the driver, then to the CUDA or OpenCL API, which gets translated and displayed on the screen. My question is: what part of the GPU is dedicated to assigning the pixels to the screen?
Please define “assign the pixels to the screen”. Are you referring to the window manager of a GUI?
I don’t believe your question is related to CUDA, so it is misplaced here.
CUDA is orthogonal to graphics, and knows nothing about pixels, much less their assignment.
GPUs had no trouble delivering pixels to the display long before CUDA was invented. It is not part of the display pipeline.
One of the better descriptions of a modern display pipeline is here:
The pixel, as an entity, makes its first appearance in step 12 described there.
Don’t be confused: the vertex processing and pixel processing being done there use the same hardware that CUDA uses, but it is not CUDA. It is part of the GPU driver, and there is no underlying “CUDA code” embedded in the graphics pipeline to process pixels.
THANK YOU!!! I was way off base!
txbob, I owe you big time!
So according to this :
the warp scheduler assigns the pixel coordinates on the screen, and it is explained further in this blog
My question is: does the warp scheduler follow a standard formation? Meaning, if I pass a 100x100 image, are the assigned coordinates limited to a specified number of patterns? Because if every pixel had to be considered individually, then efficiency-wise the GPU wouldn’t be efficient enough to handle every possible pixel when processing the image. So I’m thinking there are default patterns the GPU uses to process images, and maybe I’m wrong… please help