NVCUVENC Encoding from OpenGL FBO

I am pretty new to CUDA in general, and I found the NVCUVENC library, which allows H.264 video encoding on the GPU. From the documentation I understood that it should be possible to use OpenGL textures as the source data for the encoder. Trying to see how it is done, I found only an example of video decoding that sends the result to OpenGL as a texture for rendering. I need the opposite: grab the frame from a texture attachment of an FBO and pass it to the encoder to produce the video. I have a couple of fundamental questions:
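For getting the FBO's texture attachment into CUDA without a round trip through the CPU, the usual route is the CUDA/OpenGL interop API. The following is only a rough sketch of that path (texture id, dimensions, and the destination buffer are assumptions, and a valid GL context plus error checking are omitted):

```c
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

/* Hypothetical helper: copy an RGBA8 GL texture (e.g. an FBO color
   attachment) into a device buffer that a kernel or encoder can read. */
void copy_gl_texture_to_device(GLuint texId, int width, int height,
                               void *devDst, size_t dstPitch)
{
    cudaGraphicsResource_t res;
    cudaArray_t arr;

    /* Register the GL texture once (ideally cached, not per frame). */
    cudaGraphicsGLRegisterImage(&res, texId, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsReadOnly);

    /* Map it for CUDA access and fetch the underlying array. */
    cudaGraphicsMapResources(1, &res, 0);
    cudaGraphicsSubResourceGetMappedArray(&arr, res, 0, 0);

    /* Copy the RGBA pixels into linear device memory. */
    cudaMemcpy2DFromArray(devDst, dstPitch, arr, 0, 0,
                          (size_t)width * 4, (size_t)height,
                          cudaMemcpyDeviceToDevice);

    cudaGraphicsUnmapResources(1, &res, 0);
    cudaGraphicsUnregisterResource(res);
}
```

From that linear RGBA buffer, a small kernel (or CPU code, at a performance cost) can then do the color conversion the encoder expects.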
I read in the docs: "The inputs are YUV frames, and outputs are generated NAL packets." Does that mean I have to convert each frame I get from OpenGL from RGB to YUV?
Also, I inspected the NVCUVENC C API video encoding demo and found no .cu source file. Where is it? Shouldn't there be a .cu file somewhere in the project to compile kernels, etc.?

Anyone ?