sharing of operating system resources

Hi all,
Greetings to all of you… :)

I'm very new to CUDA and have some questions I need help with…

  1. What about operating system resources like sockets and file descriptors? Can a socket be shared between an operating system process (host code) and device code?
    Basically, I want the operating system process to receive incoming requests and the CUDA code to send packets, both using the same socket. Would this be possible?
  2. Are kernel calls (func<<<grid, block>>>) blocking or non-blocking?
  3. What about other calls like cudaMemcpy, etc.? Are they blocking or non-blocking?

Any help would be highly appreciated…

Warm Regards

  1. The GPU has its own independent address space, and the only I/O that can be done (and mostly only from host code) is copying across the PCI-e bus to and from host CPU memory. No operating system API calls can be used in device code, so sockets and file descriptors cannot be shared with the device.
  2. Non-blocking. A kernel launch queues the work on the device and returns control to the host immediately.
  3. Blocking by default, but there are two forms of non-blocking copy functionality available for asynchronous memory transfers on hardware that supports it.
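To illustrate point 1: all socket work has to stay in host code, and the device only ever sees data that the host has copied over. Here is a minimal sketch of that pattern; the port number, the toy XOR kernel, and the UDP echo shape are all made-up examples, and error checking is omitted for brevity:

```cuda
#include <cstdio>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <cuda_runtime.h>

// Toy kernel standing in for whatever per-byte processing you need.
__global__ void transform(char *buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] ^= 0x20;
}

int main() {
    // Socket I/O happens entirely in host code; the device
    // never sees the file descriptor.
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);       // hypothetical port
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(fd, (sockaddr *)&addr, sizeof(addr));

    char h_buf[1024];
    sockaddr_in peer{};
    socklen_t plen = sizeof(peer);
    ssize_t n = recvfrom(fd, h_buf, sizeof(h_buf), 0,
                         (sockaddr *)&peer, &plen);

    // Stage the payload through device memory, process it there,
    // and copy the result back to the host...
    char *d_buf;
    cudaMalloc(&d_buf, n);
    cudaMemcpy(d_buf, h_buf, n, cudaMemcpyHostToDevice);
    transform<<<((int)n + 255) / 256, 256>>>(d_buf, (int)n);
    cudaMemcpy(h_buf, d_buf, n, cudaMemcpyDeviceToHost);

    // ...then reply on the same socket, again from the host.
    sendto(fd, h_buf, n, 0, (sockaddr *)&peer, plen);

    cudaFree(d_buf);
    close(fd);
    return 0;
}
```

So the "same socket" part of your question is achievable, but both the receive and the send must be issued by the host process; the device code only transforms the buffer in between.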
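Points 2 and 3 can be sketched together. In the example below (assumptions: a device that supports async copies, and a made-up scale kernel), the cudaMemcpyAsync calls and the kernel launch all return to the host immediately, and the host only blocks at the explicit synchronization point:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Pinned (page-locked) host memory is required for the copy
    // to be truly asynchronous with respect to the host.
    float *h_data;
    cudaMallocHost(&h_data, bytes);
    for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

    float *d_data;
    cudaMalloc(&d_data, bytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // All three calls below just enqueue work on the stream and
    // return to the host immediately (non-blocking).
    cudaMemcpyAsync(d_data, h_data, bytes,
                    cudaMemcpyHostToDevice, stream);
    scale<<<(n + 255) / 256, 256, 0, stream>>>(d_data, 2.0f, n);
    cudaMemcpyAsync(h_data, d_data, bytes,
                    cudaMemcpyDeviceToHost, stream);

    // The host blocks here until the queued work has finished.
    cudaStreamSynchronize(stream);
    printf("h_data[0] = %f\n", h_data[0]);

    cudaFree(d_data);
    cudaFreeHost(h_data);
    cudaStreamDestroy(stream);
    return 0;
}
```

By contrast, a plain cudaMemcpy on the same buffers would block the host until the transfer completed, which is the default behaviour described in answer 3.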