cudaDeviceEnablePeerAccess : buffer object could not be mapped

I am trying to run the simpleP2P code sample from the NVIDIA CUDA samples site, and I am running into the following runtime error:

simpleP2P.cu(154) : cudaSafeCall() Runtime API error 14: mapping of buffer object failed

The error points to the following line, which enables peer access to GPU #1:

cutilSafeCall(cudaDeviceEnablePeerAccess(gpuid_tesla[1], gpuid_tesla[0]));
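For reference, this is the kind of guarded call I would expect to work (just a minimal standalone sketch with placeholder device IDs 0 and 1, not the sample code itself): it first asks the runtime whether the two GPUs can actually reach each other before enabling access, and prints the error string from the enable call.

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int dev0 = 0, dev1 = 1;        // placeholder device IDs
    int can01 = 0, can10 = 0;

    // Ask the runtime whether each GPU can address the other's memory.
    cudaDeviceCanAccessPeer(&can01, dev0, dev1);
    cudaDeviceCanAccessPeer(&can10, dev1, dev0);
    printf("P2P %d->%d: %d, %d->%d: %d\n", dev0, dev1, can01, dev1, dev0, can10);

    if (can01) {
        cudaSetDevice(dev0);                                    // enable from dev0's context
        cudaError_t err = cudaDeviceEnablePeerAccess(dev1, 0);  // second argument is flags, must be 0
        printf("enable %d->%d: %s\n", dev0, dev1, cudaGetErrorString(err));
    }
    return 0;
}

Runtime API error 14 corresponds to cudaErrorMapBufferObjectFailed, which matches the "mapping of buffer object failed" message above.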

Below is the output of deviceQuery. The same program works fine on another machine with a similar configuration; the only difference I can find is the driver version (machine where it fails: 275.09.04, machine where it works: 270.35). A small version-check sketch follows the deviceQuery output.

Device 1: "Tesla M2050"

  CUDA Driver Version / Runtime Version          4.0 / 4.0

  CUDA Capability Major/Minor version number:    2.0

  Total amount of global memory:                 2687 MBytes (2817982464 bytes)

  (14) Multiprocessors x (32) CUDA Cores/MP:     448 CUDA Cores

  GPU Clock Speed:                               1.15 GHz

  Memory Clock rate:                             1546.00 Mhz

  Memory Bus Width:                              384-bit

  L2 Cache Size:                                 786432 bytes

  Max Texture Dimension Size (x,y,z)             1D=(65536), 2D=(65536,65535), 3D=(2048,2048,2048)

  Max Layered Texture Size (dim) x layers        1D=(16384) x 2048, 2D=(16384,16384) x 2048

  Total amount of constant memory:               65536 bytes

  Total amount of shared memory per block:       49152 bytes

  Total number of registers available per block: 32768

  Warp size:                                     32

  Maximum number of threads per block:           1024

  Maximum sizes of each dimension of a block:    1024 x 1024 x 64

  Maximum sizes of each dimension of a grid:     65535 x 65535 x 65535

  Maximum memory pitch:                          2147483647 bytes

  Texture alignment:                             512 bytes

  Concurrent copy and execution:                 Yes with 2 copy engine(s)

  Run time limit on kernels:                     No

  Integrated GPU sharing Host Memory:            No

  Support host page-locked memory mapping:       Yes

  Concurrent kernel execution:                   Yes

  Alignment requirement for Surfaces:            Yes

  Device has ECC support enabled:                Yes

  Device is using TCC driver mode:               No

  Device supports Unified Addressing (UVA):      Yes

  Device PCI Bus ID / PCI location ID:           6 / 0

  Compute Mode:

     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
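Since the only difference between the two machines seems to be the driver, a quick sanity check (again just a sketch, not part of simpleP2P) is to print the driver and runtime versions the process actually sees:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int driverVersion = 0, runtimeVersion = 0;

    // Both calls report the version encoded as 1000*major + 10*minor (e.g. 4000 for CUDA 4.0).
    cudaDriverGetVersion(&driverVersion);
    cudaRuntimeGetVersion(&runtimeVersion);

    printf("Driver API version:  %d.%d\n", driverVersion / 1000, (driverVersion % 100) / 10);
    printf("Runtime API version: %d.%d\n", runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    return 0;
}

deviceQuery on the failing machine already reports 4.0 / 4.0, so this would only be to confirm that the failing process is picking up the same runtime.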

What could be the reason?