Deadlock in 319.32, linux kernel 3.9.2, NVML and render threads

Hi,
I have a deadlocked machine at the moment.
An application is doing simultaneous NVML monitoring and direct graphical rendering.
Both the NVML thread and the rendering thread are in state ‘D’, i.e. uninterruptible sleep.
I logged in over ssh, issued the sysrq-w command, grabbed the message log, and resolved the kernel stack traces of the two threads against the symbols in the kernel map file plus the symbols obtained by running objdump -t on the nvidia.ko object.

Firstly, the NVML thread:
./ktrace 2057
Level 0 kernel : gup_huge_pud + 0x81
Level 1 kernel : __getnstimeofday + 0x32
Level 2 kernel : schedule + 0x1f
Level 3 kernel : schedule_timeout + 0xf5
Level 4 kernel : getnstimeofday + 0xb
Level 5 kernel : getnstimeofday + 0xb
Level 6 kernel : do_gettimeofday + 0x11
Level 7 kernel : __down + 0x54
Level 8 kernel : down + 0x3c
Level 9 nvidia : os_acquire_mutex + 0x38
Level 10 nvidia : _nv012273rm + 0x1c
Level 11 nvidia : _nv013826rm + 0x5f
Level 12 nvidia : _nv000693rm + 0x7a
Level 13 nvidia : _nv000778rm + 0x2e4
Level 14 nvidia : rm_ioctl + 0x4e
Level 15 kernel : _copy_from_user + 0x37
Level 16 nvidia : nv_kern_ioctl + 0x101
Level 17 nvidia : nv_kern_unlocked_ioctl + 0x0
Level 18 nvidia : nv_kern_unlocked_ioctl + 0x1b
Level 19 kernel : vfs_ioctl + 0x31
Level 20 kernel : do_vfs_ioctl + 0x8a
Level 21 kernel : irq_exit + 0x45
Level 22 kernel : smp_apic_timer_interrupt + 0x5b
Level 23 kernel : apic_timer_interrupt + 0x2d
Level 24 kernel : fget_light + 0x85
Level 25 kernel : sys_ioctl + 0x3c
Level 26 kernel : sysenter_do_call + 0x12
Level 27 kernel : detect_ht + 0xe0

And now the render thread:
./ktrace 2073
Level 0 kernel : ttwu_do_activate + 0x3f
Level 1 kernel : unix_write_space + 0x62
Level 2 kernel : unix_destruct_scm + 0x84
Level 3 kernel : skb_free_head + 0x42
Level 4 kernel : skb_release_data + 0x61
Level 5 kernel : schedule + 0x1f
Level 6 kernel : schedule_timeout + 0xf5
Level 7 kernel : __wake_up_sync_key + 0x47
Level 8 kernel : __getnstimeofday + 0x32
Level 9 kernel : __down + 0x54
Level 10 kernel : down + 0x3c
Level 11 nvidia : os_acquire_mutex + 0x38
Level 12 nvidia : _nv012273rm + 0x1c
Level 13 nvidia : _nv013826rm + 0x5f
Level 14 nvidia : _nv013933rm + 0x8
Level 15 nvidia : _nv000778rm + 0x740
Level 16 nvidia : rm_ioctl + 0x4e
Level 17 kernel : _copy_from_user + 0x37
Level 18 nvidia : nv_kern_ioctl + 0x101
Level 19 nvidia : nv_kern_unlocked_ioctl + 0x0
Level 20 nvidia : nv_kern_unlocked_ioctl + 0x1b
Level 21 kernel : vfs_ioctl + 0x31
Level 22 kernel : do_vfs_ioctl + 0x8a
Level 23 kernel : timespec_add_safe + 0x33
Level 24 kernel : poll_select_set_timeout + 0x73
Level 25 kernel : fget_light + 0x85
Level 26 kernel : sys_ioctl + 0x3c
Level 27 kernel : sysenter_do_call + 0x12
Level 28 kernel : handle_irq + 0x32
Level 29 kernel : detect_ht + 0xe0

In the outputs above, the level number is the stack call depth; the second field is either “kernel” or “nvidia”, indicating whether the symbol comes from the Linux kernel or is an offset within the nvidia module. After the ‘:’ is a symbol name followed by an offset, generally indicating the return address from a function call.

Can this problem be avoided by making graphical and NVML calls mutually exclusive at the userland level?
The EDID i2c channel is also being read every 2 seconds or so, which may be having an impact.

NVIDIA driver version: 319.32
Linux kernel: 3.9.2
NVIDIA GPU: 430M
CPU: Intel Celeron T3100 (dual-core)