Can I use STL inside kernels?

Can I use STL containers inside kernels (of course in emulation mode)?
I tried using std::map, but there were strange errors:

1>c:/Users/HP/Documents/Visual Studio 2008/Projects/CUDA/CUDAFiles/ : error C2065: '_ZN9CUDATools20m_mapAllocatedMemoryE' : undeclared identifier
1>c:/Users/HP/Documents/Visual Studio 2008/Projects/CUDA/CUDAFiles/ : error C2228: left of '.__b_St5_TreeISt12_Tmap_traitsIPviSt4lessIS1_ESaISt4pairIKS1_iEELb0EEE' must have class/struct/union
1> type is ''unknown-type''

Is there some special configuration needed?
When I include headers that only contain declarations of STL containers, compilation works fine, but then I can't actually use the containers.

I don't believe so. If you have host-side code that uses a lot of STL container types, one way to use them with CUDA is via the Thrust library, which provides a lot of useful host-side C++ CUDA functionality in the style of the STL.

I don't think that library is for me. It is called from host code to run algorithms on the GPU. I want something else: to use STL containers inside kernel code…

Even if this were possible, it would not be efficient on current hardware. You have to live with simple arrays for now ;)

As I wrote, I only want STL in emulation mode, to help with debugging. I don't need speed, just convenience.

You can use any regular C++ libraries and code in emulation mode, but not when running on the GPU. The way around that is to use #ifdef guards.

The problem is that I can't! See my first post for the strange compilation errors I get.

Emulation mode is not terribly well supported, and tmurray has regularly stated that he wants to see it dropped (meanwhile, a number of us would like to see "emulation" mode become a proper multicore x86 backend). I've found that even simple code can confuse the emulator, so more advanced things could certainly cause more trouble.