Chat with RTX - CUDA runtime error in cudaDeviceGetDefaultMemPool

Anyone found a solution to this?

I’m running on an L40 24Q vGPU profile.

RuntimeError: [TensorRT-LLM][ERROR] CUDA runtime error in cudaDeviceGetDefaultMemPool(&memPool, device): operation not supported (C:\Users\tejaswinp\workspace\tekit\cpp\tensorrt_llm\runtime\bufferManager.cpp:171)
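The "operation not supported" result from cudaDeviceGetDefaultMemPool typically means the device/driver combination does not expose stream-ordered memory pools (the cudaMallocAsync path), which is a common limitation on vGPU profiles. A minimal sketch to confirm this inside the VM, assuming a CUDA 11.2+ toolkit is installed (the file name check_mempool.cu is just for illustration):

```cpp
// check_mempool.cu - query whether the device supports memory pools.
// Build with: nvcc check_mempool.cu -o check_mempool
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int device = 0;
    cudaError_t err = cudaGetDevice(&device);
    if (err != cudaSuccess) {
        std::printf("cudaGetDevice failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // cudaDevAttrMemoryPoolsSupported reports whether the device/driver
    // supports stream-ordered memory pools. If this is 0, calls such as
    // cudaDeviceGetDefaultMemPool will fail with "operation not supported".
    int poolsSupported = 0;
    err = cudaDeviceGetAttribute(&poolsSupported,
                                 cudaDevAttrMemoryPoolsSupported, device);
    if (err != cudaSuccess) {
        std::printf("cudaDeviceGetAttribute failed: %s\n",
                    cudaGetErrorString(err));
        return 1;
    }
    std::printf("Memory pools supported on device %d: %s\n",
                device, poolsSupported ? "yes" : "no");
    return 0;
}
```

If this prints "no", the error originates from the vGPU configuration rather than from TensorRT-LLM itself.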

Same problem here.

Running an L40S on VMware.

This looks like the same issue, and it comes with a possible solution:
github case

What should the correct parameters be?