Unable to use an A100 together with an RTX 4090

Hello!
I’m having a setup consisting of RTX 4090 as device 0 and A100 as device 1. The driver is 572.60. Some different drivers also were tried with the same result.
In Device Manager the A100 is shown as “This device is working properly.”
In my C++ software I set the device to 1 and try to build a TensorRT engine from an .onnx file. The problem occurs with any .onnx model, so there is no point in attaching a specific one.
I then get the following error:
class Logger!ERROR!:Logger::log:[virtualMemoryBuffer.cpp::nvinfer1::StdVirtualMemoryBufferImpl::resizePhysical::168] Error Code 1: Cuda Driver (invalid device ordinal)
This happens at the stage where the nvinfer1::IBuilder* builder calls
builder->buildSerializedNetwork(*network, *config);
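For reference, here is a minimal sketch of what my code does at that point. This is illustrative, not my full code: error handling is omitted, the function name buildEngine is made up, and the network-creation flag differs between TensorRT 8 and 10.

```cpp
#include <fstream>
#include <memory>
#include <cuda_runtime_api.h>
#include <NvInfer.h>
#include <NvOnnxParser.h>

// Illustrative sketch: select the A100 and build a serialized
// TensorRT engine from an ONNX file.
void buildEngine(nvinfer1::ILogger& logger, const char* onnxPath)
{
    cudaSetDevice(1);  // the A100 is device 1 on this machine

    std::unique_ptr<nvinfer1::IBuilder> builder{
        nvinfer1::createInferBuilder(logger)};

    // Network-creation flags: TRT 8.x needs the explicit-batch flag,
    // TRT 10 uses explicit batch by default; 0 shown for brevity.
    std::unique_ptr<nvinfer1::INetworkDefinition> network{
        builder->createNetworkV2(0)};

    std::unique_ptr<nvonnxparser::IParser> parser{
        nvonnxparser::createParser(*network, logger)};
    parser->parseFromFile(onnxPath,
        static_cast<int>(nvinfer1::ILogger::Severity::kWARNING));

    std::unique_ptr<nvinfer1::IBuilderConfig> config{
        builder->createBuilderConfig()};

    // This is the call that fails with "invalid device ordinal":
    std::unique_ptr<nvinfer1::IHostMemory> serialized{
        builder->buildSerializedNetwork(*network, *config)};

    // Write the serialized engine to disk.
    std::ofstream out("engine.trt", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()),
              static_cast<std::streamsize>(serialized->size()));
}
```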
If I instead install the driver specific to the Tesla A100, creating the TensorRT engine works fine, but then the 4090 is marked in Device Manager as not working properly.
CUDA versions tried: 11.8 and 12.2. TensorRT versions tried: 8.5.2.2 and 10.5.0.18.
Could you tell me what the problem might be?
Thank you!