https://docs.nvidia.com/confidential-computing-deployment-guide.pdf shows an example of setting the CC mode to devtools.
Can I use CC mode = on?
Yes. You can set CC mode = on to operate in full encryption mode instead of devtools mode; devtools is the mode that opens up access to the performance counters required by the profilers.
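For reference, here is a minimal sketch of how the mode switch is typically driven from the host, modeled on the devtools example in the deployment guide but with the mode set to on. The script name (gpu_cc_tool.py from NVIDIA's nvtrust repository), its location, the GPU BDF, and the exact flag names are assumptions; check them against the version of the guide and tooling you are using.

# Minimal sketch (assumptions noted above): query a GPU's current CC mode and
# switch it to "on" by invoking the nvtrust host admin tool from Python.
import subprocess

GPU_BDF = "45:00.0"  # hypothetical PCI bus/device/function; find yours with lspci

def cc_tool(*flags: str) -> str:
    """Run the admin tool with the given flags and return its stdout."""
    cmd = ["python3", "./gpu_cc_tool.py", f"--gpu-bdf={GPU_BDF}", *flags]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(cc_tool("--query-cc-mode"))                                   # report current mode
print(cc_tool("--set-cc-mode=on", "--reset-after-cc-mode-switch"))  # enable full CC mode

The reset flag is used because the new mode takes effect only after a GPU reset, so load the guest driver (or restart the VM) only after the switch completes.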
I am testing confidential computing on an H800 PCIe. The VBIOS version is 96.00.61.00.0B.
I set the CC mode to “On”, but the driver is not working properly. The output:
[ 4.597902] NVRM nvAssertOkFailedNoLog: Assertion failed: Invalid data passed [NV_ERR_INVALID_DATA] (0x00000025) returned from pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MEMMGR_MEMORY_TRANSFER_WITH_GSP, &gspParams, sizeof(gspParams)) @ mem_utils.c:247
[ 4.597988] NVRM nvAssertOkFailedNoLog: Assertion failed: Invalid data passed [NV_ERR_INVALID_DATA] (0x00000025) returned from _memmgrMemReadOrWriteWithGsp(pGpu, pDstInfo, pBuf, size, NV_FALSE ) @ mem_utils.c:711
[ 4.597991] NVRM nvAssertFailedNoLog: Assertion failed: status == NV_OK @ mem_mgr.c:445
[ 4.597997] NVRM nvAssertOkFailedNoLog: Assertion failed: Invalid data passed [NV_ERR_INVALID_DATA] (0x00000025) returned from memmgrVerifyGspDmaOps(pGpu, GPU_GET_MEMORY_MANAGER(pGpu)) @ kern_bus_gm107.c:355
[ 4.598001] NVRM RmInitNvDevice: *** Cannot initialize the device
[ 4.605956] NVRM RmInitAdapter: RmInitNvDevice failed, bailing out of RmInitAdapter
[ 4.606757] NVOC: __nvoc_objDelete: Child class Spdm not freed from parent class ConfidentialCompute.NVRM rmapiReportLeakedDevices: Device object leak: (0xc1e00002, 0xcaf00000). Please file a bug against RM-core.
[ 4.609665] NVRM nvAssertFailedNoLog: Assertion failed: 0 @ rmapi.c:760
[ 4.610350] NVRM nvAssertFailedNoLog: Assertion failed: GPU_GET_KERNEL_GSP(pGpu) != NULL @ gpu.c:4885
[ 4.611288] NVRM nvAssertFailedNoLog: Assertion failed: pRpc != NULL @ subdevice.c:215
[ 4.612106] NVRM nvAssertFailedNoLog: Assertion failed: GPU_GET_KERNEL_GSP(pGpu) != NULL @ gpu.c:4885
[ 4.613048] NVRM nvAssertFailedNoLog: Assertion failed: pRpc != NULL @ device.c:233
[ 4.613865] NVOC: __nvoc_objDelete: Child class PrereqTracker not freed from parent class OBJGPU.NVRM: GPU 0000:00:05.0: RmInitAdapter failed! (0x24:0x25:909)
[ 4.616325] NVRM: GPU 0000:00:05.0: rm_init_adapter failed, device minor number 0
[ 4.637577] NVRM gpumgrCheckRmFirmwarePolicy: Disabling GSP offload -- GPU not supported
[ 4.638447] NVRM osInitNvMapping: *** Cannot attach gpu
[ 4.638966] NVRM RmInitAdapter: osInitNvMapping failed, bailing out of RmInitAdapter
[ 4.639743] NVRM: GPU 0000:00:05.0: RmInitAdapter failed! (0x22:0x56:631)
[ 4.641427] NVRM: GPU 0000:00:05.0: rm_init_adapter failed, device minor number 0
[ 4.715416] nvidia-uvm: Loaded the UVM driver, major device number 240.
[ 4.717887] nv-hostengine (910) used greatest stack depth: 10848 bytes left
[ 5.102597] NVRM gpumgrCheckRmFirmwarePolicy: Disabling GSP offload -- GPU not supported
[ 5.103575] NVRM osInitNvMapping: *** Cannot attach gpu
How do I fix this problem?