How to change PCIe speed

Hello,

I am trying to find out how to increase the maximum speed of the PCIe links on the Jetson Xavier.
They are limited to 2.5 GT/s, while the Xavier seems to be capable of up to 8 GT/s.
I am using JetPack 4.5 and I can't find where to change the settings to raise the PCIe bus speed.

0004:00:00.0 PCI bridge: NVIDIA Corporation Device 1ad1 (rev a1) (prog-if 00 [Normal decode])
		LnkCap:	Port #0, Speed 8GT/s, Width x1, ASPM not supported, Exit Latency L0s <1us, L1 <64us
		LnkSta:	Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-

Any help would be appreciated.

Regards

The Jetson Xavier is actually Gen-4 (16 GT/s) capable, and that is the default setting: when a Gen-4 capable device is connected, the link does come up at Gen-4 speed. Otherwise, the link speed is decided by whatever is connected to the root port. In this particular case, what is connected to the root port, and what speed is that device capable of? Could you please share the output of ‘sudo lspci -vv’?
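
In the meantime, a quick way to compare what a port advertises against what it actually negotiated is to grep the LnkCap and LnkSta lines (the address below is just the root port from your snippet, adjust it for whichever controller you are checking):

$ sudo lspci -vv -s 0004:00:00.0 | grep -E 'LnkCap:|LnkSta:'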

I have nothing connected to the board.
lspci.rtf (4.4 KB)

I just copied and pasted the PCIe root port description.
The only other device displayed is the network controller.

edit:
I was able to change the speed with this script: pcie_set_speed.sh.
But I would like to know whether it is possible to change it at a deeper level, so that I don't have to execute this script each time.
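
For reference, from what I understand the script essentially just uses setpci to write the target speed into the root port's Link Control 2 register and then sets the retrain bit in Link Control. Roughly something like this (a simplified sketch, not the full script; 0004:00:00.0 is the root port from my output above and 0x3 requests Gen3 / 8 GT/s):

$ sudo setpci -s 0004:00:00.0 CAP_EXP+30.W=0x3:0xF    # Link Control 2: target link speed
$ sudo setpci -s 0004:00:00.0 CAP_EXP+10.W=0x20:0x20  # Link Control: retrain link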

You don’t have to execute any script. With nothing connected, there’s nothing to negotiate with so the link speed stays at 2.5GT/s. If you attach a device capable of 8GT/s, you’ll see the speed adjust accordingly.

Here’s a snippet of an NVMe device that’s running at 8GT/s x4…

0005:01:00.0 Non-Volatile memory controller: Micron/Crucial Technology Device 540a (rev 01) (prog-if 02 [NVM Express])
        Subsystem: Micron/Crucial Technology Device 540a
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 35
        IOMMU group: 61
        Region 0: Memory at 1f40000000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [80] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
                DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
                        RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 256 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
                LnkCap: Port #1, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 unlimited
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 8GT/s (ok), Width x4 (ok)
                        TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

And here’s the bridge it’s connected to…

0005:00:00.0 PCI bridge: NVIDIA Corporation Device 1ad0 (rev a1) (prog-if 00 [Normal decode])
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 35
        IOMMU group: 60
        Bus: primary=00, secondary=01, subordinate=ff, sec-latency=0
        I/O behind bridge: 0000f000-00000fff [disabled]
        Memory behind bridge: 40000000-400fffff [size=1M]
        Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
        Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
        BridgeCtl: Parity- SERR- NoISA- VGA- VGA16- MAbort- >Reset- FastB2B-
                PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [70] Express (v2) Root Port (Slot-), MSI 00
                DevCap: MaxPayload 256 bytes, PhantFunc 0
                        ExtTag- RBE+
                DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 256 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
                LnkCap: Port #0, Speed 16GT/s, Width x8, ASPM not supported
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt+ AutBWInt-
                LnkSta: Speed 8GT/s (downgraded), Width x4 (downgraded)
                        TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt+

The bus is shared with the GPU!
And unlocking its speed allows a cryptomining process that relies on it to run.
I tried it: execute the pcie_set_speed script, reboot, and the mining process is able to access the 4 GB it needs to run.
Don't do it, and the process is not able to do anything.

What GPU and what bus are you talking about? The onboard GPU doesn't use PCIe. Do you have an external GPU, and are you trying to run the cryptomining process on that? If so, what does the lspci output look like for that?

I'm talking about the onboard GPU.
When I start the miner, it checks the memory of the device it identifies by its PCIe ID.
If I change nothing, on a Xavier NX, I can't mine anything because I get this message:

cuda-0   Using Pci Id : 00:00.0 Xavier (Compute 7.2) Memory : 2.5 GB

The process says it requires at least 4.2 GB to be able to generate the DAG.

If I change the PCIe speed and run the mining process, I get the following message:

cuda-0   Using Pci Id : 00:00.0 Xavier (Compute 7.2) Memory : 6.19 GB

and the mining process succeeds this time because it has enough memory to generate the DAG.

So, one way or another, there is a link between the PCIe speed and being able to run the mining process on this card.

My question is: how can I configure the PCIe link speed to the maximum without calling a script, whatever the purpose?

Now I understand, sorry. The miner just assumes that the PCIe bus is being used to access the GPU and fails.
I haven't tried this, but there's a device tree entry named “nvidia,init-speed” which you could try adding to the PCIe controller nodes with a device tree overlay. Something like this might work…

	pcie@14160000 {
		nvidia,init-speed = <3>;
	};
	pcie@141a0000 {
		nvidia,init-speed = <4>;
	};
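
I haven't tried it myself, but one way to get this in is to decompile the dtb the board boots, add the property under the pcie@… nodes, recompile it, and point the FDT entry in /boot/extlinux/extlinux.conf at the new file. Roughly (the dtb file name is just a placeholder, use whichever one your board actually boots):

$ dtc -I dtb -O dts -o custom.dts /boot/dtb/kernel_tegra194-xxx.dtb
$ # edit custom.dts and add nvidia,init-speed to the pcie@… nodes
$ dtc -I dts -O dtb -o custom.dtb custom.dts
$ # then add "FDT /boot/custom.dtb" to /boot/extlinux/extlinux.conf and reboot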

@gtj This is exactly what I was looking to change. ^^
I saw it in a thread about the TX2, but I don't understand where this can be changed.

edit:
Should I change it with a custom rule under /etc/udev/rules.d/?

The method I outlined involves creating a new dtb that’s loaded with the kernel at boot time. You said you didn’t want to have to run a script every time but did you mean run it manually? The easiest way to accomplish this task is to just run that pcie_set_speed.sh script automatically at startup. You can do that easily with a systemd service…

Save the following to /etc/systemd/system/pcie_set_speed.service

[Unit]
Description=Set PCIe Speed

[Service]
Type=oneshot
ExecStart=/root/pcie_set_speed.sh

[Install]
WantedBy=sysinit.target

Then copy the pcie_set_speed.sh script to /root/ and make sure it’s executable.
Now run

$ sudo systemctl daemon-reload
$ sudo systemctl enable pcie_set_speed
$ sudo systemctl start pcie_set_speed

Now whenever the device boots the script will run automatically.
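
After a reboot, you can verify that it took effect with:

$ systemctl status pcie_set_speed
$ sudo lspci -vv | grep -i 'lnksta:'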

I guess this is the best way after all.
Thank you for your time!

For information, when we ask the miner to display the devices usable on the Jetson, we get this result:

 Id Pci Id    Type Name                          CUDA SM   Total Memory 
--- --------- ---- ----------------------------- ---- ---  ------------ 
  0 00:00.0   Gpu  Xavier                        Yes  7.2       6.59 GB 

We are investigating why we don't reach the theoretical hash rate of 20 MH/s that we should. In practice the hash rate is around 150 KH/s.
I don't know, but maybe the problem is linked to the PCIe ID the miner associates with the GPU, even though the GPU is not actually connected over PCIe.

Any idea is welcome ^^

@daveau1 Were there any solutions to the hash rate deficit from the theoretical value? Also, where did you get the theoretical rate of 20 MH/s?

I'm getting an error when running the script:

setpci: 0000:07:00.0: Instance #0 of Capability 0010 not found - there are no capabilities with that id.
pcie_set_speed.sh: 21: arithmetic expression: expecting primary: “(“0x” & 0xF0) >> 4”

Any ideas on how to fix this? The PCIe device is an NVMe drive; probably CAP_EXP+02.W is not valid for it and should be changed…
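
I guess a first check would be whether the device exposes the PCI Express capability that the script's CAP_EXP reads rely on, and whether that read works on its own (0000:07:00.0 is my drive; the arithmetic error looks like it may just be a consequence of the failed read leaving the value empty):

$ sudo lspci -s 0000:07:00.0 -vv | grep -i 'express'
$ sudo setpci -s 0000:07:00.0 CAP_EXP+02.W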

Hi mike.voronkov,

Please open a new topic if this is still an issue that needs support.

Thanks