Does the Nvidia Xavier support NVME GEN4 SSD?

I’m also looking into this.
I’m not sure it would work with the devkit carrier board, judging from ‘Table 2-14. M.2 Key M Maximum Trace Delays – PCIe up to Gen3’. Someone from NVIDIA may be able to tell more about this.

However, be aware that a Gen4 SSD may generate non-negligible heat, and it could require a dedicated thermal solution.

If the M.2 Key M port of the carrier board cannot run Gen4, or if the thermal impact cannot be managed by the current thermal design, you may have to use a Gen4 adapter for the PCIe connector. So far I’ve only seen this one from Gigabyte, but I’m unsure it would work on Xavier; it may have dependencies on an AORUS motherboard, BIOS, Windows, or an x86/AMD64 processor… I’ve contacted Gigabyte support about this and am now waiting for their answer.

In short, I’d think this would be a risky investment for now…

[EDIT: Also saw this one, which looks simpler and cheaper]

The Xavier NX’s PCIe x4 controller is used for NVMe on the devkit, and that controller supports up to PCIe Gen4. However, I personally only have a PCIe Gen3 NVMe module - I will have to check with the hardware team regarding the trace delays on the devkit that HP mentioned above.


Following up on this - in the JetPack 4.4 DP release, the PCIe x4 controller used for NVMe currently runs at Gen3, but in the 4.4 GA release we plan to enable Gen4.

As @Honey_Patouceul alluded to, the Gen4 NVMe sticks we’ve found so far are double-sided, which would make for an awkward fit mechanically. So if you find a single-sided one, it would be preferred.

Thanks, I was thinking of slapping on a Samsung 980 Pro; good to know it will be supported in the upcoming JetPack. Now if only the kernel supported booting from it directly. Cheers!


Hi @ArtofWar, we do plan to add support for booting from NVMe in a future JetPack release. For now, there is an ongoing discussion about having the rootfs on NVMe in this thread:

@ArtofWar,

I’d be happy if someone tried it out, but be aware that this could draw and dissipate about 7 W, so there is a non-negligible risk that it throttles back to Gen3 or less, depending on how the heat is managed by your current thermal solution.

It seems that 4.4 has been released.
Has the Gen4 support been enabled in it?
We got one Gen4 M.2 for testing.
Thanks

Not tried, but I think it has been enabled since 4.4 DP.
If I understand correctly, the max speed would be Gen4 with 8 lanes:

xxd /proc/device-tree/pcie@14180000/nvidia,max-speed
00000000: 0000 0004
xxd /proc/device-tree/pcie@14180000/num-lanes
00000000: 0000 0008
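For convenience, those big-endian device-tree cells can be decoded to plain decimal; here is a small shell sketch (the `dt_u32` helper name is mine):

```shell
# dt_u32: decode a 32-bit big-endian device-tree cell into a decimal number.
dt_u32() {
  od -An -tu1 "$1" | awk '{ for (i = 1; i <= NF; i++) v = v * 256 + $i } END { print v }'
}

# On the devkit (paths from the xxd output above):
#   dt_u32 /proc/device-tree/pcie@14180000/nvidia,max-speed   # 4 -> Gen4
#   dt_u32 /proc/device-tree/pcie@14180000/num-lanes          # 8
```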

Let us know!

Trying with a 500 GB Sabrent Rocket Gen4, using a Jetson NX.
Attempt 1 is being made on the 4.4 DP release.
The goals are to see if it is usable and to monitor the temperature to avoid overheating.
Considering using the xsensors package.
The setup is remote. From the remote engineer:
“after installing it it’s not completely flat and when the screws are in the drive has to be bent a little but still fits”
“bent cause the other side also has stuff that prevent it attach flat to the board”


lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT,SIZE,MODEL
NAME         FSTYPE LABEL      MOUNTPOINT   SIZE MODEL
loop0        vfat   L4T-README               16M 
mtdblock0                                    32M 
mmcblk0                                    59.6G 
├─mmcblk0p1  ext4              /           29.5G 
├─mmcblk0p2                                  64M 
├─mmcblk0p3                                  64M 
├─mmcblk0p4                                 448K 
├─mmcblk0p5                                 448K 
├─mmcblk0p6                                  63M 
├─mmcblk0p7                                 512K 
├─mmcblk0p8                                 256K 
├─mmcblk0p9                                 256K 
├─mmcblk0p10                                100M 
└─mmcblk0p11                                 18K 
zram0                          [SWAP]       1.9G 
zram1                          [SWAP]       1.9G 
nvme0n1                                   465.8G Sabrent Rocket 4.0 500GB

How can I check whether it is running in Gen4 mode?

sudo hdparm -tT --direct /dev/nvme0n1

/dev/nvme0n1:
 Timing O_DIRECT cached reads:   3996 MB in  2.00 seconds = 2000.08 MB/sec
 Timing O_DIRECT disk reads: 6184 MB in  3.00 seconds = 2060.83 MB/sec

Updating from 4.4 DP to 4.4 GA with:
sudo apt update -y && sudo apt upgrade -y

 sudo apt install nvidia-jetpack
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 nvidia-jetpack : Depends: nvidia-cudnn8 (= 4.4-b186) but it is not going to be installed
                  Depends: nvidia-container (= 4.4-b186) but it is not going to be installed
                  Depends: nvidia-vpi (= 4.4-b186) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
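Independently of the apt state, you can check which L4T release is actually installed (JetPack 4.4 GA corresponds to L4T R32.4.3). A sketch; the `l4t_version` helper name is mine, and the exact release-file line format is an assumption from memory:

```shell
# l4t_version: extract an "R32.4.3"-style version from /etc/nv_tegra_release text.
# Assumed line format: "# R32 (release), REVISION: 4.3, GCID: ..., BOARD: ..."
l4t_version() { sed -nE 's/^# R([0-9]+) \(release\), REVISION: ([0-9.]+),.*/R\1.\2/p'; }

# On the device:
#   l4t_version < /etc/nv_tegra_release
```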

Any other ideas how to determine the gen mode in use?
sudo apt install nvme-cli -y

 sudo nvme smart-log /dev/nvme0 | grep '^temperature'
temperature                         : 40 C
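For continuous monitoring during a benchmark, that smart-log call can be polled in a loop; a minimal sketch (the `parse_temp` helper name is mine, nvme-cli assumed installed):

```shell
# parse_temp: pull the numeric Celsius value out of an `nvme smart-log` temperature line.
parse_temp() { grep '^temperature' | grep -oE '[0-9]+' | head -n 1; }

# On the devkit, log the drive temperature every 5 seconds:
#   while sleep 5; do sudo nvme smart-log /dev/nvme0 | parse_temp; done
```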
sudo lspci -vv -d 1987:5016
0005:01:00.0 Non-Volatile memory controller: Device 1987:5016 (rev 01) (prog-if 02 [NVM Express])
	Subsystem: Device 1987:5016
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 35
	Region 0: Memory at 1f40000000 (64-bit, non-prefetchable) [size=16K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
		LnkCap:	Port #1, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L0s unlimited, L1 unlimited
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR+, OBFF Not Supported
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR+, OBFF Disabled
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
			 EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
	Capabilities: [d0] MSI-X: Enable+ Count=9 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00003000
	Capabilities: [e0] MSI: Enable- Count=1/8 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Latency Tolerance Reporting
		Max snoop latency: 0ns
		Max no snoop latency: 0ns
	Capabilities: [110 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
			  PortCommonModeRestoreTime=10us PortTPowerOnTime=300us
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
			   T_CommonMode=0us LTR1.2_Threshold=0ns
		L1SubCtl2: T_PwrOn=300us
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] #25
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap- CGenEn- ChkCap+ ChkEn-
	Capabilities: [300 v1] #19
	Capabilities: [340 v1] #26
	Capabilities: [378 v1] #27
	Kernel driver in use: nvme
  • LnkCap: Port #1, Speed 16GT/s, Width x4
    16GT/s is gen. 4, and this shows the system knows it is capable of gen. 4.

  • LnkSta: Speed 8GT/s, Width x4
    The “status” is that it is running at gen. 3.

  • AERCap: First Error Pointer: 00
    No errors have been encountered.

It is possible something in software told the unit to throttle back to gen. 3 for heat reasons, but I doubt it. Gen. 4 speeds are exceedingly difficult to reach without a near-perfect design, and link training falling back to gen. 3 due to some imperfection in traces/signal quality is likely. The nice thing is that at gen. 3 the link seems to be running perfectly.
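Comparing those two fields by hand gets tedious; here is a small sketch that pulls both out of `lspci -vv` text on stdin (the `link_speed` function name is mine):

```shell
# link_speed: print the maximum (LnkCap) and negotiated (LnkSta) PCIe link speeds
# from `lspci -vv` text read on stdin.
link_speed() {
  awk '$1 == "LnkCap:" || $1 == "LnkSta:" {
         for (i = 1; i <= NF; i++)
           if ($i == "Speed") { s = $(i + 1); sub(/,/, "", s)
                                print ($1 == "LnkCap:" ? "max:" : "cur:"), s }
       }'
}

# On the devkit (1987:5016 is the Sabrent Rocket from this thread):
#   sudo lspci -vv -d 1987:5016 | link_speed
```

If `cur:` reports 8GT/s while `max:` reports 16GT/s, the link trained at gen. 3 despite gen. 4 capability.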

Reflashed the NX with sdkmanager.


/dev/nvme0n1:
 Timing O_DIRECT cached reads:   4668 MB in  2.00 seconds = 2339.03 MB/sec
 Timing O_DIRECT disk reads: 7126 MB in  3.00 seconds = 2374.80 MB/sec
sudo lspci -vv -d 1987:5016
0005:01:00.0 Non-Volatile memory controller: Device 1987:5016 (rev 01) (prog-if 02 [NVM Express])
	Subsystem: Device 1987:5016
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 35
	Region 0: Memory at 1f40000000 (64-bit, non-prefetchable) [size=16K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
		LnkCap:	Port #1, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L0s unlimited, L1 unlimited
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR+, OBFF Not Supported
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR+, OBFF Disabled
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
			 EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
	Capabilities: [d0] MSI-X: Enable+ Count=9 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00003000
	Capabilities: [e0] MSI: Enable- Count=1/8 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Latency Tolerance Reporting
		Max snoop latency: 0ns
		Max no snoop latency: 0ns
	Capabilities: [110 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
			  PortCommonModeRestoreTime=10us PortTPowerOnTime=300us
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
			   T_CommonMode=0us LTR1.2_Threshold=0ns
		L1SubCtl2: T_PwrOn=300us
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] #25
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap- CGenEn- ChkCap+ ChkEn-
	Capabilities: [300 v1] #19
	Capabilities: [340 v1] #26
	Capabilities: [378 v1] #27
	Kernel driver in use: nvme
sudo nvme smart-log /dev/nvme0 | grep '^temperature'
temperature                         : 43 C

@linuxdev Thank you for deciphering!

Yes, that is working well: PCIe x4 gen. 4. The amount of data through a gen. 4 link is astonishing. The world is already working on gen. 5, and I suspect not many gen. 4 devices will come out for sale to regular users aside from solid state memory. Most gen. 4 will likely be more exotic data warehouse hardware.

From the Samsung 980 Pro:

 sudo hdparm -tT --direct /dev/nvme0n1

/dev/nvme0n1:
 Timing O_DIRECT cached reads:   6840 MB in  2.00 seconds = 3426.48 MB/sec
 Timing O_DIRECT disk reads: 10688 MB in  3.00 seconds = 3562.59 MB/sec

0000:01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd Device a80a (prog-if 02 [NVM Express])
	Subsystem: Samsung Electronics Co Ltd Device a801
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 33
	Region 0: Memory at 1b40000000 (64-bit, non-prefetchable) [size=16K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [70] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L0s unlimited, L1 <64us
			ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR+, OBFF Not Supported
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR+, OBFF Disabled
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
			 EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
	Capabilities: [b0] MSI-X: Enable+ Count=130 Masked-
		Vector table: BAR=0 offset=00003000
		PBA: BAR=0 offset=00002000
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
	Capabilities: [168 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [178 v1] #19
	Capabilities: [198 v1] #26
	Capabilities: [1bc v1] #27
	Capabilities: [214 v1] Latency Tolerance Reporting
		Max snoop latency: 0ns
		Max no snoop latency: 0ns
	Capabilities: [21c v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
			  PortCommonModeRestoreTime=10us PortTPowerOnTime=10us
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
			   T_CommonMode=0us LTR1.2_Threshold=0ns
		L1SubCtl2: T_PwrOn=40us
	Capabilities: [3a0 v1] #25
	Kernel driver in use: nvme



Speed 16GT/s, Width x4,

A couple of weeks ago I bought a Xavier AGX and a 1TB Gigabyte AORUS Gen4 SSD. This is my first time using an NVIDIA product, so I am kind of a newbie here. In the installation video it looked simple, but when I installed the SSD, booted the AGX, and brought up the Disks utility, I did not see the new SSD. I am running JetPack version 4.4. Are there any config settings or hardware jumpers I need to set for the OS to recognize the SSD? Please advise.

Be sure you are using the M.2 Key M connector as here.

If this is already the case, or if it is still not working, you may post the output of:

dmesg | grep pci

sudo lspci -vvv

for better advice.
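A quick first pass on that dmesg output is to list which of the Tegra PCIe controllers actually trained a link (a sketch; the `pcie_links` helper name is mine):

```shell
# pcie_links: list per-controller link results from Tegra dmesg text on stdin.
pcie_links() { grep -oE 'tegra-pcie-dw [0-9a-f]+\.pcie: link is (up|down)'; }

# On the device:
#   dmesg | pcie_links
```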

siromani@siromani-xavier:~$ dmesg | grep pci
[    0.816510] iommu: Adding device 14180000.pcie to group 0
[    0.817196] iommu: Adding device 14100000.pcie to group 1
[    0.817954] iommu: Adding device 14140000.pcie to group 2
[    0.818598] iommu: Adding device 141a0000.pcie to group 3
[    0.874142] GPIO line 490 (pcie-reg-enable) hogged as output/high
[    0.874178] GPIO line 289 (pcie-reg-enable) hogged as output/high
[    1.716092] ehci-pci: EHCI PCI platform driver
[    1.716194] ohci-pci: OHCI PCI platform driver
[    4.035854] tegra-pcie-dw 14180000.pcie: Setting init speed to max speed
[    4.036903] OF: PCI: host bridge /pcie@14180000 ranges:
[    4.144082] tegra-pcie-dw 14180000.pcie: link is up
[    4.144291] tegra-pcie-dw 14180000.pcie: PCI host bridge to bus 0000:00
[    4.144298] pci_bus 0000:00: root bus resource [bus 00-ff]
[    4.144311] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff] (bus address [0x38100000-0x381fffff])
[    4.144320] pci_bus 0000:00: root bus resource [mem 0x1b40000000-0x1bffffffff] (bus address [0x40000000-0xffffffff])
[    4.144324] pci_bus 0000:00: root bus resource [mem 0x1800000000-0x1b3fffffff pref]
[    4.144347] pci 0000:00:00.0: [10de:1ad0] type 01 class 0x060400
[    4.144464] pci 0000:00:00.0: PME# supported from D0 D3hot D3cold
[    4.144871] pci 0000:01:00.0: [1987:5016] type 00 class 0x010802
[    4.144984] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
[    4.146910] pci 0000:00:00.0: BAR 14: assigned [mem 0x1b40000000-0x1b400fffff]
[    4.146921] pci 0000:01:00.0: BAR 0: assigned [mem 0x1b40000000-0x1b40003fff 64bit]
[    4.146980] pci 0000:00:00.0: PCI bridge to [bus 01-ff]
[    4.146987] pci 0000:00:00.0:   bridge window [mem 0x1b40000000-0x1b400fffff]
[    4.147014] pci 0000:00:00.0: Max Payload Size set to  256/ 256 (was  256), Max Read Rq  512
[    4.147061] pci 0000:01:00.0: Max Payload Size set to  256/ 256 (was  128), Max Read Rq  512
[    4.147400] pcieport 0000:00:00.0: Signaling PME through PCIe PME interrupt
[    4.147406] pci 0000:01:00.0: Signaling PME through PCIe PME interrupt
[    4.147418] pcie_pme 0000:00:00.0:pcie001: service driver pcie_pme loaded
[    4.147497] aer 0000:00:00.0:pcie002: service driver aer loaded
[    4.147921] nvme nvme0: pci function 0000:01:00.0
[    4.148097] tegra-pcie-dw 14100000.pcie: Setting init speed to max speed
[    4.148903] OF: PCI: host bridge /pcie@14100000 ranges:
[    4.256098] tegra-pcie-dw 14100000.pcie: link is up
[    4.256330] tegra-pcie-dw 14100000.pcie: PCI host bridge to bus 0001:00
[    4.256337] pci_bus 0001:00: root bus resource [bus 00-ff]
[    4.256352] pci_bus 0001:00: root bus resource [io  0x100000-0x1fffff] (bus address [0x30100000-0x301fffff])
[    4.256362] pci_bus 0001:00: root bus resource [mem 0x1230000000-0x123fffffff] (bus address [0x40000000-0x4fffffff])
[    4.256366] pci_bus 0001:00: root bus resource [mem 0x1200000000-0x122fffffff pref]
[    4.256388] pci 0001:00:00.0: [10de:1ad2] type 01 class 0x060400
[    4.256541] pci 0001:00:00.0: PME# supported from D0 D3hot D3cold
[    4.257099] pci 0001:01:00.0: [1b4b:9171] type 00 class 0x010601
[    4.257224] pci 0001:01:00.0: reg 0x10: [io  0x8000-0x8007]
[    4.257330] pci 0001:01:00.0: reg 0x14: [io  0x8040-0x8043]
[    4.257400] pci 0001:01:00.0: reg 0x18: [io  0x8100-0x8107]
[    4.257492] pci 0001:01:00.0: reg 0x1c: [io  0x8140-0x8143]
[    4.257559] pci 0001:01:00.0: reg 0x20: [io  0x800000-0x80000f]
[    4.257638] pci 0001:01:00.0: reg 0x24: [mem 0x00900000-0x009001ff]
[    4.257700] pci 0001:01:00.0: reg 0x30: [mem 0xd0000000-0xd000ffff pref]
[    4.258190] pci 0001:01:00.0: PME# supported from D3hot
[    4.273143] pci 0001:00:00.0: BAR 14: assigned [mem 0x1230000000-0x12300fffff]
[    4.273150] pci 0001:00:00.0: BAR 13: assigned [io  0x100000-0x100fff]
[    4.273160] pci 0001:01:00.0: BAR 6: assigned [mem 0x1230000000-0x123000ffff pref]
[    4.273178] pci 0001:01:00.0: BAR 5: assigned [mem 0x1230010000-0x12300101ff]
[    4.273240] pci 0001:01:00.0: BAR 4: assigned [io  0x100000-0x10000f]
[    4.273304] pci 0001:01:00.0: BAR 0: assigned [io  0x100010-0x100017]
[    4.273366] pci 0001:01:00.0: BAR 2: assigned [io  0x100018-0x10001f]
[    4.273428] pci 0001:01:00.0: BAR 1: assigned [io  0x100020-0x100023]
[    4.273490] pci 0001:01:00.0: BAR 3: assigned [io  0x100024-0x100027]
[    4.273552] pci 0001:00:00.0: PCI bridge to [bus 01-ff]
[    4.273557] pci 0001:00:00.0:   bridge window [io  0x100000-0x100fff]
[    4.273564] pci 0001:00:00.0:   bridge window [mem 0x1230000000-0x12300fffff]
[    4.273609] pci 0001:00:00.0: Max Payload Size set to  256/ 256 (was  256), Max Read Rq  512
[    4.273709] pci 0001:01:00.0: Max Payload Size set to  256/ 512 (was  128), Max Read Rq  512
[    4.273987] pcieport 0001:00:00.0: Signaling PME through PCIe PME interrupt
[    4.273991] pci 0001:01:00.0: Signaling PME through PCIe PME interrupt
[    4.274004] pcie_pme 0001:00:00.0:pcie001: service driver pcie_pme loaded
[    4.274087] aer 0001:00:00.0:pcie002: service driver aer loaded
[    4.276172] tegra-pcie-dw 14140000.pcie: Setting init speed to max speed
[    4.276976] OF: PCI: host bridge /pcie@14140000 ranges:
[    4.787510] tegra-pcie-dw 14140000.pcie: link is down
[    4.787704] tegra-pcie-dw 14140000.pcie: PCI host bridge to bus 0003:00
[    4.787712] pci_bus 0003:00: root bus resource [bus 00-ff]
[    4.787728] pci_bus 0003:00: root bus resource [io  0x200000-0x2fffff] (bus address [0x34100000-0x341fffff])
[    4.787738] pci_bus 0003:00: root bus resource [mem 0x12b0000000-0x12bfffffff] (bus address [0x40000000-0x4fffffff])
[    4.787742] pci_bus 0003:00: root bus resource [mem 0x1280000000-0x12afffffff pref]
[    4.787766] pci 0003:00:00.0: [10de:1ad2] type 01 class 0x060400
[    4.787896] pci 0003:00:00.0: PME# supported from D0 D3hot D3cold
[    4.788415] pci 0003:00:00.0: PCI bridge to [bus 01-ff]
[    4.788446] pci 0003:00:00.0: Max Payload Size set to  256/ 256 (was  256), Max Read Rq  512
[    4.788674] pcieport 0003:00:00.0: Signaling PME through PCIe PME interrupt
[    4.788687] pcie_pme 0003:00:00.0:pcie001: service driver pcie_pme loaded
[    4.788801] aer 0003:00:00.0:pcie002: service driver aer loaded
[    4.788944] pcie_pme 0003:00:00.0:pcie001: unloading service driver pcie_pme
[    4.789014] aer 0003:00:00.0:pcie002: unloading service driver aer
[    4.789084] pci_bus 0003:01: busn_res: [bus 01-ff] is released
[    4.789187] pci_bus 0003:00: busn_res: [bus 00-ff] is released
[    4.790647] tegra-pcie-dw 14140000.pcie: PCIe link is not up...!
[    4.791205] tegra-pcie-dw 141a0000.pcie: Setting init speed to max speed
[    4.792264] OF: PCI: host bridge /pcie@141a0000 ranges:
[    5.304131] tegra-pcie-dw 141a0000.pcie: link is down
[    5.304308] tegra-pcie-dw 141a0000.pcie: PCI host bridge to bus 0005:00
[    5.304315] pci_bus 0005:00: root bus resource [bus 00-ff]
[    5.304330] pci_bus 0005:00: root bus resource [io  0x300000-0x3fffff] (bus address [0x3a100000-0x3a1fffff])
[    5.304340] pci_bus 0005:00: root bus resource [mem 0x1f40000000-0x1fffffffff] (bus address [0x40000000-0xffffffff])
[    5.304344] pci_bus 0005:00: root bus resource [mem 0x1c00000000-0x1f3fffffff pref]
[    5.304365] pci 0005:00:00.0: [10de:1ad0] type 01 class 0x060400
[    5.304489] pci 0005:00:00.0: PME# supported from D0 D3hot D3cold
[    5.304924] pci 0005:00:00.0: PCI bridge to [bus 01-ff]
[    5.304954] pci 0005:00:00.0: Max Payload Size set to  256/ 256 (was  256), Max Read Rq  512
[    5.305182] pcieport 0005:00:00.0: Signaling PME through PCIe PME interrupt
[    5.305213] pcie_pme 0005:00:00.0:pcie001: service driver pcie_pme loaded
[    5.305320] aer 0005:00:00.0:pcie002: service driver aer loaded
[    5.305471] pcie_pme 0005:00:00.0:pcie001: unloading service driver pcie_pme
[    5.305513] aer 0005:00:00.0:pcie002: unloading service driver aer
[    5.305570] pci_bus 0005:01: busn_res: [bus 01-ff] is released
[    5.305708] pci_bus 0005:00: busn_res: [bus 00-ff] is released
[    5.307141] tegra-pcie-dw 141a0000.pcie: PCIe link is not up...!
[   57.515702] pcieport 0000:00:00.0: AER: Corrected error received: id=0000
[   57.515730] pcieport 0000:00:00.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=0000(Receiver ID)
[   57.515960] pcieport 0000:00:00.0:   device [10de:1ad0] error status/mask=00000001/0000e000
[   57.516223] pcieport 0000:00:00.0:    [ 0] Receiver Error         (First)
[   57.566210] pcieport 0000:00:00.0: AER: Uncorrected (Fatal) error received: id=0000
[   57.566234] pcieport 0000:00:00.0: PCIe Bus Error: severity=Uncorrected (Fatal), type=Transaction Layer, id=0000(Receiver ID)
[   57.566475] pcieport 0000:00:00.0:   device [10de:1ad0] error status/mask=00000020/00400000
[   57.566639] pcieport 0000:00:00.0:    [ 5] Surprise Down Error    (First)
[   57.566770] pcieport 0000:00:00.0: broadcast error_detected message
[   58.853893] nvme 0000:01:00.0: of_irq_parse_pci() failed with rc=134
[   59.896091] pcieport 0000:00:00.0: Root Port link has been reset
[   59.896118] pcieport 0000:00:00.0: broadcast slot_reset message
[   59.958397] pcieport 0000:00:00.0: broadcast resume message
[   59.958411] pcieport 0000:00:00.0: AER: Device recovery successful
[   59.958417] pcieport 0000:00:00.0: AER: Uncorrected (Non-Fatal) error received: id=0000
[   59.958435] pcieport 0000:00:00.0: can't find device of ID0000
[   59.958439] pcieport 0000:00:00.0: AER: Uncorrected (Non-Fatal) error received: id=0000
[   59.958449] pcieport 0000:00:00.0: can't find device of ID0000
[   59.958466] pcieport 0000:00:00.0: AER: Uncorrected (Non-Fatal) error received: id=0000
[   59.958474] pcieport 0000:00:00.0: can't find device of ID0000
[   59.958477] pcieport 0000:00:00.0: AER: Uncorrected (Non-Fatal) error received: id=0000
[   59.958485] pcieport 0000:00:00.0: can't find device of ID0000
[   59.958488] pcieport 0000:00:00.0: AER: Uncorrected (Non-Fatal) error received: id=0000
[   59.958495] pcieport 0000:00:00.0: can't find device of ID0000
[   59.958498] pcieport 0000:00:00.0: AER: Uncorrected (Non-Fatal) error received: id=0000
[   59.958506] pcieport 0000:00:00.0: can't find device of ID0000
[   59.958508] pcieport 0000:00:00.0: AER: Uncorrected (Non-Fatal) error received: id=0000
[   59.958516] pcieport 0000:00:00.0: can't find device of ID0000
[   59.958518] pcieport 0000:00:00.0: AER: Uncorrected (Non-Fatal) error received: id=0000
[   59.958526] pcieport 0000:00:00.0: can't find device of ID0000
[   59.958548] pcieport 0000:00:00.0: AER: Uncorrected (Non-Fatal) error received: id=0000
[   59.958556] pcieport 0000:00:00.0: can't find device of ID0000
[   60.027732] pcieport 0000:00:00.0: AER: Uncorrected (Non-Fatal) error received: id=0000
[   60.027750] pcieport 0000:00:00.0: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, id=0000(Requester ID)
[   60.027757] pcieport 0000:00:00.0:   device [10de:1ad0] error status/mask=00004000/00400000
[   60.027764] pcieport 0000:00:00.0:    [14] Completion Timeout     (First)
[   60.027771] pcieport 0000:00:00.0: broadcast error_detected message
[   60.027775] pcieport 0000:00:00.0: broadcast mmio_enabled message
[   60.027779] pcieport 0000:00:00.0: broadcast resume message
[   60.027788] pcieport 0000:00:00.0: AER: Device recovery successful
siromani@siromani-xavier:~$ 
siromani@siromani-xavier:~$ 
siromani@siromani-xavier:~$ 
siromani@siromani-xavier:~$ sudo lspci -vvv
[sudo] password for siromani: 
0000:00:00.0 PCI bridge: NVIDIA Corporation Device 1ad0 (rev a1) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR+ <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 33
	Bus: primary=00, secondary=01, subordinate=ff, sec-latency=0
	I/O behind bridge: 0000f000-00000fff
	Memory behind bridge: 40000000-400fffff
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
		Address: 0000000000000000  Data: 0000
		Masking: 00000000  Pending: 00000000
	Capabilities: [70] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0
			ExtTag- RBE+
		DevCtl:	Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ UncorrErr+ FatalErr+ UnsuppReq- AuxPwr+ TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x8, ASPM not supported, Exit Latency L0s <1us, L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt+ AutBWInt-
		LnkSta:	Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt+ ABWMgmt+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootCap: CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR+, OBFF Not Supported ARIFwd-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled ARIFwd-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete-, EqualizationPhase1-
			 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
	Capabilities: [b0] MSI-X: Enable- Count=8 Masked-
		Vector table: BAR=2 offset=00000000
		PBA: BAR=2 offset=00010000
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
	Capabilities: [148 v1] #19
	Capabilities: [168 v1] #26
	Capabilities: [190 v1] #27
	Capabilities: [1c0 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2- ASPM_L1.1- L1_PM_Substates+
			  PortCommonModeRestoreTime=60us PortTPowerOnTime=40us
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
			   T_CommonMode=60us
		L1SubCtl2: T_PwrOn=300us
	Capabilities: [1d0 v1] Vendor Specific Information: ID=0002 Rev=4 Len=100 <?>
	Capabilities: [2d0 v1] Vendor Specific Information: ID=0001 Rev=1 Len=038 <?>
	Capabilities: [308 v1] #25
	Capabilities: [314 v1] Precision Time Measurement
		PTMCap: Requester:+ Responder:+ Root:+
		PTMClockGranularity: 16ns
		PTMControl: Enabled:- RootSelected:-
		PTMEffectiveGranularity: Unknown
	Capabilities: [320 v1] Vendor Specific Information: ID=0004 Rev=1 Len=054 <?>
	Kernel driver in use: pcieport

0000:01:00.0 Non-Volatile memory controller: Device 1987:5016 (rev ff) (prog-if ff)
	!!! Unknown header type 7f

0001:00:00.0 PCI bridge: NVIDIA Corporation Device 1ad2 (rev a1) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 35
	Bus: primary=00, secondary=01, subordinate=ff, sec-latency=0
	I/O behind bridge: 00000000-00000fff
	Memory behind bridge: 40000000-400fffff
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [70] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0
			ExtTag- RBE+
		DevCtl:	Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend-
		LnkCap:	Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <1us, L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM L0s L1 Enabled; RCB 64 bytes Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt+ AutBWInt-
		LnkSta:	Speed 5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootCap: CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR+, OBFF Not Supported ARIFwd-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR+, OBFF Disabled ARIFwd-
		LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
			 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
	Capabilities: [b0] MSI-X: Enable- Count=1 Masked-
		Vector table: BAR=0 offset=00000000
		PBA: BAR=0 offset=00000000
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
	Capabilities: [148 v1] #19
	Capabilities: [158 v1] #26
	Capabilities: [17c v1] #27
	Capabilities: [190 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2- ASPM_L1.1- L1_PM_Substates+
			  PortCommonModeRestoreTime=60us PortTPowerOnTime=40us
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
			   T_CommonMode=60us
		L1SubCtl2: T_PwrOn=40us
	Capabilities: [1a0 v1] Vendor Specific Information: ID=0002 Rev=4 Len=100 <?>
	Capabilities: [2a0 v1] Vendor Specific Information: ID=0001 Rev=1 Len=038 <?>
	Capabilities: [2d8 v1] #25
	Capabilities: [2e4 v1] Precision Time Measurement
		PTMCap: Requester:- Responder:+ Root:+
		PTMClockGranularity: 16ns
		PTMControl: Enabled:- RootSelected:-
		PTMEffectiveGranularity: Unknown
	Capabilities: [2f0 v1] Vendor Specific Information: ID=0004 Rev=1 Len=054 <?>
	Kernel driver in use: pcieport

0001:01:00.0 SATA controller: Marvell Technology Group Ltd. Device 9171 (rev 13) (prog-if 01 [AHCI 1.0])
	Subsystem: Marvell Technology Group Ltd. Device 9171
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 820
	Region 0: I/O ports at 100010 [size=8]
	Region 1: I/O ports at 100020 [size=4]
	Region 2: I/O ports at 100018 [size=8]
	Region 3: I/O ports at 100024 [size=4]
	Region 4: I/O ports at 100000 [size=16]
	Region 5: Memory at 1230010000 (32-bit, non-prefetchable) [size=512]
	Expansion ROM at 1230000000 [disabled] [size=64K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit-
		Address: fffff000  Data: 0000
	Capabilities: [70] Express (v2) Legacy Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <64us
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
		DevCtl:	Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <512ns, L1 <64us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
		LnkCtl:	ASPM L0s L1 Enabled; RCB 64 bytes Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR-, OBFF Not Supported
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
		LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
			 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
	Capabilities: [100 v1] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn-
	Kernel driver in use: ahci


You may check these.
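For reference, a quick way to pull the link capability and negotiated link status out of an `lspci -vv` dump like the one above is to grep for the `LnkCap`/`LnkSta` lines. The file name and sample contents below are just stand-ins for real `sudo lspci -vv` output:

```shell
# Save a fragment of lspci -vv output to a file. On the Jetson you
# would capture the real thing: sudo lspci -vv > /tmp/lspci_dump.txt
cat > /tmp/lspci_dump.txt <<'EOF'
	LnkCap:	Port #0, Speed 16GT/s, Width x8, ASPM not supported
	LnkSta:	Speed 2.5GT/s, Width x1, TrErr- Train-
EOF

# LnkCap shows what the port supports; LnkSta shows what was
# actually negotiated with the attached device.
grep -E 'LnkCap:|LnkSta:' /tmp/lspci_dump.txt
```

Comparing the two lines tells you whether the link trained at the expected generation and width (here, a 16GT/s-capable port running at 2.5GT/s x1).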

Thank you so much, it worked!
I added a file called grub at /etc/default/ with the following content and rebooted:
pcie_aspm=off

Thanks again.

Nice to see you’ve moved forward.
I am very surprised by your solution, though. The usual way is to add boot arguments to the APPEND entry of the /boot/extlinux/extlinux.conf config.
If you intend to use a partition of the NVMe SSD as the Linux rootfs, you would have to do it that way.

What @Honey_Patouceul said: use extlinux.conf. GRUB only works on the PC architecture, where the BIOS/UEFI presents a uniform interface to bootloaders; embedded systems are all custom. Xavier uses CBoot in place of GRUB, and many other embedded systems (including several Jetsons) use U-Boot. The APPEND key/value pair appends arguments that are passed to the kernel as it loads and begins running. It is a single space-delimited line of content; just make sure you don’t break up that line.
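As an illustration, a typical L4T extlinux.conf looks roughly like the sketch below; the root device, console, and image paths are placeholders, so keep whatever your existing file already has and only add pcie_aspm=off to the end of the APPEND line:

```
TIMEOUT 30
DEFAULT primary

MENU TITLE L4T boot options

LABEL primary
      MENU LABEL primary kernel
      LINUX /boot/Image
      INITRD /boot/initrd
      APPEND ${cbootargs} root=/dev/mmcblk0p1 rw rootwait pcie_aspm=off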

You can check whether your argument made its way in after booting by running the command “cat /proc/cmdline”.
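To illustrate the check, here is a small sketch using a sample cmdline string (on the running Jetson you would simply grep the real /proc/cmdline):

```shell
# On the Jetson itself: cat /proc/cmdline
# Here we test a sample cmdline string for the argument instead:
cmdline="console=ttyTCU0,115200 root=/dev/mmcblk0p1 rw rootwait pcie_aspm=off"

# grep -qw matches the argument as a whole word and stays quiet;
# the exit status tells us whether it was found.
if echo "$cmdline" | grep -qw 'pcie_aspm=off'; then
    echo "pcie_aspm=off is active"
else
    echo "pcie_aspm=off is missing"
fi
```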