We created a custom carrier board based on the dev-kit carrier board (all IO is the same, nothing special of any kind).
After flashing it with SDK Manager (1.9.3.10904) and JetPack 5.1 (rev. 1), we lost the PCIe connection.
‘lspci’ shows no root complex of any kind.
There is an FPGA connected to the other side of the PCIe bus, so at the very least I expect a PCIe root complex to show up in my Linux…
Please see the attached dmesg log: dmesg.log (75.8 KB)
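For completeness, this is roughly what we looked at on the custom board (plain Linux tooling, nothing board-specific):

```
# In our case lspci shows no root port / root complex at all:
lspci

# Controller probe and link-training messages end up in the kernel log;
# the exact message strings vary between JetPack releases, so grep broadly:
dmesg | grep -i pcie
```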
We know for certain the FPGA is up and running, and we know it acts as an endpoint, since it is the same FPGA with the same configuration that we used before on a PCIe extension card with the same hardware.
The only difference here is the devkit vs module.
Sorry, I don’t get the point you are trying to make in this comment.
“The only difference here is the devkit vs module.”
Could you elaborate more on it?
You only tell us lots of “we know that”, “we already know that”, etc. So what is the exact question you want to ask, if you already know all of this?
Are you using PCIe C5 as the root port, or something else? Does your FPGA get detected on the devkit but fail on your custom board?
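If it helps, one quick way to check which root ports are actually enabled on the running system is to look at the controller nodes in the live device tree. The node paths below are an assumption for AGX Xavier (where the x8-capable C5 controller is usually the node at 141a0000); adjust them to your own device tree:

```
# Print the status ("okay" / "disabled") of every PCIe controller node in the live device tree.
# The pcie@* node paths are an assumption for AGX Xavier; check your own device tree.
for node in /proc/device-tree/pcie@*; do
    printf '%s: %s\n' "$node" "$(tr -d '\0' < "$node/status" 2>/dev/null)"
done
```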
What could be the reason that something works on the devkit but not with the module?
There is no real documentation on what the differences are, except for the schematic…
All hardware is based on the reference schematic of the NVIDIA carrier board. We do not use any “special” pin configuration. The FPGA configuration is the same, with the same hardware and the same PCIe design.
On the devkit everything works.
Using a scope we see a clock for a short time, and then it is gone. We use a non-customized JetPack as a reference for this issue, simply because of the lack of documentation.
As for the carrier board, we added all the needed signals and lanes for PCIe (with the exception of the x16 and JTAG/SPI signals). We use an x8 configuration.
At the moment 2 hardware engineers are looking into it here.
They suspect certain pins are used by the NVIDIA carrier board for signals that we did not connect, because they are not needed according to the PCIe standard (e.g. GPIO18, I heard them say).
So are you going to share any useful info here to clarify?
I really don’t get what you want us to help with. If you suspect a hardware problem on your custom board, then at least share your schematic.
You heard GPIO18 from them? Why not just share the schematic here so that we can figure it out directly, instead of relying on second-hand info from someone I don’t even know?
Well, we cannot just share schematics, because this forum is all public.
Our supplier/distributor only directs us to this forum for any help.
(although they always promise to look into it, and then it stays silent for weeks…)
And in all fairness, you guys provide support more quickly and better overall.
The PEX_L5_CLKREQ_N signal is connected to GND on our board, while on the NVIDIA carrier it isn’t.
Also, on the NVIDIA carrier board there are other PCIe devices on the bus which keep the bus up, while on our carrier board the only PCIe device is our FPGA.
The FPGA is up and running before the NVIDIA module even boots. We wait for the FPGA to configure before setting XAV_PERIPH_RESET.
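Side note from our end: as far as we understand the Tegra194 PCIe device-tree binding, the controller node can carry a “supports-clkreq” property, and since our CLKREQ# is tied to GND rather than used as a real open-drain signal, we are not sure whether that is relevant here. A quick way to see whether the property is set (the node path for C5 on AGX Xavier is our assumption):

```
# Check whether the (assumed) C5 controller node declares CLKREQ support:
ls /proc/device-tree/pcie@141a0000/ | grep -i clkreq
```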
This is the result from the devkit. There are several PCIe controllers on Jetson and they are all independent.
The 0001:xxxxx devices are on C1, which is not the same as the x8 slot you are using.
Your problem is on C5. As C5 and C1 are not related to each other, your assumption that the controllers are dependent on each other (i.e. that other devices on the bus keep it up) is wrong. C5 is not enabled even in the NV devkit case if there is no device connected.
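To illustrate the mapping: the PCI domain number in the lspci output corresponds to the controller, so the 0001: entries you see on the devkit belong to C1, while devices behind your x8 slot would show up under the C5 domain (0005:, if I recall the AGX Xavier mapping correctly). For example:

```
lspci -D
# 0001:00:00.0 PCI bridge: NVIDIA Corporation ...  <- root port of C1 (the devices you listed from the devkit sit behind this)
# 0005:00:00.0 PCI bridge: NVIDIA Corporation ...  <- root port of C5; this entry is absent when the C5 link never comes up
```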