We are porting the seL4 microkernel to the NVIDIA Jetson Xavier NX. To support loading ELF binaries, we copy each program to its load address, then clean the data cache and invalidate the instruction cache with a sequence like this:
    dc cvau, x0
    ic ivau, x0
where x0 iterates over the virtual addresses of the copied code.

Question #1: When we jump to the loaded code with the I-cache disabled (SCTLR_EL1.I == 0), we get an illegal instruction exception. With the I-cache enabled (SCTLR_EL1.I == 1), the code runs fine. Why is this?
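For reference, the sequence above has no barriers, while the Arm ARM's instruction-memory modification sequence requires DSB/ISB between and after the cache operations. A sketch of a complete loop in C with inline assembly follows; the 64-byte line size (which should really be read from CTR_EL0) and the function name are our assumptions, not seL4 code:

    #include <stdint.h>

    /* Clean D-cache and invalidate I-cache to the point of unification
     * for [start, end), per the Arm ARM's self-modifying-code sequence.
     * Assumes 64-byte cache lines (true for Carmel, but CTR_EL0 is the
     * authoritative source). */
    static void sync_icache(uintptr_t start, uintptr_t end)
    {
        const uintptr_t line = 64;

        for (uintptr_t p = start & ~(line - 1); p < end; p += line)
            __asm__ volatile("dc cvau, %0" :: "r"(p) : "memory");
        __asm__ volatile("dsb ish" ::: "memory");  /* cleans visible first */
        for (uintptr_t p = start & ~(line - 1); p < end; p += line)
            __asm__ volatile("ic ivau, %0" :: "r"(p) : "memory");
        __asm__ volatile("dsb ish" ::: "memory");  /* invalidates complete */
        __asm__ volatile("isb" ::: "memory");      /* resync fetch on this PE */
    }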
Question #2: With the I-cache enabled, our project more or less runs, but halts completely at random places; afterwards even the Lauterbach debugger can no longer connect to the Carmel CPUs. We realize the Carmel cores are an NVIDIA proprietary design with an ARMv8.2 front-end, and the documentation says that the internal binary translator stores its micro-ops in system RAM, so proper carveouts are needed. We found out how CBOOT passes the available memory to Linux via the DTB, punching the carveouts out of the DRAM memory map. We suppose it is enough to use only the free memory regions listed in the DTB in order to avoid clashes with the aforementioned translation mechanism. Is that true?
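To make that concrete, here is a hedged sketch of how a loader could enumerate the usable DRAM regions CBOOT leaves in the DTB's memory node, using libfdt; the 2-cell address/size layout (typical for 64-bit Tegra) is an assumption on our part:

    #include <libfdt.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Walk every node with device_type = "memory" and print its usable
     * DRAM regions (CBOOT has already punched the carveouts out).
     * Assumes #address-cells = #size-cells = 2. */
    static void list_usable_dram(const void *fdt)
    {
        int node = fdt_node_offset_by_prop_value(fdt, -1, "device_type",
                                                 "memory", sizeof("memory"));
        while (node >= 0) {
            int len;
            const fdt64_t *reg = fdt_getprop(fdt, node, "reg", &len);

            /* "reg" is a list of big-endian (base, size) 64-bit pairs. */
            for (int i = 0; reg && (i + 2) * (int)sizeof(fdt64_t) <= len; i += 2) {
                uint64_t base = fdt64_to_cpu(reg[i]);
                uint64_t size = fdt64_to_cpu(reg[i + 1]);
                printf("usable DRAM: [%#llx, %#llx)\n",
                       (unsigned long long)base,
                       (unsigned long long)(base + size));
            }
            node = fdt_node_offset_by_prop_value(fdt, node, "device_type",
                                                 "memory", sizeof("memory"));
        }
    }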
Last but not least, Question #3: when we repeatedly try to boot our seL4 project, subsequent retries repeatedly get better (or worse, depending on the test), suggesting the behaviour has something to do with die temperature, voltages, or frequencies. Since we do not run Linux, we have no code doing DVFS or thermal throttling. However, we understood from the L4T BSP that the BPMP handles this, and that the stability of the CCPLEX therefore does not depend on whether or not Linux is running with all its DVFS drivers and the like. Is this true? How can our perceived temperature dependency be explained?