Does the Jetson TX2 eMMC report correct life time estimation at runtime?

Hello all,

One of our devices built on top of a Jetson TX2 is throwing the kernel panic listed below.

mmc0: Data Timeout error, intmask: 10c000 Interface clock = 196249804Hz
[   38.553844] sdhci: Sys addr: 0x00000008 | Version:  0x00000404
[   38.553847] sdhci: Blk size: 0x00007200 | Blk cnt:  0x00000058
[   38.553851] sdhci: Argument: 0x0004008 | Trn mode: 0x00000033
[   38.553854] sdhci: Present:  0x11fb00f1 | Host ctl: 0x0000003d
[   38.553857] sdhci: Power:    0x00000001 | Blk gap:  0x00000000
[   38.553860] sdhci: Wake-up:  0x00000000 | Clock:    0x00000007
[   38.553863] sdhci: Timeout:  0x0000000e | Int stat: 0x0001c000
[   38.553866] sdhci: Int enab: 0xffff000 | Sig enab: 0xfffc4000
[   38.553869] sdhci: AC12 err: 0x00000000 | Slot int: 0x00000000
[   38.553872] sdhci: Caps:     0x3f6cd08c | Caps_1:   0x18006f77
[   38.553875] sdhci: Cmd:      0x00002c1e | Max curr: 0x00000000
[   38.553877] sdhci: Host ctl2: 0x0000300d
[   38.553881] sdhci: ADMA Err: 0x00000000 | ADMA Ptr: 0x0000000ffee3090
[   38.553902] sdhci: ===========================================
[   38.554070] BUG: scheduling while atomic: swapper/0/0/0x00000102
[   38.554083] Modules linked in: overlay bnep cdc_acm mttcan can_dev bluedroid_pm bcmdhd cfg80211 spidev nvgpu
[   38.554087] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.9.140-l4t-r32.4+g166b394331e2 #1
[   38.554089] Hardware name: quill (DT)
[   38.554090] Call trace:
[   38.554100] [<ffffff800808c228>] dump_backtrace+0x0/0x168
[   38.554103] [<ffffff800808c3b4>] show_stack+0x24/0x30
[   38.554108] [<ffffff800845bf08>] dump_stack+0x94/0xbc
[   38.554112] [<ffffff80080e5300>] __schedule_bug+0x70/0x80
[   38.554116] [<ffffff8008f2de84>] __schedule+0x504/0x570
[   38.554118] [<ffffff8008f2df30>] schedule+0x40/0xa8
[   38.554120] [<ffffff8008f30f84>] schedule_timeout+0x19c/0x400
[   38.554123] [<ffffff8008f2eb14>] wait_for_common+0x9c/0x128
[   38.554125] [<ffffff8008f2e04>] wait_for_completion_timeout+0x2c/0x38
[   38.554129] [<ffffff8008ba9170>] cmdq_halt+0x50/0xe0
[   38.554132] [<ffffff8008b842f0>] mmc_cmdq_halt+0x90/0xb0
[   38.554134] [<ffffff8008ba9d74>] mmc_blk_cmdq_complete_rq+0x8c/0x1d0
[   38.554137] [<ffffff8008baec7c>] mmc_cmdq_softirq_done+0x2c/0x3
[   38.554141] [<ffffff8008434948>] blk_done_softirq+0x88/0xa0
[   38.554143] [<ffffff8008081040>] __do_softirq+0x140/0x38c
[   38.554147] [<ffffff80080baa80>] irq_exit+0xd/0x118
[   38.554151] [<ffffff800811f988>] __handle_domain_irq+0x70/0xc0
[   38.554152] [<ffffff8008080d24>] gic_handle_irq+0x54/0xa8
[   38.554154] [<ffffff8008082c28>] el1_irq+0xe8/0x194
[   38.554156] [<ffffff8008b7fb68>] cpuidle_enter_state+0xb8/0x380
[   38.554157] [<ffffff8008b7fea4>] cpuidle_enter+0x34/0x48
[   38.554160] [<ffffff8008110638>] call_cpuidle+0x40/0x70
[   38.554162] [<ffffff800811094c>] cpu_startup_entry+0x154/0x210
[   38.554166] [<ffffff8008f29cd4>] rest_init+0x84/0x90
[   38.554171] [<ffffff8009900b40>] start_kernel+0x378/0x390
[   38.554173] [<ffffff8009900204>] __primary_switched+0x80/0x94
[   38.554270] softirq: huh, entered softirq 4 BLOCK ffffff80084348c with preempt_count 00000101, exited with 00000000?
[   38.555182] Unable to handle kernel read from unreadable memory at virtual address ffffffc1eb736900
[   38.555183] Mem abort info:
[   38.555184]   ESR = 0x8600000e
[   38.555186]   Exception class = IABT (current EL), IL = 32 bits
[   38.555188]   EA = 0, S1PTW = 0
[   38.555191] swapper pgtable: 4k pages, 39-bit VAs, pgd = ffffff800a4e8000
[   38.555195] [ffffffc1eb736900] *pgd=00000002771f4003, *pud=0000002771f4003, *pmd=00e800026b600711
[   38.555198] Internal error: Oops: 8600000e [#1] PREEMPT SMP
[   38.555208] Modules linked in: overlay bnep cdc_acm mttcan can_dev bluedroid_pm bcmdhd cfg80211 spidev nvgpu
[   38.555212] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W       4.9.140-l4t-r32.4+g166b394331e2 #1
[   38.555213] Hardware name: quill (DT)
[   38.555214] task: ffffff800a181580 task.stack: ffffff800a170000
[   38.555218] LR is at 0xffffffc1eb736900
[   38.555220] pc : [<ffffffc1eb736900>] lr : [<ffffffc1eb736900>] pstate: 20400045
[   38.555220] sp : ffffffc1f676de00
[   38.555223] x29: 0000000000000000 x28: ffffff8009b26000 
[   38.555226] x27: ffffff800a1750c0 x26: 0000000000000101 
[   38.555228] x25: ffffffc1f676de80 x24: ffffff800a1750c0 
[   38.555230] x23: 0000000000000010 x22: ffffff8008ba92b8 
[   38.555232] x21: ffffffc1f676ddf0 x20: ffffff8008ba93d0 
[   38.555235] x19: ffffffc1f676de50 x18: 0000000000000007 
[   38.555237] x17: 0000000000000000 x16: 0000000000000003 
[   38.555239] x15: 0000000000000030 x14: 3030303020746e75 
[   38.555241] x13: 6f635f74706d6565 x12: 727020687469772 
[   38.555243] x11: 3063383433343830 x10: 3038666666666666 
[   38.555245] x9 : 204b434f4c422034 x8 : ffffffc1f674054f 
[   38.555247] x7 : 0000000000000000 x6 : 0000000012b6bcd1 
[   38.555249] x5 : 0000000000000000 x4 : 0000000000000000 
[   38.555251] x3 : ffffffffffffffff x2 : 00000041ecc44000 
[   38.555253] x1 : ffffff800a18150 x0 : 0000000000000069 
[   38.555253] 
[   38.555255] Process swapper/0 (pid: 0, stack limit = 0xffffff800a170000)
[   38.555256] Call trace:
[   38.555258] [<ffffffc1eb736900>] 0xffffffc1eb736900
[   38.555264] ---[ end trace addf0bf4cebb20bc ]---
[   38.561270] Kernel panic - not syncing: Attempted to kill the idle task!
[   38.561273] SMP: stopping secondary CPUs
[   38.561290] Kernel Offset: disabled
[   38.561291] Memory Limit: none
[   39.060727] Rebooting in 5 seconds..r - trusty version Built: 21:16:26 Jun 25 2020 

Since this device probably has more runtime than any of our other devices, and our application writes a lot of logs, I assumed the eMMC might be worn out. Hence, I checked with mmc extcsd read /dev/mmcblk0. Here’s the output:

=============================================
  Extended CSD rev 1.8 (MMC 5.1)
=============================================
... removed for brevity ...
eMMC Life Time Estimation A [EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_A]: 0x01
eMMC Life Time Estimation B [EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_B]: 0x01
eMMC Pre EOL information [EXT_CSD_PRE_EOL_INFO]: 0x01

According to this document, this should mean the eMMC is 0-10% worn out. But this of course requires that the SoM actually supports reporting correct values; it could report dummy values, after all.
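For completeness, the mapping from the reported value to a wear range can be decoded mechanically. This is only a sketch based on the JEDEC eMMC 5.0+ definition of these fields; the function name is made up for illustration:

```shell
# Sketch: decode an EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_{A,B} value into the
# range it represents. Per JEDEC eMMC 5.0+, 0x01..0x0A mean 0-10% .. 90-100%
# of the estimated device life used; 0x0B means the estimate is exceeded.
# The helper name decode_life_time is made up for illustration.
decode_life_time() {
  v=$(( $1 ))                       # accepts 0x01-style hex literals
  if [ "$v" -ge 1 ] && [ "$v" -le 10 ]; then
    echo "$(( (v - 1) * 10 ))%-$(( v * 10 ))% of estimated life used"
  elif [ "$v" -eq 11 ]; then
    echo "estimated device life time exceeded"
  else
    echo "not defined / not reported"
  fi
}
decode_life_time 0x01   # the TYP_A value from the output above
```

On recent kernels the same bytes are also exposed without mmc-utils via sysfs, e.g. /sys/class/mmc_host/mmc0/mmc0:0001/life_time and pre_eol_info.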

Hence, my questions are:

  • Is the Jetson TX2 reporting correct values to the mmc command?
  • Can I reliably assume the eMMC is NOT worn out if the values reported by the mmc command are < 0x08, i.e., less than 70-80% of the estimated device life used?
  • According to this article, TYP_A refers to the SLC blocks and TYP_B to the MLC blocks of the eMMC. That both values are non-zero suggests the eMMC uses both memory technologies. Is that true? I was not aware this is even possible; I thought an eMMC would be one or the other.
  • Does the eMMC do wear-leveling? As said, our application logs a lot, and the file it logs to is at a specific place. When we write a lot into this file, are only this file’s blocks worn out, or is the wear spread across the whole eMMC?
  • Does the eMMC support the TRIM command?
  • Does the eMMC still do wear-leveling if it is not informed about unused blocks via the TRIM command?
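Independently of an official answer, the running system can at least be asked whether the kernel’s block layer believes the device accepts discard (TRIM) requests. A sketch, assuming the eMMC shows up as mmcblk0 and using standard sysfs paths; the helper name is made up:

```shell
# Sketch: ask the block layer whether a device advertises discard (TRIM)
# support. A non-zero discard_max_bytes means discards can be issued;
# `fstrim -v <mountpoint>` would then actually send them.
# The helper name discard_supported is made up for illustration.
discard_supported() {
  f="/sys/block/${1:-mmcblk0}/queue/discard_max_bytes"
  max=$(cat "$f" 2>/dev/null || echo 0)
  if [ "${max:-0}" -gt 0 ] 2>/dev/null; then
    echo "discard supported (up to $max bytes per request)"
  else
    echo "discard not supported or device not present"
  fi
}
discard_supported mmcblk0
```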

Context information:
Our device is running a self-written GNU/Linux distribution created using the Yocto project and meta-tegra with L4T 32.4.3.

Thank you in advance!

Please reply.

I can’t answer, but this combination tends to suggest this is a software problem, and not hardware (though possibly influenced by slow hardware response):

Scheduling while atomic implies something trying to run and preempt atomic code sections, and the nature of atomic is to not allow this. More likely it is a programming error in some rare corner case. There is a possibility that such a corner case only occurs when the hardware responds slower than usual, and statistically speaking, the longer it runs the more likely it’ll hit one of the slowest response cases. Don’t know, and it could be hardware, but hardware really shouldn’t have any control over an attempt to preempt an atomic code block.

I do see the overlay driver is installed. Is there any overlayfs being used? I ask because not many people use this, and perhaps if there were such a rare corner case, then it would be useful to know if something new is present.

First of all, thanks for the input. That really is helpful.

The developer colleague who reported this said the problem only occurred when the Ethernet cable is not connected. Our device also has WiFi connectivity, so a disconnected Ethernet cable causes a different code path inside the WiFi driver to be taken. We couldn’t observe this problem on any of our other devices, though.

I’m no kernel dev, but kernel development is something I am interested in. If I wanted to dig deeper, is there something to read or learn that you can recommend? How would one tackle this? Perhaps my company will give me some time to work on it.

I do see the overlay driver is installed. Is there any overlayfs being used? I ask because not many people use this, and perhaps if there were such a rare corner case, then it would be useful to know if something new is present.

We’re using Docker, and it’s using overlayfs.

This won’t be in any specific order, and not really arranged well, but some things to research and learn follow…

  • Inside the kernel, drivers and other software use hardware addresses: addresses on a bus which actually talk to devices. User space (outside of the kernel) is instead assigned virtual addresses. These are something the memory controller assigns, and they are translated back and forth whenever a physical address is needed. The first thing to note in the bug is the inability to read from a virtual address. So it wasn’t a driver-to-hardware failure; it was a misuse of an address which the memory controller does not believe to be valid (or else one which was valid but somehow locked out from responding).
  • Interrupts are either “hardware” interrupts (using a wire to trigger a hardware bus) or “software” interrupts. Inside the kernel, whenever a driver for some hardware is called, it is triggered via an ordinary interrupt, i.e., a hardware interrupt. There are also a number of “sort of” drivers which provide functions not requiring access to hardware; e.g., perhaps some function is implemented to reply with the content of a memory location, with no need to touch any add-on hardware. These are also triggered via an interrupt, but this is a “software” interrupt, which shows up only as software logic without any actual wire going high or low. The kernel has a sort of daemon to manage soft interrupts, ksoftirqd (you could call it a scheduler, since it operates on time slices instead of wires going high or low). Having an error detected in a softirq implies there was a reference to kernel code which is not directly tied to hardware.
  • All software running in the kernel competes with other software for time to run. The mechanism for arbitrating this is the “scheduler”. As you learned above, the scheduler basically deals with interrupts and decides whether to save the state of whatever is running now and boot it out momentarily for some other process, or to let the current activity continue while making the interrupt wait. Some code mandates that it must run to completion; this is atomic code. Your bug triggered an attempt to schedule (make interruptible) a section of code which cannot be allowed to be interrupted. This is a violation, and interrupting such code could cause any number of errors, so it is fatal to whatever tried the interruption. Usually this sort of error is a software programming error. Synchronizing threads in user space is difficult enough, and doing so in kernel space is more difficult still. Likely you found a corner case; most of the time this attempt to preempt atomic code is rare.
  • I asked about the overlayfs being linked in because a number of people have had issues getting this running correctly. Many will have made modifications to the kernel itself, and could add bugs if overlayfs required patching. Additionally, not many people are running this, so if there is a corner case, then I’d consider this to be a good starting point if the other systems don’t have this issue.
  • Note that because interrupts and scheduling are essentially a way to get multiple things working together, and because everything in the kernel can access any part of the kernel, a programming bug is more likely to have seemingly unrelated pieces of code interfere with one another. An example might be the timing of an interrupt which simply differs as to when some drivers are told to stop and wait while others are serviced.
  • Note that the memory controller knows which user space process is allowed to access which memory, and that sometimes there is “shared” memory. So normally memory would start off in user space as accessible only by itself, but could map in memory which is available for some other process to also read or write. If the memory controller sees an attempt to read or write memory which is not allowed, then there is an exception. Quite often that exception is from user space code trying to use uninitialized memory, or memory which has previously been released. It is also possible that something like shared memory would have an exception if other memory controller activity has not yet released it from within an atomic code section (to prevent corruption of two simultaneous accesses to the same physical memory which the memory controller is mapping). Note that the “virtual” address two different processes might see to access the same shared memory would still be the same physical address, but only the memory controller would know that.

I would guess that if you wanted to study this then you might start by looking at the “interrupt vector table” which starts all physical hardware drivers. When the kernel first loads, and nothing has yet run, and the kernel has just been copied into memory, a first interrupt is triggered. This is where it all begins.

You’d also want to understand hardware interrupts leading to this table of interrupts, and that addresses referred to in the kernel are actual physical memory bus addresses. Thus a kernel can use assembler branch instructions. The biggest example of this is kernel modules, which are loaded into memory at a physical address below that of where the kernel loads; any use causes “direct branch” instructions to simply redirect the running point to somewhere in a module’s physical address range.

There are actually a lot of books and tutorials out there. If you wanted to learn “practical” code then I will suggest you start with tutorials on writing kernel modules. These are the least risky to experiment with, the most convenient to experiment on, and lets you get an idea of kernel programming without needing to know a lot about interrupt tables. You might find a tutorial on modules which introduces concepts of atomic versus non-atomic code.

Incidentally, since hardware interrupts require an actual physical wire to trigger, only a CPU core the wire reaches can be used for that driver. All CPU cores can execute software interrupts, since those just start in memory and do not talk directly to actual hardware. If you have too much hardware IRQ activity and the available core for that hardware cannot service the interrupt in time, this is called “interrupt starvation”. Thus it is best to have the smallest possible piece of code run during a hardware IRQ, and to hand off any other work not directly bound to the hardware to a soft IRQ. I mention this because, although it is not the bug you ran into, there is a resemblance between code blocked from running because there simply are not enough resources and code blocked from running due to a bug touching something in an atomic code section. Atomic code sections occur in both hardware and software drivers.
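The per-core IRQ routing described above can be observed on any running Linux system through standard procfs files (which IRQ numbers exist varies per board, so the example guards against absent entries):

```shell
# Sketch: /proc/interrupts has one row per IRQ with a count column per CPU
# core, so you can see which cores actually service which hardware IRQs.
head -n 5 /proc/interrupts

# /proc/irq/<n>/smp_affinity is a CPU bitmask showing (and, as root,
# setting) which cores are allowed to service IRQ <n>.
cat /proc/irq/1/smp_affinity 2>/dev/null || echo "IRQ 1 not present"
```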


Mind = blown. Well, thank you again! Knowing these basics will kickstart my attempts to work on the Linux kernel. Very much appreciated!

Considering what you said, especially that the problem could be in some unrelated part of the kernel, I now suspect the WiFi kernel module to be the culprit. It’s been causing problems for months. While booting our Jetson TX2-based device, it already restarts three times. It also restarts every time it leaves the coverage of one WiFi network and needs to roam to another.

# grep -niI 'Dongle Host Driver, version' /var/log/messages 
899:Oct 13 12:48:17 localhost user.warn kernel: [    8.532308] Dongle Host Driver, version 1.201.82 (r)
968:Oct 13 12:48:24 localhost user.warn kernel: [   23.941350] Dongle Host Driver, version 1.201.82 (r)
1035:Oct 13 12:48:25 localhost user.warn kernel: [   24.918091] Dongle Host Driver, version 1.201.82 (r)

A test case would probably be to unload that kernel module and see if that gets rid of the kernel panics. Testing is a bit hard, though, because these kernel panics are very sporadic: sometimes they appear constantly, and then not at all for quite some time.
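As a small aid for such a test run, one can at least verify whether the suspect module is currently loaded. A sketch; bcmdhd is the module name taken from the panic’s “Modules linked in:” line, and the helper name is made up:

```shell
# Sketch: check whether a module is currently loaded, so a test run with
# the WiFi driver removed (rmmod bcmdhd, or a modprobe blacklist entry)
# can be verified. /proc/modules lists one loaded module per line.
# The helper name is_loaded is made up for illustration.
is_loaded() {
  if grep -q "^${1} " /proc/modules 2>/dev/null; then
    echo "$1 loaded"
  else
    echo "$1 not loaded"
  fi
}
is_loaded bcmdhd
```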

@nvidia support:

This conversation has diverged from the initial topic a lot, but I would still ask you to answer the questions in the initial post of this thread. I couldn’t find details on the eMMC wear leveling anywhere and am still interested.

This is quite possible. Two different software sections need to interact before one can try to illegally change context within an atomic code section. The question would be whether WiFi triggers a bug in the other driver, or whether the other driver is reacting to something the WiFi has done illegally related to forcing preemption in other code’s atomic section.

Unfortunately I cannot answer the wear leveling question. All I can say is that the bug is unlikely to be hardware-related.


Good that we agree. We’ll pay attention to our WiFi driver ASAP.

@nvidia: Again, I would welcome some official information on the wear-leveling capabilities of the Jetson TX2’s eMMC. Couldn’t find anything, anywhere.