We’ve been having an issue with the DevKit + TX2 where certain USB audio capture devices (plugged into the micro USB port via OTG cable) which aren’t powered externally “lose” their power at a regular interval.
We put a 1 A load on it and VUSB is a stable 4.88 V under load, but the enable keeps cutting out / power cycling.
We probed USB0_EN_OC (Jetson pin A17) and that appears to be the culprit. It's enabled for 6-7 seconds, then disabled for 1-2 seconds, then re-enabled for 6-7 seconds, and so on.
Does anybody know what’s going on? It seems like some SW behavior, but the reasoning doesn’t make much sense to us.
Just FYI, USB 2.0 current output is rated at 500 mA for a self-powered port (and the micro-USB is self-powered). Charging ports might go to 1.5 A. If power is cutting out at 1000 mA and then restoring, then it is working the way it should. I'd also wonder whether, if you defeat the current-limit mechanism, the rail might end up permanently damaged.
It did not work. To be clear, it's the "enable" pin (A17) that is de-asserting and causing the port to power down, and that pin itself shouldn't have any load on it. What pin would the Jetson use to "sense" an overdraw and disable A17, if the over-current disable in the DTB were enabled? There doesn't seem to be any feedback path to the Jetson to sense this.
Also, the behavior persists with no load on the system at all; just probing the enable pin, it still cycles.
Is there anything in the SW that could be disabling this A17 pin?
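In case it helps narrow it down, here's a rough sketch of how we could check what the live device tree says about VBUS / over-current for this port. The search term is a guess, not the exact property name on any particular L4T release:

```shell
# Hedged sketch: search the flattened device tree the kernel exports at
# /proc/device-tree for anything mentioning vbus. Which nodes/properties
# exist varies by L4T release, so treat "vbus" as a starting guess to
# refine (e.g. also try "over-current" or "oc").
DT=/proc/device-tree
if [ -d "$DT" ]; then
  # -l lists only the file (property) paths that match
  hits=$(grep -rils 'vbus' "$DT" 2>/dev/null)
  printf '%s\n' "$hits"
  status="searched $DT"
else
  status="no $DT here (not running on the target)"
fi
echo "$status"
```

On a hit, the property path usually tells you which controller node it belongs to, which is what you'd compare against the DTB source.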
Yeah, I had thoughts about this too, but it doesn't seem to be load-related, for two reasons:
The enable pin still disables with no load on the USB port.
There doesn't seem to be any feedback to the Jetson about overdraw, so it's unclear how the Jetson would know to cut the enable pin.
We're using a TPD3S0x4 on our product to limit current to 1.5 A, but since this is happening with the dev kit too (an NVIDIA-supported device), we're trying to debug there first.
Seems like something is driving this pin low, since there's an internal pull-up. It has to be something in SW, but we don't see a way for it to sense OC to do so.
There is more than one way to deal with overload protection, and I'm not positive what the Jetsons do for this. It might be active, or it might be a polymer fuse. A polymer fuse trips when it heats, then cools and re-forms; this can take a day, and if current is restored while it's still warm, it re-trips more easily. If you take a Jetson that has had a day to cool down, does this pulsing still occur?
Also, I don't know what the rail itself can handle. Keep in mind that the 500 mA limit should never be exceeded on a single unpowered USB port (or across all ports combined if an unpowered hub is on the port), but the rail that provides that 500 mA may be powering more than that one port. Each USB PHY would be expected to enforce a limit, but the rail behind those limits has limits of its own and most likely supplies more than just that one port.
The micro-USB isn't a charging port, so 1.5 A is out of the question on this USB2 port.
This happens on every Jetson, and it's too consistent for me to think it's any sort of fuse: 6-7 s on, 1-2 s off. Freshly powered up or on for an hour, same behavior. As soon as it boots, the power kicks on, lasts 6-7 seconds, and then the cycle starts. I also wonder whether it's active or not; I don't see how it could be, as I don't see any feedback from our current-limit chip.
I'd say don't focus so much on the current, as this is happening on an unloaded system. The enable pin just shuts off in a set pattern.
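To show how consistent the pattern is, here's the arithmetic I'm doing on the captured edges. The timestamps below are made-up stand-ins for a real trace (which would come from a scope or a GPIO-polling loop), just to illustrate the calculation:

```shell
# Hypothetical transition log: "timestamp state", one edge per line.
log='0.0 on
6.5 off
8.0 on
14.8 off
16.2 on'

# For each edge after the first, the elapsed time since the previous edge
# is how long the pin spent in the previous state.
durations=$(printf '%s\n' "$log" | awk '
  NR > 1 { printf "%s for %.1f s\n", prev_state, $1 - prev_t }
  { prev_t = $1; prev_state = $2 }')
printf '%s\n' "$durations"
# prints:
#   on for 6.5 s
#   off for 1.5 s
#   on for 6.8 s
#   off for 1.4 s
```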
I'm just wondering if some over-current condition might have caused something that's sticking around. If there are no devices connected to USB at all, what do you see with "lsusb"? Also, given that the micro port is USB2, when you run "lsusb" you should see the USB2 root hub:
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
You can see detailed information on that specific HUB with:
sudo lsusb -d 1d6b:0002 -vvv
I don’t know if it is convenient, but you might try to run that and see what lsusb shows differently between the on times and off times. You only need a subset of that, so I’m going to suggest you install “gawk” (“sudo apt-get install gawk”), and then run this while also watching the rail voltage:
while :; do echo ""; lsusb -d 1d6b:0002 -vvv | gawk '/^Hub Descriptor/,/Remote Wakeup/'; sleep 1; done
I don’t know if there is something in “/sys” which could be monitored (power rails usually have status there, but I don’t know for this specific rail), but if there is it could be added in that above command. If you see something change, then you could hit control-c and stop the loop. The console should have some history in it and you can scroll back and copy and paste if something changes or looks interesting.
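As a concrete starting point for the "/sys" idea, here's a sketch that scans the regulator class for anything whose name looks USB-related. I don't know the actual regulator names on the TX2, so the usb/vbus filter is an assumption to adjust:

```shell
# Hedged sketch: walk /sys/class/regulator and report name + state for
# entries that look USB/VBUS related. Regulator names differ per board,
# so the pattern in the case statement is a guess, not the real names.
found=0
for r in /sys/class/regulator/regulator.*; do
  [ -d "$r" ] || continue
  name=$(cat "$r/name" 2>/dev/null)
  case "$name" in
    *usb*|*vbus*|*USB*|*VBUS*)
      state=$(cat "$r/state" 2>/dev/null)
      echo "$r: $name ($state)"
      found=$((found + 1))
      ;;
  esac
done
summary="matched $found regulator entries"
echo "$summary"
```

If one of those entries flips state in sync with the pin, that's the rail to chase, and it could be cat'ed inside the lsusb loop above.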
Perhaps there is some other interaction going on, e.g., sleep (I'm just speculating). Another possibility: in host mode power is delivered, while in device mode power would normally be consumed (but since the device end is self-powered, power delivered to the connector would be ignored at the Jetson end, and the power source would be the host if the Jetson is a device). In case the Jetson itself is in some sort of power-savings mode, you might run this before checking rail voltages:
sudo nvpmodel -m 0
(in releases prior to R32.1 you need to use the full path to the commands)
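It's also worth querying the current mode before forcing it, so you know whether anything actually changed. `nvpmodel -q` is the query flag on L4T; the fallback branch below only exists so the sketch runs on a non-Jetson machine:

```shell
# Query the active power model first, then force MAXN (mode 0) so power
# gating isn't in play.
if command -v nvpmodel >/dev/null 2>&1; then
  sudo nvpmodel -q    # show the current mode
  sudo nvpmodel -m 0  # switch to MAXN
  note="nvpmodel invoked"
else
  note="nvpmodel not found (not a Jetson)"
fi
echo "$note"
```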
The sleep mode is a good point, but we're already using model 0, so it's at least not related to those modes; maybe it's something USB-specific.
In 28.1, nvpmodel was in the path, but the jetson_clocks script wasn't. That's just a side note, though.
I do see the root hub, yes.
I ran that command looking for changes and didn't see any at all unloaded, but I can't see voltage, only current, and current is obviously 0 unloaded.
I'll try what you suggested loaded and look for any changes. Unfortunately it won't be for a few weeks; I timed this question badly and am stepping out without access to test equipment. I'll try it when I return.
It definitely seems SW-related, that's for sure, as it's happening both on our custom HW that uses the Jetson and on the NVIDIA dev kit / carrier. But we just don't see how it could be an overdraw: there's no feedback to the Jetson, and enable pin A17 doesn't have a load, so it shouldn't be that.
Hmm, I thought there was no load on A17, only enable. Let me re-check & check A18 too.
Let me check.
Same, I don’t see those over-current messages.
I have a snippet of the dmesg log posted; I don't see any of those messages about over-current. We're seeing this disable behavior on A17 without a load, so I'm still having a hard time understanding how it could be a load issue. In the dmesg log there does seem to be something happening with USB. The "cable state 2" is expected, I think (?), since the device is connected via an OTG cable, but I'm not sure what the "5" command means.
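For reference, this is the sort of filter I'm running against dmesg while the pin cycles. The sample lines below are made-up stand-ins for the real log (on the Jetson you'd pipe actual `dmesg` output through the same grep):

```shell
# Hypothetical dmesg excerpt; only the first two lines should match the
# patterns we care about (over-current events and cable state changes).
sample='[  12.345678] usb usb1-port2: over-current condition
[  13.000000] extcon-gpio-states extcon: cable state 2
[  20.000000] some unrelated driver message'

# -c counts matching lines, -i ignores case, -E enables the alternation
matches=$(printf '%s\n' "$sample" | grep -icE 'over.?current|cable state')
echo "matched $matches line(s)"
# On the target: dmesg | grep -iE 'over.?current|cable state|vbus'
```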