Unable to mount more than 10 USB devices

Hi everyone!

I recently purchased an Nvidia Jetson Nano, and while it has overall been pretty awesome, I’ve had a bit of trouble moving over my RAID-Z from my previous setup, an ODroid XU4.

My ODroid setup was 12 USB 3.0 hard drives (plugged into 3 powered USB 3.0 hubs), combined into a RAID-Z pool with ZFS in userspace.

I had no trouble getting ZFS set up and working on the Jetson, but when I tried bringing the RAID up on the board, I was a bit surprised to see that only 10 of the drives show up when I do ls /dev/sd?. I’m still able to mount the RAID, actually, but it’s in a highly degraded state since, unsurprisingly, it can’t see two of the necessary drives.
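For reference, this is roughly how I was checking it (the pool name "tank" is just a placeholder for my actual pool):

    # list the disks the kernel has enumerated
    ls -l /dev/sd?
    lsusb -t

    # see which members of the pool are missing
    sudo zpool status tank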

I have tried a bunch of permutations of rebooting the device to make sure the drives (and their SATA cables) weren’t broken, and each individual drive works fine when plugged into the Jetson on its own (not to mention that the setup still works on my ODroid).

I was curious if anyone here knows of something I’m doing obviously wrong? I checked and made sure I was running in 10W mode (thinking that maybe USB devices are limited in low-power mode). Is there some configuration somewhere that sets the maximum number of USB drives mounted? Do I have to muck with kernel settings?

Hi,
You are hitting the endpoint (EP) limitation. It is a hardware limitation on all Jetson platforms.
A similar topic:
https://devtalk.nvidia.com/default/topic/1057013/jetson-tx2/usb3-xhci-driver-device-count-limitation/post/5359521/#5359521
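If it is the endpoint limit, dmesg should show the xHCI driver refusing the extra devices with a message along the lines of "Not enough host controller resources for new device state" (exact wording can differ between kernel versions). A rough way to check:

    # look for the xHCI controller running out of resources
    dmesg | grep -i "host controller resources"

    # rough count of endpoints consumed by the devices that did enumerate
    lsusb -v 2>/dev/null | grep -c bEndpointAddress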

Oh darn, that’s a bit depressing; looks like I finally have an excuse to set up HDFS :).

What is the performance like for ZFS over USB? I heard its performance is highly dependent on the amount of RAM available, with about 1GB per TB being optimal. I’m curious about the sustained reads/writes on such a setup, if you have benchmarked it.

I didn’t benchmark it thoroughly; the closest thing I have is some anecdata from when I would rsync huge Blu-ray rips to it, which peaked at about 22MB/s (emphasis on the capital B). The XU4 only had 2GB of memory, so I don’t know how much faster it would be with more RAM.
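If I wanted a proper number, something like this quick sequential test would probably be the minimum (the mount point /tank is just a placeholder; fio would give more rigorous results):

    # sequential write of a 1 GiB test file, flushing to disk before the timer stops
    # (note: if the dataset has compression enabled, zeros compress away and inflate the number)
    dd if=/dev/zero of=/tank/testfile bs=1M count=1024 conv=fdatasync

    # drop the page cache, then do a sequential read
    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
    dd if=/tank/testfile of=/dev/null bs=1M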

That said, I never really had any issues with it. It’s a bit sad that I need to trash the ZFS setup to move to something more complicated, since I just gave away my XU4.

Are all the disks in your array identical? If so, you might consider just using plain old RAID-6. It’s what I use on my server (which is mostly for backing up my Blu-rays, same as you). The write performance is terrible because of parity checking, but the sustained read speed beats many SSDs.
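For what it’s worth, a plain mdadm RAID-6 across a pile of disks is only a few commands; this is just a sketch with placeholder device names, not a copy of my setup:

    # create a 12-disk RAID-6 array (two disks' worth of parity)
    sudo mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[a-l]
    sudo mkfs.ext4 /dev/md0

    # persist the array definition so it assembles at boot
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf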

They are all more-or-less identical: 12x 1TB 2.5" spinning disks… different brands, though.

But in this particular case, since I can’t plug all 12 drives into one system, as the Nvidia mod seemed to imply, I can’t use any traditional RAID, sadly. I bought 5 Jetson Nanos, so I’m going to look at HDFS or GlusterFS and distribute the load across the cluster. It’s not ideal, but I was planning on doing some Apache Spark work anyway, so getting a Hadoop cluster set up isn’t that weird.

Inherit a bunch of old laptops? You sound a lot like me. My spouse is constantly complaining that I have too many drives (and enclosures). It’s kind of silly; I never throw any drives away.

I did something similar with a cluster of Raspberry Pis once upon a time, but wasn’t happy with the performance. It’s probably better with USB 3, gigabit Ethernet, and a faster CPU, however. Looking forward to hearing how you set it up and how it performs in the end.

> Inherit a bunch of old laptops? You sound a lot like me. My spouse is constantly complaining that I have too many drives (and enclosures). It’s kind of silly; I never throw any drives away.

Heh, no, not exactly; laptop hard drives are just cheaper overall on eBay, and if you have enough of them they’re typically pretty fast. For what I’ve been doing they’ve been fast enough, when they break they’re cheap enough to replace, and they draw a lot less power than equivalent 3.5" drives.

> I did something similar with a cluster of Raspberry Pis once upon a time, but wasn’t happy with the performance. It’s probably better with USB 3, gigabit Ethernet, and a faster CPU, however. Looking forward to hearing how you set it up and how it performs in the end.

I can share the scripts I wrote in a PM if you want; I was using Docker Swarm with the ZFS mount shared over NFS, and using the filesystem as a very simple queue to transcode and compress my Blu-ray rips across all the nodes in the cluster. I was doing software transcoding with x264, getting around 12fps per ODroid, but since I could be transcoding up to 5 things at once, it wasn’t too bad to compress a bunch of stuff. Part of the reason I bought the Jetsons was that they have hardware-accelerated HEVC encoding, which will speed this up considerably.
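The gist of the worker script was something like this; the paths, extensions, and encoder settings here are placeholders rather than my exact script:

    #!/bin/bash
    # Each node pulls one file at a time from a shared queue directory on the NFS mount.
    QUEUE=/mnt/zfs/queue
    WORK=/mnt/zfs/work/$(hostname)
    DONE=/mnt/zfs/done
    mkdir -p "$WORK" "$DONE"

    for f in "$QUEUE"/*.mkv; do
        # mv (a rename) is effectively atomic, so only one node "claims" each file
        mv "$f" "$WORK/" 2>/dev/null || continue
        name=$(basename "$f")
        # software encode with x264; copy the audio stream untouched
        ffmpeg -i "$WORK/$name" -c:v libx264 -preset slow -crf 20 -c:a copy "$DONE/$name"
        rm "$WORK/$name"
    done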

I don’t think WD sells them anymore but my server is stuffed with WD Greens. They’re pretty low power and are configured to shut themselves off if not in use.

Thanks for the offer, but I don’t need to transcode since I don’t stream remotely. If I did, I would probably still use software encoding, since the quality will always be better and the server’s CPU can handle it. I did have a Plex server handling all that for a while so I could transcode and stream from anywhere, but it never ended up getting any use, so I uninstalled it. Instead I just scp a file over if I really want to watch something from the server.

Good luck with your software for the Jetsons, though. GStreamer can certainly do what you want it to.
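If it helps, the hardware path on the Nano is a gst-launch pipeline roughly along these lines; the element names vary between JetPack releases (omxh265enc on older L4T, nvv4l2h265enc on newer), so treat this as a sketch rather than something copy-pasteable:

    gst-launch-1.0 -e filesrc location=input.mp4 ! qtdemux ! queue ! h264parse ! \
      omxh264dec ! omxh265enc bitrate=8000000 ! h265parse ! qtmux ! \
      filesink location=output.mp4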

To anyone interested in how I resolved this: I ended up dividing all my hard drives across my different Jetsons, and then used GlusterFS to make a distributed filesystem. It’s not perfect, but it works and is quite fast.
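Pared down, the setup was roughly the following; the hostnames, device names, and volume name are placeholders, not my exact layout:

    # on each Jetson: format a drive and mount it as a brick (repeat per drive)
    sudo mkfs.xfs /dev/sda
    sudo mkdir -p /data/brick1
    sudo mount /dev/sda /data/brick1

    # from one node: add the peers, then create and start the volume
    sudo gluster peer probe jetson2
    sudo gluster peer probe jetson3
    sudo gluster volume create media replica 3 \
        jetson1:/data/brick1/media jetson2:/data/brick1/media jetson3:/data/brick1/media
    sudo gluster volume start media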

On the one hand, this sounds awesome!
On the other hand, I’m glad it’s not me who is administering something like that :-)

It’s actually not so bad; I’m not a sysadmin or devops person by trade (mostly just a software engineer and eccentric math enthusiast). This is just a thing I do in my basement for fun. They’ve made stuff like GlusterFS pretty idiot-proof these days, which is good for me :).

If anyone thinks they’d benefit from it, I’d be happy to write a tutorial on how I set this up with Kubernetes and the like.

I’m sure somebody might find it fun. Personally, I’m with Snarky: that sounds like too much work to maintain. I had a Pi cluster doing something similar once upon a time, and I think it lasted about a week before I repurposed the Pis for other projects. It’s neat to set up just to learn how, but for me it wasn’t worth keeping long term.

Well, fingers crossed; I’ve been running it for a week with almost no issues. It’s a bit sad that I only get 1/3 of the total space, but it’s been pretty painless overall. I’ve been wanting to get more into Apache Spark anyway, so the fact that I can make this do double duty with the Gluster Spark drivers will (hopefully) save me the headache of getting HDFS set up.
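In case anyone wants to reproduce it: with replica 3 every file lives on three bricks, which is where the 1/3 usable space comes from, and the clients just mount the volume like any other filesystem (same placeholder hostname and volume name as above):

    sudo mount -t glusterfs jetson1:/media /mnt/media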


Ouch. Are you sure it’s worth it over traditional RAID, given the number of drives you have?
