Jetson TK1

Do you have a component in mind for the USB audio capture? If you can find that bit (with Linux support), it’ll probably all come together. I’ve used multichannel interfaces like this before ([url]http://tascam.com/product/us-1800/[/url]), although those are large compared to the Jetson. Otherwise it sounds like a good application for TK1. You’ll want to implement your audio processing / beamforming in CUDA to achieve the throughput.

Enabling high-speed USB3 on the Jetson is fairly straightforward; here are directions from the wiki: [url]http://elinux.org/Jetson/Cameras[/url]
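If I remember the wiki page correctly, on a stock L4T image the change boils down to a single kernel parameter on the APPEND line of /boot/extlinux/extlinux.conf (check the wiki first in case your release differs); something like this, followed by a reboot:

sudo sed -i 's/usb_port_owner_info=0/usb_port_owner_info=2/' /boot/extlinux/extlinux.conf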

Hopefully you’ll be able to get all 6 devices streaming smoothly using the generic driver (good luck, and let us know how it goes). In case your sound card’s driver has quirks with a particular API, I like using RtAudio (an open-source project: [url]http://www.music.mcgill.ca/~gary/rtaudio/[/url]) so that you can easily switch your application between ALSA/Pulse/OSS.

You shouldn’t be posting the same post to several sub-forums.

I answered in the other sub-forum as I happened to read that first:

[url]https://devtalk.nvidia.com/default/topic/805526/linux-for-tegra/jetson-tk1-microphone-array/post/4429188/#4429188[/url]

But please don’t reply there; keep the discussion in this thread. And as said above, it’s interesting to follow other Jetson projects, so keep us in the loop and don’t hesitate to ask more questions :)

When using a “normal USB audio card”, the Jetson’s onboard integrated audio card is not used at all. I’m not really sure how the “USB Analog to Digital Audio Converter” you linked above works.

If by “processing” you mean filtering etc. of the audio signal, that’s often not done in the sound card anyway but in the CPU (or GPU), and for that it doesn’t matter whether you use an external USB sound card or the Jetson’s integrated one.

I tried to google for USB audio devices with multiple inputs but couldn’t find anything. There are some devices with stereo input that might already help a bit. There’s also a bunch of really small USB sound cards with mono input starting from $1 on eBay, but in your case you might want to aim for some quality in the sound cards to be able to filter out the noise and detect the actual sounds.

Do you have an Ubuntu PC? It should behave quite similarly to Jetson.

I’m listing some commands here that work on my Debian but they should be the same for any Ubuntu. There are multiple ways to do this but I’m using command line tools as I assume they might be the most suitable for your use case. You could of course also write your own program to record and optionally filter all 6 cards at the same time.

List all capture cards:

arecord -l

Configure inputs and volumes (replace <N> with a card number from the arecord output):

alsamixer -c <N> -V capture

And then try recording something (device <X>, subdevice <Y>; mono, 44.1 kHz, signed 16-bit, a new file every hour):

arecord -D hw:<X>,<Y> -f S16_LE -t wav -c 1 -r 44100 --disable-resample --disable-channels --disable-format --disable-softvol --max-file-time=3600 --use-strftime %Y-%m-%d-%H-%M-%v.wav
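If you want all 6 cards capturing at the same time, a quick-and-dirty approach is to launch one arecord per card in the background. A minimal sketch, assuming the cards show up as hw:1 through hw:6 (substitute whatever numbers arecord -l reports):

for N in 1 2 3 4 5 6; do arecord -D hw:$N,0 -f S16_LE -t wav -c 1 -r 44100 --max-file-time=3600 --use-strftime card$N-%Y-%m-%d-%H-%M-%v.wav & done

Stop them with e.g. killall arecord. Note that the streams started this way won’t be sample-accurately aligned; for anything like beamforming you’d want a single program reading all the devices.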

Not all cards provide mono (even though they really are mono only) or 44.1 kHz, so you need to try what your card actually supports. It may also take some alsamixer tweaking of the microphone settings before you can hear anything. Once you get that working, try rebooting the Jetson to make sure it stores the settings properly. You can also use alsactl to explicitly store and restore the settings for your setup.
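For example (by default these write and read /var/lib/alsa/asound.state, so run them as root):

sudo alsactl store
sudo alsactl restore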

Thanks for the update! It’s always interesting to hear about the progress of different projects :)

I don’t know much about audio but with GStreamer it should be quite easy to record audio, run it through any filtering plugins and store it to disk.

At least the “audiofx” plugin seems to provide various filters:

gst-inspect-1.0 audiofx
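For example, a pipeline that captures from an ALSA device, runs the audio through one of the audiofx filters, and writes a WAV file could look roughly like this (hw:1,0 and the 200 Hz high-pass cutoff are just placeholders for your setup):

gst-launch-1.0 alsasrc device=hw:1,0 ! audioconvert ! audiocheblimit mode=high-pass cutoff=200 ! audioconvert ! wavenc ! filesink location=filtered.wav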

There are several ways to accomplish this; it mostly depends on how proficient you or your team are at writing software.

  1. The first way such a task could be accomplished is to treat the Lidar as a separate audio track and write the information from the Lidar as fake, encoded audio samples. You have to match the Lidar sample size to the audio sample size so the two stay in sync. When you go to play the samples back, you play the audio track(s) through the audio system, and then read and visualize the Lidar information. Naturally, you’d have to write some custom software, but I think you’re expecting that.

  2. Write the Lidar data to a separate file with some sort of synchronization element so that the audio and Lidar information stay synchronized. This is probably the easiest route to take if frame-level synchronization is not critical (and it doesn’t seem to be, given your use case; you can probably be off by 100 ms or so). It also makes playback easier: the audio can be played normally, and you can use the audio time to index into the Lidar data. Note: the synchronization element can be as simple as starting the Lidar recording at ‘the same time’ with a known frame size on the Lidar data, so that you can translate between the audio time and the Lidar data easily (see the small calculation after this list). For example, if you want to scrub to time 03:45:32 in the audio, you need to be able to find the corresponding Lidar information for that time.

  3. The way most computer video/audio formats work and accomplish this task can be thought of as ‘containers’. At a high level, a time code stamp labels a container, which holds information such as a video frame, an audio frame, and/or other data elements. A typical data element you’re probably familiar with is closed captioning/subtitling. It’s also how DVDs and such do multiple languages: the user is simply picking which elements in the container to ‘play’. When playing a resource, you can think of ‘time’ running; the program translates it to the time stamp of the associated container, decodes the information inside, and presents it to the user on the appropriate output device. Obviously it’s much more complicated than that due to things such as compression algorithms for audio and video, but that’s the high-level idea. This is probably the most thorough technical approach, which on the development end means that you have to write or use a codec (compressor/decompressor) for the Lidar information to add it to the audio/video data stream. Note that it’s possible to have ‘movie’ files without video tracks; a lot of people use the flexibility of these formats to do all sorts of ‘interesting’ things. The formats have names such as MPEG, QuickTime, and Ogg, but there are a large number of them.
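To make the time-to-frame bookkeeping in option 2 concrete, here’s a toy calculation (the 10 Hz Lidar frame rate is just an assumed example):

echo $(( (3*3600 + 45*60 + 32) * 10 ))

This prints 135320: the Lidar frame index corresponding to audio time 03:45:32 at 10 Lidar frames per second. The same arithmetic in reverse (frame index divided by frame rate) maps a Lidar frame back to an audio timestamp.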

That should get you started in your exploration.

Am I correct that all of the audio cards are connected through the same full-sized USB port on the Jetson?

Have you considered the mini-PCIe slot for a second USB controller? (I don’t know if this is a “total resources” issue or an issue specific to this particular USB port.)

If you get a micro-A connector, you might be able to also offload to the micro port, e.g., this:
http://www.shopfujitsu.com/store/SelectAccessoryDetail.jsp?partNumber=FPCCBL75AP&category=Cables%20and%20Adapters&title=MicroUSB%20to%20USB%20Conversion%20Adapter%28FPCCBL75AP%29&mkw=&pmt=&gclid=CjwKEAjwmZWvBRCCqrDK_8atgBUSJACnib3lEfI71BrTfG7ZnXkfSpRReN276zNcWc8yJ5TrEO5EERoCqOzw_wcB&cshift_ck=6d60c883-fa5c-4f6d-9738-cd65aaf1f0b5csXXQvxnjc

I plan on doing some micro port testing and development, but haven’t been able to check this cable yet (it’s on order, no telling when it will arrive).