I have an HDMI to CSI bridge board with a Toshiba TC358743. I’ve been able to capture video, and now I’m working on capturing audio. The chip sends the audio out via I2S over 4 pins: SCK (bit clock), WFS (word clock), SD (data), and OSCK (oversampling clock).
I’ve read that you can configure the pins on the 40-pin header for this by modifying the dtb.
I’ve tried using the jetson-io.py tool, but the problem is that it rebuilds the dtb from scratch. Since I’m using a custom dtb which has the configuration for the TC358743 compiled into it, that method doesn’t work.
I’ve modified the tegra194-p3668-all-p3509-0000-hdr40.dts in the source code, as it seemed to have the configuration for the 40-pin header. I modified pins 12, 35, 38, and 40 like this.
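For anyone following along, a pinmux entry that routes I2S5 to the 40-pin header typically looks something like the sketch below. The node and pad names (`dap5_*`) are my assumption based on publicly available Jetson-IO-generated overlays for the Xavier NX; verify them against the pin nodes already present in your copy of tegra194-p3668-all-p3509-0000-hdr40.dts before using any of this.

```
/* Hypothetical sketch only -- node names, pad names, and property
 * values must be checked against the actual hdr40 dts for your
 * L4T release (constants come from dt-bindings/pinctrl). */
hdr40_pinmux {
    pin12 {  /* header pin 12: I2S5 bit clock (SCLK) */
        nvidia,pins = "dap5_sclk_pt5";
        nvidia,function = "i2s5";
        nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
        nvidia,tristate = <TEGRA_PIN_DISABLE>;
        nvidia,enable-input = <TEGRA_PIN_ENABLE>;
    };
    /* pins 35 (FS/word clock), 38 (DIN), and 40 (DOUT) follow the
     * same pattern with their own dap5_* pad names */
};
```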
I recompiled the dtb and loaded it onto the Xavier NX, but some questions I have now are:
How do I know the pins on the 40-pin header are configured correctly?
In other words, how do I know the changes I made in tegra194-p3668-all-p3509-0000-hdr40.dts were applied?
Since I just configured each pin for i2s5 but nothing further, how do I know which pin is the clock line versus the data line?
I’ve seen some people use i2s2 and i2s3, but when I used the jetson-io tool it chose i2s5. Is there a difference between all of these?
Ensure that the bridge board is sending I2S data by probing with an oscilloscope. Once the bridge board is confirmed to be sending data, try the mixer settings below and capture the data.
I probed the bridge board with an oscilloscope, but to my surprise I’m not getting any readings for the clock, data, or word clock on the I2S headers. These should all be outputs. I would assume that when I am capturing video these outputs would be sending data? The Toshiba datasheet does not specify how these work exactly, only that they are outputs. The chip is a ball grid array, so I have to assume the traces to the headers are wired up.
What is interesting is that I do have signals from both FS (pin 35) and SCLK (pin 12) on the Xavier side. I was thinking these would be input pins, since the corresponding ones from the chip/bridge board are outputs. Am I thinking about this incorrectly?
By the way, I’m using a separate system to build kernels, drivers, and the device tree. Instead of building the overlay and then applying it to the dtb, can I include the dts somewhere in the source code so it gets built into the dtb? For example, the device tree portion for the bridge board uses a dtsi file that gets pulled into the dtb when built.
@atalambedu
Thanks for the info.
I took a look at the spec sheet for the chip, and it says the I2S interface “Support Master Clock mode only”. I’m thinking the chip needs to be the master and the Xavier would be the slave device on the I2S bus, but I’m not sure. At this point it doesn’t seem like it would make much difference, as I’m not getting any signals from the I2S anyway.
FYI, Jetson I2S supports both master and slave mode. However, I understand that you are yet to get signals from the bridge board. If your queries are answered for now, you can close this thread as solved.
When I record, I just get loud clicking/static noise. The device is outputting at a rate of 48000 Hz and a format of signed 24-bit big-endian. When I try to use S24_BE for the format, it tells me:
```
arecord: set_params:1299: Sample format non available
Available formats:
```
There is no direct way to capture 24-bit big-endian data.
But before we try the indirect ways, could you try an offline conversion of the recorded S24_LE data (note that S24_LE is a signed 24-bit little-endian integer, packed into a 32-bit integer so that the 8 most significant bits are ignored) to 24-bit big-endian and play it once, to ensure there is no other issue apart from the endian difference.
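The offline swap described above can be sketched in a few lines of Python. This is just an illustration of the byte reordering, assuming raw S24_LE frames with no WAV header; the function name is mine, not part of any tool mentioned in the thread.

```python
def s24le_to_be(data: bytes) -> bytes:
    """Convert raw S24_LE frames (24-bit samples packed into 32-bit
    little-endian containers, top byte ignored) to packed 24-bit
    big-endian samples by reversing the three significant bytes."""
    out = bytearray()
    for i in range(0, len(data), 4):
        b0, b1, b2, _pad = data[i:i + 4]  # little-endian: LSB first
        out += bytes((b2, b1, b0))        # big-endian: MSB first
    return bytes(out)

# One sample with value 0x123456, stored as S24_LE in a 32-bit container:
sample = bytes((0x56, 0x34, 0x12, 0x00))
print(s24le_to_be(sample).hex())  # -> 123456
```

Note this also drops the pad byte, so the output is packed 3-byte samples rather than 32-bit containers.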
Please attach both waveforms for our reference as well.
Thanks for the thoughts on this. It seems strange to me that most of these chips send the I2S data in an MSB-first/big-endian format, but the Jetson/arecord will only accept an LSB/little-endian format. How are developers handling this?
Thanks for the suggestion of using SoX as a conversion utility.
So, the problem that I think I’m having is that the I2S data is being transmitted as big-endian, but the only option with arecord or ffmpeg is to capture it as little-endian.
What is produced is a .wav file that is labeled as little-endian but has big-endian data in it.
When I used SoX to do a conversion to little-endian, I believe it gave me the same file. I’ve attached the two files here.
For fun I tried to convert the file to big-endian format, but it produces a wav file that won’t load. (Just noticed after I uploaded it that the file doesn’t load in the media player.)
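One detail worth noting about the “labeled as little-endian” observation: the WAV `fmt ` chunk has no endianness field at all. PCM samples in a RIFF/WAVE file are little-endian by definition, which is why a file full of big-endian samples can still carry a normal-looking header but play as noise. A quick sketch that builds a minimal 24-bit PCM WAV header (field layout per the RIFF spec; the function name and defaults are mine):

```python
import struct

def wav_header(num_channels=2, sample_rate=48000, bits=24, data_len=0):
    """Build a minimal 44-byte RIFF/WAVE header for PCM audio.
    All header fields are little-endian ('<' in struct), and there is
    no field anywhere to declare big-endian sample data."""
    byte_rate = sample_rate * num_channels * bits // 8
    block_align = num_channels * bits // 8
    fmt = struct.pack("<4sIHHIIHH", b"fmt ", 16, 1,  # 1 = PCM
                      num_channels, sample_rate, byte_rate,
                      block_align, bits)
    data = struct.pack("<4sI", b"data", data_len)
    riff = struct.pack("<4sI4s", b"RIFF",
                       4 + len(fmt) + len(data) + data_len, b"WAVE")
    return riff + fmt + data

hdr = wav_header()
print(hdr[:4], hdr[8:12])  # -> b'RIFF' b'WAVE'
```

So a byte-swapping step (SoX, or something like the earlier sketch) has to happen on the sample data itself; rewriting the header can’t fix the endianness.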
So I’ve figured out what is going on. My source audio was encoded with AC3. When I use source audio that is PCM, it works.
I’d like to understand the following:
How could I use different audio encodings seamlessly, e.g. so it wouldn’t matter if the audio content was PCM or Dolby?
When the I2S comes from the source as big-endian, how is it converted to little-endian format for .wav?
When you configure the audio settings with amixer, is it persistent? This seems to be true, as I’ve rebooted the Jetson and I didn’t have to make changes.
How could I use different audio encodings seamlessly, e.g. so it wouldn’t matter if the audio content was PCM or Dolby?
The I2S interface is intended for PCM playback/capture. Since AC3 is a compressed format, we are unable to support it over I2S.
When the I2S comes from the source as big-endian, how is it converted to little-endian format for .wav?
The supported I2S formats can be found here. The AMX/ADX APE modules help alter the byte map if needed. Refer to the Usage and Examples section and the AMX/ADX sections in the link to find more information about the usage of these modules.
When you configure the audio settings with amixer, is it persistent? This seems to be true, as I’ve rebooted the Jetson and I didn’t have to make changes.
Yes, it is persistent, as the ALSA controls are stored during shutdown and restored during boot.
Please initiate a new forum query if there are other queries, as this one is already marked as solved.