Extracting EBT and DTB from blob file

Hello, I have a bricked NVIDIA Shield (2015) which is in APX mode. I’ve been going through this article, and I’m stuck on the step where you have to extract firmware from the blob file using a hex editor. If anyone could help me I would really appreciate it, because I have never used a hex editor before!

Here’s what he explains, but I’m unable to understand it; I don’t know how to actually extract the files.

Article by Yifan Lu: Unbricking SHIELD TV (2015) with a Bootrom Exploit

"‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’
Extracting the firmware​

Next, I needed the stock firmware to recover to. Luckily, NVIDIA provides stock recovery images for the SHIELD TV. I downloaded the 9.0.0 recovery image for my SHIELD TV (2015) and extracted the .zip file. Next, I had to extract two files from blob which contains the boot-loaders and other data. Opening the file in a hex editor, we see the name of the partitions in ASCII listed in order. By guessing and checking I discovered the structure for each entry is something like:

[image: blob in hex editor]

struct blob_entry {
    char name[4];
    char unknown[36];
    uint32_t offset;
    uint32_t length;
    uint32_t unknown2;
};

I only care about two entries: EBT (the cboot boot-loader used by tegrarcm) and DTB (the partition I corrupted) so there was no need to write a script. I hand-extracted EBT as shield-9.0.0-cboot.bin and both DTB as shield-9.0.0-dtb1.bin and shield-9.0.0-dtb2.bin.
"‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’‘’

I’ve never touched one of these, and am only going to add some information which is perhaps useful, but others will definitely need to help with the device-specific parts. Keep in mind that much of what you are asking about hex editors is not specific to this hardware; hex editors and device trees (so far as procedures go) are more or less uniform across a lot of embedded devices (the actual data is what the device tree customizes, and this is the “dtb”, a device tree in binary format).

A single byte is 8 bits. A hex digit goes from “0” to “f” (or “F”), giving 16 values, 0 to 15 in ordinary base-10 numbering (that’s 4 bits). Thus two hex digits show one byte (8 bits). Notice how the hex editor pairs the digits two at a time, with a space between pairs? Each pair is one byte, anywhere from “00” hex to “ff” hex.

Sometimes the order of bytes is interpreted differently depending on some convention or architecture. If you see a reference to “little endian” versus “big endian”, that refers to whether a number spanning more than one byte is stored least-significant byte first or most-significant byte first. More on that later.

For a given byte, if that byte happens to fall in the standard ASCII character set of common letters/numbers/punctuation, then the pane on the right side of the hex editor shows that translation. This is a single-byte interpretation, with no attempt to treat surrounding bytes as part of anything larger.

Consider the word “NVIDIA” in all capital letters. The hex values of those characters in ASCII are below (the leading “0x” is a traditional way to mark a value as hexadecimal when context doesn’t otherwise make it known; any standard ASCII table shows the full mapping):
N 0x4e V 0x56 I 0x49 D 0x44 I 0x49 A 0x41
…and just as hex:
4e 56 49 44 49 41
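
If you want to see this for yourself, here is a tiny C program (just an illustration, nothing Shield-specific) that prints each character of “NVIDIA” next to its one-byte hex value, the same pairing a hex editor shows:

#include <stdio.h>

int main(void) {
    const char *s = "NVIDIA";
    /* Print each character next to its one-byte hex value. */
    for (const char *p = s; *p != '\0'; p++) {
        printf("%c 0x%02x\n", *p, (unsigned char)*p);
    }
    return 0;
}

Its output is exactly the table above, one character per line.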

Going back to the topic of big or little endian: the word “NVIDIA” consists only of single-byte characters (two hex digits, one byte each; the N and V might mean something together, but they are stored as separate values). A larger number, though, might take many bytes for a computer to represent. There are 16-bit integers, 32-bit integers, 64-bit integers, floating-point variants, signed versus unsigned variants, and so on. All of these “bigger than one byte” values take multiple bytes, and the order of writing those bytes down differs on different hardware (there are two choices: most significant byte sent first, or least significant byte sent first). As an example, ignoring computer types like “int” and “float”, take the number “123”. If both parties know the most significant digit is sent first, then communicating it digit by digit makes the buffer or scratch pad at the other end look like:
1 2 3
(the 100s digit sent first)

However, if we agree to send the least significant digit first, then our buffer or scratch pad would have this written on it:
3 2 1
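
To tie that analogy back to bytes: 123 in hex is 0x7b, so as a 32-bit integer it occupies four bytes, 0x0000007b. A minimal C sketch (illustrative only) dumps those bytes in whatever order your machine happens to store them:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint32_t value = 123;   /* 0x0000007b in hex */
    unsigned char bytes[4];

    /* Copy the integer's in-memory representation; the order the
       bytes land in reveals the host machine's endianness. */
    memcpy(bytes, &value, sizeof value);

    for (int i = 0; i < 4; i++) {
        printf("%02x ", bytes[i]);
    }
    /* Prints "7b 00 00 00" on little endian, "00 00 00 7b" on big endian. */
    printf("\n");
    return 0;
}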

Both represent the same number, 123. Your hex editor won’t distinguish between byte orders in the hexadecimal pane. Some hex editors have options for how to combine bytes into integers, floating point, and so on, but often you won’t find any such options. If such an option exists, it will show up in the right pane of your editor; the left pane will still show raw hex bytes, although it might indicate how bytes are grouped, e.g., via boldface or underlining. The option you will see most often is a choice between big endian and little endian.

If you happen to know a number that is represented by two or more bytes (at least two pairs of hex digits), then you can work out the byte order from how it appears. There is some converting back and forth between hex/binary/base-10 involved, but as an oversimplification: if you have the number “0x12ab34cd”, and you see this in the editor, then it is little endian:
cd 34 ab 12

Note that in this case the individual bytes stay intact (each hex pair is unchanged), but they are stored such that the small/least significant byte comes first. That’s little endian. If you instead saw this same number represented like this:
12 ab 34 cd
…then this is big endian; the most significant byte comes first. This example is a 4-byte word. The same patterns hold true for two-byte words, eight-byte words, and so on. Sometimes you will also run into a “bit order” convention, but in a hex editor it is only ever a byte order convention.
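
In code, the two conventions become explicit when you reconstruct a value from raw bytes. A short C sketch (illustrative, not specific to the blob format) reads the example bytes both ways:

#include <stdio.h>
#include <stdint.h>

/* Assemble four bytes assuming little endian (least significant byte first). */
static uint32_t read_le32(const unsigned char b[4]) {
    return (uint32_t)b[0] | ((uint32_t)b[1] << 8)
         | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

/* Assemble four bytes assuming big endian (most significant byte first). */
static uint32_t read_be32(const unsigned char b[4]) {
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16)
         | ((uint32_t)b[2] << 8) | (uint32_t)b[3];
}

int main(void) {
    const unsigned char b[4] = { 0xcd, 0x34, 0xab, 0x12 };
    printf("as little endian: 0x%08x\n", read_le32(b)); /* 0x12ab34cd */
    printf("as big endian:    0x%08x\n", read_be32(b)); /* 0xcd34ab12 */
    return 0;
}

Only one of the two readings gives the value you expected; that tells you which convention the file uses.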

Often the “endianness” in an EEPROM is independent of the actual computer architecture, with a conversion happening before the CPU is “fed” data in a particular “endianness”. For your EEPROM you will need to write bytes matching that EEPROM’s convention. I don’t know what that will be, but if there is a firmware file intended for that EEPROM, then the byte order in the file is likely already correct. If you can, be sure to save a copy/dump of the original EEPROM before starting.

The fact that the underlying system is an NVIDIA Tegra chip probably does not change much on how the EEPROM is accessed or programmed. The actual data will matter, but the mechanics of reading and writing the EEPROM won’t really care.

In that “struct blob_entry”, each uint32_t is a 32-bit value (4 bytes, at 8 bits per byte) and might be stored big endian or little endian. If you find just one case where you know what a value should be, and you can examine its bytes, then you can work out whether this is big or little endian, and that will apply to the entire EEPROM. If the number you “know” is already written in hexadecimal, this is easy. If it is written in ordinary base-10, you’d first convert it to hex and then look at the hex pairs to see the order. Just to emphasize: if you know the correct “endianness” convention for one number in that binary file, it would be unusual for anything else in the file to not share that endianness.

Anything written in the table as single-byte entries won’t care about byte order. Your struct blob_entry has some character arrays, and I don’t know if there is any endianness convention associated with those, but each byte is still just a byte; it is only a question of whether “byte 0” precedes “byte 1” precedes “byte 2”, and so on, in the EEPROM, or the other way around. If an entry is supposed to be plain ASCII text, then something like “name” will likely be obvious in the right pane of your editor (e.g., the name “endr” might show as “e n d r”, or reversed, “r d n e”; just watch for that in character arrays). The file which is to be programmed in likely has all of this correct anyway.
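
If hand-extracting in the hex editor proves error-prone, the guessed layout can also be walked by a short program. What follows is only a sketch under stated assumptions: the struct layout is the article’s guess, the table offset and entry count are hypothetical placeholders you would first confirm in the hex editor, and little endian is assumed (verify against one known length value, as described above):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Assemble four bytes, least significant first (little endian assumed). */
static uint32_t read_le32(const unsigned char *b) {
    return (uint32_t)b[0] | ((uint32_t)b[1] << 8)
         | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

int main(void) {
    /* Hypothetical placeholders: confirm both values in your hex editor. */
    const long table_offset = 0x30; /* where the entry table begins (assumption) */
    const int  num_entries  = 8;    /* how many entries to scan (assumption) */

    FILE *in = fopen("blob", "rb");
    if (!in) { perror("blob"); return 1; }

    for (int i = 0; i < num_entries; i++) {
        /* 52 bytes per entry: name[4] + unknown[36] + three uint32_t. */
        unsigned char entry[52];
        if (fseek(in, table_offset + 52L * i, SEEK_SET) != 0) break;
        if (fread(entry, 1, sizeof entry, in) != sizeof entry) break;

        char name[5] = {0};
        memcpy(name, entry, 4);                  /* char name[4] */
        uint32_t offset = read_le32(entry + 40); /* follows name[4] + unknown[36] */
        uint32_t length = read_le32(entry + 44);

        printf("%-4s offset=0x%08x length=0x%08x\n", name, offset, length);

        if (memcmp(name, "EBT", 3) == 0) {
            /* Copy length bytes starting at offset (presumably relative to
               the start of blob; another assumption to verify). */
            unsigned char *buf = malloc(length);
            if (!buf) break;
            if (fseek(in, (long)offset, SEEK_SET) == 0 &&
                fread(buf, 1, length, in) == length) {
                FILE *out = fopen("shield-9.0.0-cboot.bin", "wb");
                if (out) { fwrite(buf, 1, length, out); fclose(out); }
            }
            free(buf);
        }
    }
    fclose(in);
    return 0;
}

Matching “DTB” instead of “EBT” (and writing to a different output name on each hit) would save the two DTB entries the same way.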

This particular forum is more for Jetsons and does not really cover other Tegra SoC hardware. There is a lot in common, but mostly people won’t know about the NVIDIA Shield here. I don’t know which forum you would go to, if any, for this information.

Hey man, thank you for the amazing explanation. Have you used Tegra Flash before? If so, can you tell me about an error I’m getting?

I couldn’t tell you anything about the specific error you are getting. Someone else would have to answer that, but this forum probably has nobody who can help with a Shield TV (wrong forum, and I’m unsure which forum would be correct).

I suspect this is also true for a Shield TV, but I don’t really know: Binary partitions used for boot on a Jetson are always signed. The developer kits default to a NULL key for that signature. When reading binary partitions used in boot on a Jetson you will get both the content and the signature whenever the dd tool is used for extracting them. Any use of dd to put content back in after editing will fail to boot if the signature is not updated. If Shield TV does this, then you would need to sign content before putting it back in place.

An interesting thought on this: If you have a signed partition, and it has been edited or otherwise corrupted, then putting only those edited/corrupted bytes back to what they were supposed to be would also make the signature valid. Sorry, I have no ability to help with most of what a Shield TV is, I’m just comparing to Jetsons.

