How to: from 1 PB cloud to 1 TB local

1 PB = 1024 TB.
It appears that you can apply for GCP offerings like:

Free Trial and Free Tier  |  Google Cloud [I tried only that one],
or Taking You From Idea to Real Impact | MediaAgility,

and then:

export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install gcsfuse
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init
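
A quick sanity check I would run after the steps above, just to confirm both tools landed (not part of the original instructions):

dpkg -s gcsfuse | grep Version    # gcsfuse was installed via apt, so dpkg can report its version
gcloud version                    # the Cloud SDK was installed via the curl script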

and then

gcloud auth login
gcloud auth application-default login
mkdir mount
gcloud config set project PROJECT_ID
# the line below would create a bucket in the ASIA region; I used the GCP web interface instead
# gsutil mb -c standard -l ASIA gs://my_bucket_name
gcsfuse my_bucket_name /path/to/mount
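
To detach the bucket later, the standard FUSE unmount applies (generic FUSE usage, nothing gcsfuse-specific):

fusermount -u /path/to/mount    # or: sudo umount /path/to/mount if it was mounted as root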

That will result in a 1 PB mount being attached.
Now I am testing dd from the whole disk to the gs mount point.
It shows about 11 MB/s.
For comparison, the same dd to the NVMe PCIe drive appears to be about three times faster, at roughly 32 MB/s.
Though it ended in:
“closing output file” “input/output error”, possibly because there was no free space left on the device by the time the task ended. I will look into adding an NVMe PCIe drive to the Jetson then.
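
For reference, the copy I was testing looked roughly like the sketch below; the device name and the /mnt/gcs mount path are only placeholders, not the exact command:

# sketch: image the whole eMMC into a file on the gcsfuse mount
sudo dd if=/dev/mmcblk0 of=/mnt/gcs/emmc-backup.img bs=1M status=progress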

Well, the first approach failed. I got a device as
nvme0n1 259:22 0 894.3G 0 disk
which I will be attempting to set up as the system drive instead of the default eMMC one.
As I understand it, I have to dd the default device onto the new one and then somehow adjust the boot entry so that booting happens from the nvme0n1 device.
Hopefully the forum has enough threads on the issue, and it seems to me it does:

https://devtalk.nvidia.com/default/topic/1015222/jetson-tx2/cloning-tx2-root-to-nvme-drive-part-3/
https://devtalk.nvidia.com/default/topic/1008824/jetson-tx2/installing-nvme-drive-/
https://devtalk.nvidia.com/default/topic/1025000/booting-from-nvme-on-tx2/
https://devtalk.nvidia.com/default/topic/1032016/jetson-tx2/booting-tx2-with-nvme-m-2-ssd-as-root-filesystem/

First, it seems that NVMe support is already enabled in the kernel config:

gunzip -c /proc/config.gz | grep CONFIG_BLK_DEV_NVME
CONFIG_BLK_DEV_NVME=y

The second step seems to be editing extlinux.conf.
The existing configuration is:

cat /boot/extlinux/extlinux.conf
TIMEOUT 30
DEFAULT primary

MENU TITLE p2771-0000 eMMC boot options

LABEL primary
      MENU LABEL primary kernel
      LINUX /boot/Image
      APPEND ${cbootargs} root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4

then, to back up the existing file:

sudo cp /boot/extlinux/extlinux.conf /boot/extlinux/extlinux.conf.old

Then I will follow the suggestion by Honey_Patouceul and add something like this to the file:

LABEL NVME
    MENU LABEL NVME kernel
    LINUX /boot/Image
    APPEND root=/dev/nvme0n1p1 rw rootwait console=ttyS0,115200n8 console=tty0 OS=l4t fbcon=map:0 net.ifnames=0 memtype=0 video=tegrafb no_console_suspend=1 earlycon=uart8250,mmio32,0x03100000 nvdumper_reserved=0x2772e0000 gpt tegraid=18.1.2.0.0 tegra_keep_boot_clocks maxcpus=6 boot.slot_suffix= boot.ratchetvalues=0.1.1 androidboot.serialno=0334916010240 bl_prof_dataptr=0x10000@0x277040000 sdhci_tegra.en_boot_part_access=1 root=/dev/nvme0n1p1 rw rootwait rootfstype=ext4

Though perhaps I need to flash the new Tegra release to the NVMe drive first, if not using dd from the existing eMMC.
I am just wondering if flash.sh will support the command below:

sudo ./flash.sh jetson-tx2 nvme0n1

Should a boot menu appear if I just change the default extlinux.conf to the form below?

cat /boot/extlinux/extlinux.conf
TIMEOUT 30
DEFAULT primary

MENU TITLE p2771-0000 eMMC boot options

LABEL primary
      MENU LABEL primary kernel
      LINUX /boot/Image
      APPEND ${cbootargs} root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4
LABEL secondary
      MENU LABEL SDCARD kernel
      LINUX /boot/Image
      APPEND ${cbootargs} root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4
LABEL NVME
    MENU LABEL NVME kernel
    LINUX /boot/Image
    APPEND root=/dev/nvme0n1p1 rw rootwait console=ttyS0,115200n8 console=tty0 OS=l4t fbcon=map:0 net.ifnames=0 memtype=0 video=tegrafb no_console_suspend=1 earlycon=uart8250,mmio32,0x03100000 nvdumper_reserved=0x2772e0000 gpt tegraid=18.1.2.0.0 tegra_keep_boot_clocks maxcpus=6 boot.slot_suffix= boot.ratchetvalues=0.1.1 androidboot.serialno=0334916010240 bl_prof_dataptr=0x10000@0x277040000 sdhci_tegra.en_boot_part_access=1 root=/dev/nvme0n1p1 rw rootwait rootfstype=ext4

In my case, no menu appears.
Shall I execute something like the command below to make the boot menu appear?

sudo ./flash.sh -k EBT -L /usr/lib/u-boot/p2371-2180/u-boot.bin jetson-tx1 mmcblk0p12

source: InstallingDebianOn/NVIDIA/Jetson-TX1 - Debian Wiki
It appears that I will have to perform some extra manipulations to get the boot menu choice.
Hopefully the issue is explained to some extent at NVIDIA Tegra X2 | Compiling Tegra X2 Source Code | RidgeRun

Yup.
I did dd if=/dev/mmcblk0 of=/dev/nvme0n1
It created 29 partitions on the latter,
and I changed extlinux.conf to point to /dev/nvme0n1p1 instead of /dev/mmcblk0p1, with the result that
on boot it shows:
“waiting for root device /dev/nvme0n1p1”

Does your kernel have NVMe disk support built in? Modules wouldn’t be available at kernel boot time.
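
For reference, the earlier /proc/config.gz check answers this: “=y” means the driver is built into the kernel image, while “=m” would mean a module that is not available until the root filesystem is mounted (too late for root=/dev/nvme0n1p1 without an initrd):

gunzip -c /proc/config.gz | grep CONFIG_BLK_DEV_NVME
# CONFIG_BLK_DEV_NVME=y  -> built in, available when the kernel looks for the root device
# CONFIG_BLK_DEV_NVME=m  -> module only, not loadable before the rootfs is up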

There is a somewhat subtle detail to point out here just for illustration. If you were to format a partition on your media from a PC which has 64-bit ext4 extensions, then you would get the same message because U-Boot itself would be trying to find extlinux.conf on a file system type it doesn’t understand (it is U-Boot which fails to understand 64-bit extensions…Linux itself is good with 64-bit).

Since the file system was copied directly from the Jetson with dd, then you know that this file system on the NVMe is ok so far as ext4 specs go. The NVMe support is almost certainly the issue, but I suspect the place where the support has to exist is in U-Boot instead of in the Linux kernel. There may be other issues as well, and I don’t know what the default U-Boot support is for NVMe, this seems to be the place to start (unfortunately I don’t know of anything analogous to “/proc/config.gz” of Linux for U-Boot).

got it.

Perhaps I do not even need to boot from NVMe, as I can move folders to it and mount folders like /home, /usr, or /bin from the NVMe for execution; see the sketch below.
The initial idea was a speed increase from using NVMe, but I doubt there will be a significant difference between the internal eMMC and the NVMe SSD.
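
If I go that route, a minimal sketch of moving /home onto the NVMe drive would be something like the following (partition and mount-point names are placeholders, and it assumes the partition can be reformatted):

# destructive: format the first NVMe partition, copy /home onto it, then mount it over /home at boot
sudo mkfs.ext4 /dev/nvme0n1p1
sudo mkdir -p /mnt/nvme
sudo mount /dev/nvme0n1p1 /mnt/nvme
sudo rsync -aHAX /home/ /mnt/nvme/
echo '/dev/nvme0n1p1 /home ext4 defaults 0 2' | sudo tee -a /etc/fstab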

Update: it turned out that the 1 PB cloud storage can be used, but for some reason it requires a huge amount of space on the computer’s disk drive to stage the dd file being written to the gcsfuse-mounted folder; if the computer has limited disk space it throws an input/output error, but if the computer has enough space the transfer completes.
Moreover, some limitations on execution from the gcsfuse mount folder seem to apply.
Local NVMe seems more convenient for execution than cloud storage :P

I have never used those services, but I’m thinking perhaps it uses an extraordinarily large block size on the assumption of very large files. If you are curious and have such a device installed as a block device, then you might try “lsblk -t” or “lsblk -t -P” (you can also name the particular device to limit the query to just that device). Most ordinary old tech disk drives use a 512 byte block size, some 4096…most newer solid state drives would say 4096.

lsblk -t

NAME        ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE RA WSAME
nvme0n1              0    512      0     512     512    0          1023  128    0B
└─nvme0n1p1          0    512      0     512     512    0          1023  128    0B
lsblk -t -P /dev/nvme0n1
NAME="nvme0n1" ALIGNMENT="0" MIN-IO="512" OPT-IO="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="" RQ-SIZE="1023" RA="128" WSAME="0B"
NAME="nvme0n1p1" ALIGNMENT="0" MIN-IO="512" OPT-IO="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="" RQ-SIZE="1023" RA="128" WSAME="0B"

Those values appear to differ from the mmcblk ones mostly in the case of “RQ-SIZE” and “RA”.

I am just wondering how to inspect and analyse the mount point made by gcsfuse:

gcsfuse archive-1 fuse
Using mount point: /home/user/fuse
Opening GCS connection...
Opening bucket...
Mounting file system...
File system has been successfully mounted.
mount -l
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
archive-1 on /home/user/fuse type fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000,default_permissions)

Reference found: http://man7.org/linux/man-pages/man8/mount.fuse.8.html
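
Besides mount -l, generic util-linux tools also show a bit more about the FUSE mount (nothing gcsfuse-specific here):

findmnt /home/user/fuse    # source, filesystem type and mount options of the gcsfuse mount
df -hT /home/user/fuse     # reported size/usage and filesystem type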

I couldn’t tell you anything about gcsfuse. All I can do is speculate somewhat about the large amount of disk space used locally for your remote cloud storage. If you list all block devices used for a mount via “df -T” you’ll see file system type and mount points. All of the first column will be block devices, but only the “/dev” ones will be visible as a device special file you can do all of the normal block device commands on. The ones (such as tmpfs or fuse or gvfs) not showing from “/dev” will be virtual block devices. If the device is purely virtual, then there has to be some way of holding metadata other than on the device itself which is being mounted…I am thinking that perhaps all of the metadata of that PB or TB is being copied to the Jetson and that the metadata is itself quite large.

For a long time block size has always been 512 bytes due to physical construction of disks, and there is a certain amount of overhead/metadata for each block. If you have a lot of small files this might be a good size, but if you have larger files, then you are wasting space on metadata. Going to a 4k block size implies less overhead for people with larger average file size. Something similar could be said about ethernet where it normally has a 1500 byte MTU…but where jumbo frames can be used instead. In the case of a block device the metadata is per node. If it turns out that the Jetson is storing metadata for cloud services (meaning the Jetson is probably controlling the storage characteristics locally), then you get metadata building up locally on the Jetson for every node/block. More nodes/blocks, more metadata. Smaller nodes/blocks for the same total storage, more metadata.

If you have an option to control block size from the cloud services, and change it say from 512 bytes to 1048576 bytes (1MiB), then if the above is correct your Jetson could use the same storage with 512/1048576 == 0.00048828125 as much overhead (1MiB would consume 1 metadata unit, and there would be far fewer metadata units).

Does this cloud storage give you options on block size? What is the average size of file you’d be using on this cloud storage system?

Thanks, linuxdev.
I have found in their manual that:
“Local storage: Objects that are new or modified will be stored in their entirety in a local temporary file until they are closed or synced. When working with large files, be sure you have enough local storage capacity for temporary copies of the files,”

source: Cloud Storage FUSE  |  Google Cloud

Therefore I will have to investigate further the issue of recording a camera stream to online cloud storage in live mode, to find out whether there are ways to record, say, 100 GB of video to the cloud while having, say, a 16 GB disk drive on the device.
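
One possible workaround, assuming gsutil streaming transfers work from the Jetson, would be to pipe the data straight into the bucket instead of writing it through the gcsfuse mount, so nothing gets staged on the local disk (bucket and object names below are only examples):

# stream the eMMC image directly to a bucket object, with no local temporary file
sudo dd if=/dev/mmcblk0 bs=1M | gsutil cp - gs://my_bucket_name/emmc-backup.img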

Finally, they pointed me to an NFS-based solution for processing/stream-recording directly to file storage: Mounting file shares on Compute Engine clients  |  Filestore  |  Google Cloud
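
For completeness, mounting such a Filestore share is plain NFS; the server IP and share name below are placeholders, not values from this thread:

sudo apt-get install nfs-common
sudo mkdir -p /mnt/filestore
sudo mount -t nfs 10.0.0.2:/vol1 /mnt/filestore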