Bug when installing JetPack 4.2 on Nano

I checked the logs and I suspect it’s related to unattended upgrades running on the Nano in the background.

Edit: I retried later; it got farther, then failed again. Clicking the export log button just opens a folder. I am pretty sure sdkml3_jetpack_l4t_42.json is the log being referred to, but it’s really not clear.

Apparently, we share the same problem. Unattended updates fail.

Someone at nVidia should review the changes that have been made lately.

Hi, could you send me a link to an “img” that works for you from first boot after a clean installation?

I’m pretty tired; with the number of problems I’m having, I still haven’t been able to put my Jetson Nano to use, and I have a lot of projects in my head that I can’t get out.

I’m about to buy the Google Coral and sell the Jetson Nano on eBay.

I used this one. I extracted the .img from the .zip and wrote it to the SD card with gnome-disks. Then I put it in the Nano and turned it on without the network connected to get past the setup, then rebooted and updated manually.
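In case it helps, here’s roughly what that write-to-card step amounts to in Python. The archive name and device node are placeholders, so double-check yours with lsblk first; writing to the wrong device will wipe it.

```python
import shutil
import zipfile

# Assumed paths -- substitute your actual download and your SD card's
# device node (verify with `lsblk`; the wrong device gets destroyed).
ZIP_PATH = "jetson-nano-sd-card-image.zip"
DEVICE = "/dev/sdX"

with zipfile.ZipFile(ZIP_PATH) as zf:
    # Assume the archive contains exactly one .img file.
    img_name = next(n for n in zf.namelist() if n.endswith(".img"))
    with zf.open(img_name) as src, open(DEVICE, "wb") as dst:
        # Stream in 4 MiB chunks so the full image never sits in memory.
        shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)
```

Run it as root, and sync before pulling the card.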

Yikes. Don’t do that. Google will unceremoniously kill the product. Remember Android Things? Project Brillo? Exactly. Nobody does, and it was this exact sort of thing. The people who relied on that tech are SOL. CUDA is going to be around. Nvidia is going to stick around. Google is the brilliant kid in the class with ADHD who can figure anything out but can never manage to finish anything either. The Googler(s) behind supporting that board will be ‘ooh, squirrel’ before you can write a new chat client. Nvidia has already promised to support the Nano long term. The support here is good as well.

Hi mdegans and Glocke,

When you export the log file and it opens a folder, the log file is automatically saved into that directory. It is a zip file with a name like SDKM_logs*.zip.
Can you please attach the log files?

I’m about to get a Google Coral USB; for its price (€69) it’s an option to consider. The bad part is that I think it only beats the Jetson Nano when using TensorFlow Lite, and then only with a trick: a prior conversion of the weights, I believe to INT8.
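If I understand correctly, the “trick” is post-training quantization. A minimal sketch with TensorFlow’s TFLiteConverter (the saved-model path and calibration data are placeholders, and the exact API varies between TensorFlow versions):

```python
import numpy as np
import tensorflow as tf

def representative_data():
    # Placeholder calibration data -- in practice, yield a few hundred
    # real samples shaped like the model input so the converter can
    # choose INT8 quantization ranges.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("my_model")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full INT8, which the Edge TPU compiler requires.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```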

The good thing is that it’s Google and TensorFlow. I don’t think they’ll leave the development of their TPU hanging, but it is Google, and its steps are very slow. I’m thinking of voice recognition models compatible with Google Assistant, Android, Google Cloud Platform, etc.

Google is a standard and I think that’s your guarantee.

I use Google’s Speech-to-Text API on a regular basis. It’s great. I wrote a script last weekend to transcribe director’s commentary for my hearing-impaired spouse.

It also costs $10 to recognize a few hours of audio at the highest quality (what you get from the Assistant is lower quality). It makes Google buckets of money because nobody else can do it nearly as well. Unless Google has explicitly said the voice model will be available, I wouldn’t bet on it being able to run at the edge, other than maybe on Android.

I don’t know if you’ve seen Google’s WaveNet? I saw it a few weeks ago, and the truth is that the naturalness of its text-to-speech is amazing. Too bad it’s not available in Spanish (my native language).

There’s also Translatotron. I also saw a model that learns from a few personal phrases and recreates the rest of the vocabulary “range” for a personalized, adapted text-to-speech voice. Then there is IBM’s Watson, with the ability to distinguish and separate several voices, even with background noise, using techniques like the Fourier transform.

From NVIDIA I haven’t seen anything as creative so far, only hardware implementations and their own rather conceptual demos (turning a freehand drawing into a realistic photograph, generators of photorealistic faces, etc.). Their real business is gamers and FPS titles, let’s not deceive ourselves.

When I mentioned Google before, I meant things like that. I hope, and believe, that everything will become an ecosystem, relatively affordable and within everyone’s reach. I think NVIDIA’s hardware follows in the footsteps of Google’s software. Let’s see what comes of Google’s attempt to establish its own line of TPUs; let’s not forget they have a lot of experience in cloud computing and deep learning.

In total, the Google Coral TPU is €69, and for having fun experimenting that is very affordable and accessible. TensorFlow Lite doesn’t quite convince me with the tricks it uses, although INT8 is a good balance between inference at the edge and speed.

Nice to meet you

So, Google’s implementation of WaveNet is not public. They have described how it works in papers, and other people have copied that, but the actual model that does the business of recognizing high-quality speech (the way Google does with their long-running jobs on the Speech API) is a closely guarded thing.

It’s a market differentiator that makes Google’s cloud service unique. Do you want high-quality transcription at speeds no human possibly could provide? Well, they have an API for that, and bindings for every programming language you can imagine.

You send it an audio file with a set of options, in chunks or all at once, and you get back words with timing data, probabilities, a list of alternative hypotheses, speaker IDs, and so on. All of it runs as fast as you want, because you can recognize an unlimited number of files in parallel if you like.
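A minimal sketch of what a call looks like with the Python client (the bucket URI and options are made up, and the client API has changed a bit across library versions):

```python
from google.cloud import speech

client = speech.SpeechClient()

# Hypothetical file already uploaded to a Cloud Storage bucket.
audio = speech.RecognitionAudio(uri="gs://my-bucket/commentary.flac")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    sample_rate_hertz=44100,
    language_code="en-US",
    enable_word_time_offsets=True,  # per-word timing data
    max_alternatives=3,             # list of alternative hypotheses
)

# Long-running job, as described above; blocks until the result is in.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=600)

for result in response.results:
    best = result.alternatives[0]
    print(f"{best.confidence:.2f}  {best.transcript}")
    for word in best.words:
        print(f"  {word.start_time.total_seconds():.2f}s  {word.word}")
```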

If they let you run anything on a local device, it’s going to be a cut-down model designed to recognize voice at lower quality, but useful enough that it works most of the time as an assistant without requiring tons of data to be sent to the cloud.

However… that doesn’t mean you can’t run WaveNet on the Nano locally. You will just have to use one of the many open implementations, and the end result is not going to be as good because, well, Google has access to nearly unlimited training data and computing resources, and you (presumably) don’t.

I’m not familiar with Watson, but I suspect IBM operates similarly to Google.

If only it were just gamers, 1080s wouldn’t have cost what they did last year. Unfortunately, there are idiots who decided it was a good idea to waste electricity by pointlessly hashing things in order to buy drugs online. Nvidia has a lot of customers, and CUDA is very flexible. You can run TensorFlow on it, and maybe it won’t be as fast as a TPU, but if you really need the speed, you can plug a TPU into the Nano.
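Driving a Coral stick from Python looks roughly like this, assuming the libedgetpu runtime and the tflite_runtime package are installed (the model filename is a placeholder; it has to be compiled for the Edge TPU first):

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Placeholder model -- must be INT8 and compiled for the Edge TPU.
interpreter = Interpreter(
    model_path="model_int8_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy frame with the model's expected shape and dtype.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()  # supported ops run on the TPU, the rest on CPU
print(interpreter.get_tensor(out["index"]))
```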

The thing is, generally, it’s more efficient for computing resources to be shared. If you buy a piece of hardware and it’s twiddling its thumbs locally, that’s wasteful. I have a friend who does a lot of research-type work. Occasionally he needs to spin up a cluster, and he can do that from a netbook with less RAM than your smartphone probably has. He doesn’t have to buy the hardware or pay for its continued use, and he has essentially unlimited resources at his disposal. I don’t use the cloud as much, since I have more computing resources locally and they do get continued use, but for people whose need for speed only occurs periodically, it makes a lot of sense to save cost by not buying things.

You too. The Coral sounds ideal for many of my purposes. I will probably pick one up. Thanks for pointing it out to me. I was aware of Intel’s compute sticks but didn’t realize Google had let their TPUs out the door.