So basically I want to run a Google Teachable Machine model (a Keras .h5 file) on the Jetson Nano (4 GB) with a 32 GB SD card.
For this I’ve done the whole setup in kind of a noob way. My code uses the following packages, which I’ve already installed.
Packages: MediaPipe, OpenCV, cvzone, TensorFlow.
STORAGE PROBLEM: There’s no space left on the SD card. I’ve already removed LibreOffice and Thunderbird, and I don’t know what else I can safely remove to free up space.
SPEED PROBLEM: Also, when I run a sample image through my classification model with TensorFlow, it takes about 20 seconds before it gives me the output. I’ve read blog posts saying I could convert my Keras/TensorFlow model to a TensorRT engine via ONNX, but I’m not sure whether that will work for me. Any suggestion on what I should do is appreciated.
If I use Docker for my project, is there any simple way to adapt my whole Python code for real-time use with the GStreamer and DeepStream libraries, and possibly CUDA support? (I’m new and don’t know how to use the Jetson Nano to the fullest, sorry.)
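To give an idea of the capture side, something like this is what I have in mind: reading frames through a GStreamer pipeline in OpenCV. This is just a rough sketch, assuming a CSI camera and an OpenCV build with GStreamer support; the resolution and framerate values are placeholders.

```python
# Rough sketch: read the Nano's CSI camera through a GStreamer pipeline
# in OpenCV. Assumes OpenCV was built with GStreamer support; the
# width/height/framerate values are placeholders.
import cv2

pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ret, frame = cap.read()  # frame is a BGR numpy array when ret is True
```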
Any and all help is appreciated!
Open to suggestions and criticism.
Thanks.
Thanks for the reply!
I think my Jetson is already running at its maximum clock speed.
Also, an external drive would help; I’ve thought about using one. Could you tell me which files and applications are not required and can be removed without affecting how the device works?
Also, about the code used for converting the model: do I run it on the Jetson itself, or should I run it on another machine and then import the result back here?
Thanks.
I’m currently using a 32 GB SD card. Would upgrading to a 64 GB or 128 GB card rated at 200 MB/s make my model and Python code run faster, with more FPS for live-feed capture?
Expanding storage will give you more space to install packages, but it won’t improve inference time, since the read/write bandwidth stays about the same.
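To see where the 20 seconds actually goes, you could time the model load and the inference separately. A rough sketch (the .h5 filename and the 224x224x3 input shape are placeholders for your model):

```python
# Rough sketch: separate model-load time from per-image inference time.
# "keras_model.h5" and the input shape are placeholders.
import time
import numpy as np
import tensorflow as tf

t0 = time.time()
model = tf.keras.models.load_model("keras_model.h5")
print(f"model load: {time.time() - t0:.1f} s")

dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)
model.predict(dummy)   # first call includes graph building / warm-up
t0 = time.time()
model.predict(dummy)   # steady-state inference
print(f"inference after warm-up: {time.time() - t0:.3f} s")
```

Most of the delay is usually TensorFlow start-up and model loading rather than the inference itself, so keeping the process running for the live feed, plus a TensorRT engine, helps far more than faster storage.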
It’s recommended to convert the model to TensorRT for better inference performance. TensorRT optimizes the network for the GPU architecture it runs on.
Please run the conversion on the Nano directly, since TensorRT engines are tied to the specific GPU they’re built on.
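A minimal sketch of the two-step conversion (Keras to ONNX with tf2onnx, then ONNX to a TensorRT engine with the TensorRT Python API). The filenames, input shape, and opset below are placeholder assumptions, and the exact builder calls vary a bit between TensorRT versions:

```python
# Rough sketch, step 1: export the Keras .h5 model to ONNX with tf2onnx
# (pip install tf2onnx). Filenames and input shape are placeholders.
import tensorflow as tf
import tf2onnx

model = tf.keras.models.load_model("keras_model.h5")
spec = (tf.TensorSpec((1, 224, 224, 3), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path="model.onnx")

# Rough sketch, step 2: build a TensorRT engine from the ONNX file.
# The API shown is TensorRT 8.x; older JetPack/TensorRT releases
# use slightly different builder calls.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # FP16 gives a big speedup on the Nano's GPU
engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```

The ONNX-to-engine step can also be done from the command line with trtexec, which ships with JetPack, e.g. `trtexec --onnx=model.onnx --saveEngine=model.engine --fp16`.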