Running computer vision on Jetson TX1

After starting up the TX1, I ran sudo apt-get install nvidia-jetpack, which installed TensorRT, cuDNN, and the rest of the stack, which is great. The problem is that it takes up practically every bit of storage, and I need these components for computer vision. This makes it impossible to open anything (Chromium, Disk Utility, etc.). What would you suggest I do (besides reflashing, obviously), given that this device has only 15 GB of storage? If your advice is to buy a storage card, could you please provide links to a) where I can buy one, and b) how to “set it up”, i.e., transfer everything to, or even flash and run off of, the SD card.
Ideally, though, I wouldn’t be buying an SD card. I’m looking more for a way to get the bare minimum needed to run a YOLO-class model, MobileNet, or OpenCV DNN (all of which require either TensorFlow or PyTorch installed, which I have also been having trouble with, since those likewise seem to take up most of my storage when I use the Dockerfile).
Edit: I’m also curious whether it’s even possible to run any YOLO version on the TX1, since the person from the YOLOv8 repo said to ask here whether it has the correct CUDA/cuDNN: Running YOLOv8 CLI/Running YOLOv8 in general · Issue #2156 · ultralytics/ultralytics · GitHub

Hello, I am here to provide an update.
I was able to get a substantial amount done in the time since posting this.
I have managed to get MobileNetSSD running, via a TensorFlow Dockerfile, from this repo: GitHub - mm5631/live_object_detection: Live object detection using MobileNetSSD with OpenCV.
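For reference, the core of that pipeline boils down to an OpenCV cv2.dnn loop roughly like the sketch below. This is not the repo’s exact code, just the same general pattern; the .prototxt/.caffemodel names are the standard MobileNetSSD Caffe release files, so adjust the paths as needed.

import cv2
import numpy as np

# Standard MobileNetSSD Caffe release file names; adjust the paths
# to wherever the repo keeps its copies.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
# Note: cv2.dnn executes on the CPU by default. The CUDA backend
# (net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)) requires an
# OpenCV build compiled with CUDA support.

cap = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # MobileNetSSD expects a 300x300, scaled, mean-subtracted input
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape: [1, 1, N, 7]
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > 0.5:
            # columns 3..6 hold the normalized box corners
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            x1, y1, x2, y2 = box.astype("int")
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                          (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()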
While this is “great”, it feels slow, probably because it is running on the CPU. This leads me to my “new” problems.

  1. When running the TensorFlow container from here: NVIDIA L4T TensorFlow | NVIDIA NGC, I have to take out the --runtime nvidia part when launching it, because when I leave it in, I get the following error: docker: Error response from daemon: Unknown runtime specified nvidia.
  2. The model I ran seems quite slow (I’m getting about 5 fps). I suspect it’s not using the GPU, since it’s not using the NVIDIA runtime; see the quick GPU check after this list.
  3. How could I optimize this model (preferably with TensorRT)? If TensorRT is the way to go, how exactly do I do that, and once it’s optimized, how could I run it against a camera feed using OpenCV/imutils?
  4. This “problem” is probably better suited for the owner of the live_object_detection GitHub repo, but how could I custom-train this model? Any resources on that would be helpful.
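As a quick sanity check for point 2, the snippet below shows whether TensorFlow inside the container can see the GPU at all (a sketch; tf.config.experimental.list_physical_devices exists in both the TF 1.15 and TF 2.x builds of the l4t-tensorflow images):

import tensorflow as tf

# An empty list here means TensorFlow fell back to CPU-only execution,
# which would explain the low frame rate.
print(tf.config.experimental.list_physical_devices("GPU"))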

I have put together a repo that contains everything needed to reproduce this, including a text file called instructions; it can be found here: https://github.com/frankvp11/mobilenet/blob/main.
Feel free to leave a comment on how I could do the things outlined above.

Hi,

1. Which JetPack version are you using? You will need a container that is built on the same major OS version, for example an r32.7.1 container on JetPack 4.6.x. (You can check the L4T release on the device with cat /etc/nv_tegra_release.)

2. The device can be boosted with the following commands:

$ sudo nvpmodel -m 0        # mode 0 = MAXN, the maximum-performance power profile on the TX1
$ sudo jetson_clocks        # lock CPU/GPU/EMC clocks at their maximum rates
$ sudo jetson_clocks --show # optional: confirm the clocks that were applied

3. You can use TensorRT.
Below is a tutorial for your reference. SSD MobileNet is one of the supported models.
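As a rough illustration of that path, the ONNX-to-TensorRT conversion looks roughly like the sketch below. This is an illustration only, assuming the model has already been exported to ONNX as ssd_mobilenet.onnx (a hypothetical file name); the API shown is the TensorRT 8.x Python interface shipped with JetPack 4.6.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, engine_path):
    builder = trt.Builder(TRT_LOGGER)
    # ONNX parsing requires an explicit-batch network definition
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse the ONNX model")
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MB scratch space; the TX1 has only 4 GB of shared RAM
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # the TX1's Maxwell GPU supports fast FP16
    engine = builder.build_engine(network, config)
    if engine is None:
        raise RuntimeError("engine build failed")
    with open(engine_path, "wb") as f:
        f.write(engine.serialize())

build_engine("ssd_mobilenet.onnx", "ssd_mobilenet.engine")

The bundled trtexec tool (/usr/src/tensorrt/bin/trtexec --onnx=... --saveEngine=...) can do the same conversion from the command line, and the serialized engine can then be run on frames grabbed with cv2.VideoCapture.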

4. The repo linked above also contains a training sample (retraining without changing the model architecture).

Thanks.
