Dear Nvidia team,
I’ve been super excited about using my Nvidia products. On my primary dev box I’m running two 2080 Ti Founders Edition GPUs, and now I’m working with an Nvidia Jetson Nano.
I haven’t gotten much value out of any of them, because every time I try to do something locally I spend more time setting up my ecosystem than actually working on AI/ML.
Let’s take the Jetson Nano as an example. This is Nvidia hardware, flashed with an image from Nvidia. Why do I spend hours installing Nvidia components?
What’s more annoying is that I can’t port code from machine to machine. I got YOLOv5 running on my desktop, and now I want to get it running on a live feed on my Jetson. Four hours in, and I’m still failing to load libcudart.so.10.0 when I import PyTorch.
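For anyone hitting the same wall: a quick first check is whether the dynamic loader can see *any* CUDA runtime at all. This is a minimal sketch, not a full diagnosis — it only tells you whether a libcudart is on the loader’s search path, not whether its version matches the one the PyTorch wheel was built against:

```python
import ctypes.util

def find_cuda_runtime():
    """Ask the dynamic loader whether any libcudart is visible on its
    search path (ld.so cache / LD_LIBRARY_PATH). Returns None if not."""
    return ctypes.util.find_library("cudart")

path = find_cuda_runtime()
if path:
    print(f"loader can see a CUDA runtime: {path}")
else:
    print("no libcudart on the loader path -- check LD_LIBRARY_PATH or run ldconfig")
```

If this prints the “not found” branch even though JetPack installed CUDA, the library directory simply isn’t on the loader path, which is a very different problem from a version mismatch.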
I don’t want to deal with Docker, VMs, environment-variable juggling, etc.
I want to control my packages and what’s running. I just want to be able to load a generic Python script and run it.
It can’t be that hard.
Who runs the Nvidia ecosystem? They need to be fired.
I would also work with Google to get TensorFlow sorted out. It’s amazing to me how a single package dependency in NumPy can bust an entire AI workflow.
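The only mitigation I know of for that kind of breakage is pinning exact versions and failing fast when they drift. A small sketch of that idea — the `PINS` dict and the version number in it are placeholders, not a recommendation for any real setup:

```python
from importlib.metadata import version, PackageNotFoundError

# Hypothetical known-good pins for one workflow -- substitute your own.
PINS = {"numpy": "1.19.4"}

def check_pins(pins):
    """Return human-readable mismatch messages (empty list = all good)."""
    problems = []
    for name, wanted in pins.items():
        try:
            got = version(name)
        except PackageNotFoundError:
            problems.append(f"{name}: not installed (want {wanted})")
            continue
        if got != wanted:
            problems.append(f"{name}: have {got}, want {wanted}")
    return problems

for msg in check_pins(PINS):
    print("dependency mismatch:", msg)
```

Running this at the top of a workflow turns a cryptic crash deep inside TensorFlow into a one-line message about which package moved.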
What happened to microservices and micro-components? Things should be forward and backward compatible.
Linux makes this so easy with symbolic links!
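To be fair, the symlink trick is exactly how many people paper over the libcudart soname mismatch. Here’s the mechanism demonstrated with throwaway files (no sudo, no CUDA needed) — the real CUDA paths vary per JetPack release, and aliasing one runtime version as another is at your own risk, since the ABIs may not actually match:

```python
import os
import tempfile

# Demo of the symlink mechanism with fake files. The same idea is the
# classic workaround when a wheel asks for libcudart.so.10.0 but the
# system ships a different minor version -- use with caution.
d = tempfile.mkdtemp()
target = os.path.join(d, "libexample.so.10.2")  # the version you have
link = os.path.join(d, "libexample.so.10.0")    # the soname the code wants

with open(target, "w") as f:
    f.write("fake library\n")

os.symlink(target, link)   # the requested name now resolves to the real file
print(os.readlink(link))
```

On a real system you’d symlink inside the CUDA lib directory and then run `ldconfig` so the loader cache picks it up.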
Well, that’s my rant… but seriously, you guys deliver so much joy and so much disappointment at the same time.