Installed JetPack 6.2.1 on my Orin Nano Super devkit. System details as follows: system_info.txt (611 Bytes). All seems fine; I just had to re-enable the CSI camera.
I want to run object detection with TensorRT 10.3+ without Docker or containers, instead building from source in C++ to minimize runtime latency. I was able to do this previously with the Hello AI World project, but that no longer works after JetPack 6.0.
What is the working approach for Object Detection from source on JetPack 6.2.1 with C++?
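To show the shape of what I'm after, here is a minimal sketch of the TensorRT 10 C++ runtime path (deserialize a prebuilt engine, bind tensors by name, run `enqueueV3`). The engine file name, input dimensions, and tensor names below are placeholders, not from a real model; the engine would be built offline with `trtexec`.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity sev, const char* msg) noexcept override {
        if (sev <= Severity::kWARNING) std::cerr << msg << "\n";
    }
};

int main() {
    Logger logger;

    // Load a serialized engine built offline, e.g.:
    //   trtexec --onnx=model.onnx --saveEngine=detector.engine
    // "detector.engine" is a placeholder path.
    std::ifstream f("detector.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(f)), {});

    auto* runtime = nvinfer1::createInferRuntime(logger);
    auto* engine  = runtime->deserializeCudaEngine(blob.data(), blob.size());
    auto* ctx     = engine->createExecutionContext();

    // TensorRT 10 removed the old bindings/enqueueV2 API;
    // tensors are now addressed by name.
    void* in  = nullptr;
    void* out = nullptr;
    cudaMalloc(&in,  1 * 3 * 640 * 640 * sizeof(float)); // placeholder input shape
    cudaMalloc(&out, 1 << 20);                           // placeholder output size
    ctx->setTensorAddress("images",  in);   // tensor names depend on the model
    ctx->setTensorAddress("output0", out);

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    ctx->enqueueV3(stream);                 // async inference, TRT 10 API
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

This compiles against the JetPack-provided TensorRT and CUDA headers; no container is involved, which is the whole point.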
Please do not mark this thread as solved. Not even close. Jetson-Inference (the Hello AI World project) does not work in JetPack 6.1, 6.2, or the latest 6.2.1. It only works in 6.0, which lacks the Super 1.7x performance mode.
All I’m asking is this: is any object detection method with TensorRT 10.3+, without Docker/containers, currently working in JetPack 6.2.1?
I’d rather start with inferencing that already works so I can focus on my goal: creating C++ robotics projects with feedback control systems involving object detection. My goal is not to fix the Jetson-Inference project, which is the responsibility of the Jetson team. Can @dusty-nv comment on this?
I have JetPack 6.2.1, but DeepStream is not installed. I do not own a dedicated Ubuntu Linux machine, so I don’t use the SDK Manager, which the DeepStream documentation calls for.
Can DeepStream be installed from the devkit command line? I want to build it from source.
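For what it’s worth, the DeepStream quickstart does document an on-device install path that doesn’t need SDK Manager, via NVIDIA’s apt repo that JetPack already configures. A sketch, assuming JetPack 6.x and a matching DeepStream release (the package version number and the exact prerequisite list here are assumptions; check the quickstart for your release):

```shell
# GStreamer prerequisites per the DeepStream quickstart (list may vary by release)
sudo apt update
sudo apt install -y libgstreamer1.0-dev gstreamer1.0-plugins-bad

# DeepStream itself is an apt package on Jetson; the package name tracks
# the release version (deepstream-7.1 is an assumption here)
sudo apt install -y deepstream-7.1
```

That installs the prebuilt SDK under /opt/nvidia/deepstream; the sample plugins and apps there ship with source and Makefiles, so those parts can still be rebuilt locally.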
It would be so much easier if DeepStream were already included in JetPack.