A . in front of a file or directory name indicates that it is hidden. There is a setting in the file manager to show hidden files. From the command line:
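For example, a minimal check from a terminal (the directory path here is just an illustration):

```shell
# Plain ls omits entries whose names start with "." ;
# -a shows them, and -l adds ownership/permission details.
ls -la ~
```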
I haven’t seen anything here that indicates the Python install is broken. It appears to me as if you haven’t told it where the new OpenCV library is located.
indicates that you installed it in the pyenv virtual environment. Because this is an absolute path (it starts with a /), you would ignore the Install to: information. You can look to see if it’s actually there.
You can see where Python looks for libraries/modules:
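One quick way to do this is to print sys.path, the list of directories Python searches when importing a module:

```shell
# Each entry in sys.path is a directory Python will search,
# in order, when resolving an import.
python3 -c "import sys; print(sys.path)"
```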
pyenv handles paths differently than other virtual environments I’ve used. You’ll have to read through GitHub - pyenv/pyenv: Simple Python version management to get a better understanding of how to work through the path issue. Without the PYTHONPATH variable it’s hard to tell what the issue might be in this case.
@Kangalow, sorry to bother you with a bunch of questions. Since I have only used Google Colab on Windows, I am very new to all of these things (Linux, virtual environments, etc.), so even very basic things like setting a path drive me crazy.
After reading your comment repeatedly, I understand it like this: all the libraries I installed or downloaded are located in various separate paths, which means the modules and libraries can’t find each other. So I deleted everything (OpenCV, pyenv, torch) and am going to reinstall them to gather everything in the same place and prevent further path problems.
I checked it with
python -c "import torch; print(torch.__file__)" which returns
- I set PYTHONPATH in .bashrc as you said;
- this is my .bashrc and jtop at the moment;
- and when I check sys.path, the path set by PYTHONPATH comes first, so if I rebuild OpenCV, Python will visit that path first and OpenCV will be installed there:
['', '/usr/local/lib/python3.8/site-packages', '/home/seo', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/seo/.local/lib/python3.8/site-packages', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3.8/dist-packages']
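A quick sketch of why the PYTHONPATH entry comes first: directories named in PYTHONPATH are inserted near the front of sys.path, ahead of the system site-packages directories (/tmp/example below is just an illustrative path, not one from this system):

```shell
# Entries from PYTHONPATH appear near the front of sys.path,
# so modules found there shadow the system-wide ones.
PYTHONPATH=/tmp/example python3 -c "import sys; print(sys.path)"
```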
Then I think I have to modify pip’s target location to
/usr/local/lib/python3.8/site-packages as well. If I understand correctly and this step is needed, I will refer to this link.
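If changing pip’s install location really is needed, one way is pip’s --target flag; a hedged sketch (the path is taken from the sys.path output above, and the package name is a placeholder):

```shell
# pip can install into an explicit directory with --target, e.g.:
#   python3 -m pip install --target=/usr/local/lib/python3.8/site-packages SomePackage
# Confirm that the installed pip supports the flag:
python3 -m pip install --help | grep -- "--target"
```

Note that --target only affects where files land; Python still has to find that directory via sys.path at import time.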
Please take a look once more. I really appreciate your effort to help me lol
The Jetson itself has the GPU available, but it is an integrated GPU (iGPU) directly connected to the memory controller. Most programs expect a discrete GPU (dGPU) on the PCI bus, such that PCI query functions can find it. Such software simply cannot find the GPU. Your GPU is there, and it is functioning and available, but the software using it is not looking for the iGPU correctly. I’m not really a CUDA person, this is more something @Kangalow is good with, and he is looking at the Python version to see if it is simply a case of the software looking at the wrong Python (there is more than one Python release on most systems) indirectly causing the failure. The same may go for OpenCV: maybe you got the wrong version, one that isn’t aware of the iGPU.
The “GPU not found” would be expected if dGPU-only software is present. The facts are that (A) the nvgpu module (the kernel driver) is loaded, and (B) the GUI login is using the NVIDIA user-space driver. That implies that CUDA or other GPU software will work if it knows the correct version to search for.
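A minimal sketch of checking fact (A), assuming a Linux system with lsmod available (on a non-Jetson machine this will simply report the module as not loaded):

```shell
# lsmod lists currently loaded kernel modules; nvgpu is the
# Jetson iGPU kernel driver referred to above.
if lsmod | grep -q nvgpu; then
    echo "nvgpu module loaded"
else
    echo "nvgpu module not loaded"
fi
```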
If you can verify (from what @Kangalow is asking) the Python information and OpenCV version, it might be possible to fix the situation just by changing how the software is being started/invoked. Just be careful to not install software which expects a dGPU on the PCI bus. CUDA and some other software which is available for install via JetPack is intended for use with the iGPU.
That’s an awful lot going on there. The script that builds OpenCV specifies the installation directory. PYTHONPATH is used afterwards to tell Python where it was installed.
I can’t really comment on much more than the OpenCV install here. Personally, I only work on one install issue at a time. I will note that NVIDIA has a large amount of learning resources with pre-built Docker images for what you are attempting to do: PyTorch with GPU support and so on.
There’s Hello AI World: GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
And there’s Jetson-AI-Labs : https://www.jetson-ai-lab.com
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.