GPU cannot be used when running the inference model via a curl command from a Mac

Hi, my issue is still not solved.
The symptoms of my issue were described in this link:

https://devtalk.nvidia.com/default/topic/1065197/jetson-agx-xavier/gpu-can-not-be-used-when-the-java-calling-/post/5394329/#5394329

I have some questions:

After I run “import tensorflow as tf”, there is no output like this:

“successfully opened CUDA library libcublas.so.10.0 locally”
or any other message showing that TensorFlow opened the CUDA libraries successfully.

When we run the Python code directly on the Xavier, the output “successfully opened CUDA library libcublas.so.10.0 locally” does appear, and soon after it appears the GPU usage can reach 100%. Before that output appears, the GPU does not work, just as when we run the model with a curl command from another computer.
So my guess is that when we use a curl command to run the inference model on the Xavier, TensorFlow cannot open the CUDA libraries properly, and therefore the GPU cannot be used?
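
One way to narrow this down is a minimal check sketch, assuming TensorFlow 1.x as shipped for JetPack 4.2, run from inside the same process and user account that serves the curl requests (not from an interactive shell):

```python
import tensorflow as tf

# Prints True if TensorFlow can see a CUDA-capable GPU from this process.
print("GPU available:", tf.test.is_gpu_available(cuda_only=True))

# Log device placement so each op reports whether it runs on /device:GPU:0.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    print(sess.run(a + b))
```

If this reports no GPU under the service but does report one in an interactive session, the problem is in the serving environment (user, groups, environment variables) rather than in the model itself.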

And why does this happen?

CUDA and cuDNN were installed automatically by JetPack 4.2.
Do we need to set some environment variables?
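
To check the environment-variable question directly, here is a minimal sketch to run under the same account as the service; the library name libcublas.so.10.0 assumes the CUDA 10.0 release from JetPack 4.2:

```python
import ctypes
import os

# Show the library search path seen by this process (it may differ from an
# interactive shell, e.g. when running under GlassFish).
print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH"))

# Try to load the cuBLAS library directly.
try:
    ctypes.CDLL("libcublas.so.10.0")
    print("loaded libcublas.so.10.0")
except OSError as err:
    print("failed to load libcublas.so.10.0:", err)
```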

Hi,

Ideally, it should work normally with Java.

Would you mind sharing how we can reproduce this issue on our side, e.g. source, setup, …?
Then we can pass it to our internal team for more comments.

Thanks.

Hi,

What kind of documents does the internal team need?
I am sorry, but the company does not allow us to share too much.

What kind of cause do you think is most likely?
For example, the environment variable settings?

If the two programs are being run by different users, then perhaps the failing program’s user is not a member of group “video”. Run “grep video /etc/group” to see who is a member of “video”.
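
If it is easier to check this programmatically from within the service itself, here is a minimal Python sketch; the account name “glassfish” is only an assumption, so substitute the user that actually runs the failing program:

```python
import grp
import pwd

# Assumed account name; replace with the user that runs the Java/GlassFish service.
user = "glassfish"

video = grp.getgrnam("video")    # raises KeyError if the "video" group does not exist
account = pwd.getpwnam(user)     # raises KeyError if the user does not exist

# The user has GPU device access via "video" if it is a supplementary member
# of the group or if "video" is its primary group.
in_video = user in video.gr_mem or account.pw_gid == video.gr_gid
print("members of video:", video.gr_mem)
print("%s is in the video group: %s" % (user, in_video))
```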

@linuxdev

Yes, thank you so much.
The issue was solved after I added the glassfish user to the “video” group.
The GPU can now be used properly.

Based on our test results, the Xavier works well when we input the small part images, but when we send the large part images an error occurs, so I think some tuning is necessary for business use.

In the case of smaller data throughput working but higher throughput failing, I’d consider starting a new thread and giving details on how everything is connected, e.g., what is going over USB, how many USB devices share a single root hub (“lsusb -t”), and the same for other connection types, e.g., Ethernet. Also, be sure to mention whether the Jetson was in performance mode during the issue.