I am encountering a problem on my NVIDIA Jetson Xavier NX when running Python code that uses the Sentence Transformer library to compute sentence embeddings. The code works fine on Intel-based machines (e.g., laptops), but on the Jetson Xavier NX it crashes with a “Segmentation fault (core dumped)” error.
I’ve tried different Sentence Transformer models, and the issue persists. I suspect it might be related to memory allocation or compatibility with the ARM architecture of the Jetson Xavier NX.
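For reference, here is a minimal sketch of the kind of code that triggers the crash (the model name and sentences below are only placeholders, not my exact script):

```python
from sentence_transformers import SentenceTransformer

# Placeholder model name; the segmentation fault occurs with every model I have tried
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "This is an example sentence.",
    "Each sentence is converted to a vector.",
]

# On the Jetson Xavier NX the process dies with "Segmentation fault (core dumped)"
# around this call; on Intel laptops it completes normally.
embeddings = model.encode(sentences)
print(embeddings.shape)
```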
Here are my questions:
How can I adjust memory allocation or flags to ensure that my code has enough resources to run on the Jetson Xavier NX?
Are there any known compatibility issues with SentenceTransformer or Hugging Face AutoModel on ARM devices?
How can I access more detailed debugging information to pinpoint the cause of the segmentation fault? I tried NCCL_DEBUG=INFO, but nothing prints to the terminal (see the debugging sketch below).
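Regarding the last question, this is roughly how I plan to gather more detail, using the standard-library faulthandler module (just a sketch; suggestions for a better approach are welcome):

```python
import faulthandler

# Dump the Python-level traceback if the interpreter receives a fatal signal such as SIGSEGV
faulthandler.enable()

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model name
embeddings = model.encode(["test sentence"])
print(embeddings.shape)
```

I also intend to run the same script under gdb (gdb --args python3 script.py, then run and bt after the crash) to capture the native stack trace, since the fault most likely originates in a compiled extension rather than in Python itself.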
I appreciate any insights or suggestions to help resolve this issue. Thank you for your assistance!
Thank you for your response and the suggestion to explore the DeepStream SDK.
While I appreciate the recommendation, our primary concern at the moment is resolving the segmentation fault I’m encountering with my current software stack. The issue appears to be unrelated to the choice of deep learning inference framework.
I would like to kindly request input from the community regarding this specific segmentation fault. Any insights or solutions related to this issue would be greatly appreciated.
Once again, thank you for your assistance, and I look forward to hearing from other forum members who may have encountered similar challenges.