How to run a local LLM with CUDA 10.2 support

@aniolekx if you follow this thread, Jetson support appears to be present in ollama, dating back to the Nano / CUDA 10.2.

You may need to compile it from source. If you run into issues, please file them against the upstream ollama repo, which maintains the project.
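
In case it helps: once you have an ollama binary running on the Jetson (the upstream README has historically documented building from source with `go generate ./...` followed by `go build .`), you can drive it over its local HTTP API. Below is a minimal sketch in Go against the `/api/generate` endpoint on the default port 11434; the model name `tinyllama` is just a placeholder for whichever model you have pulled locally.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Request/response shapes for ollama's /api/generate endpoint.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// "tinyllama" is illustrative; substitute any model you have pulled.
	reqBody, err := json.Marshal(generateRequest{
		Model:  "tinyllama",
		Prompt: "Why is the sky blue?",
		Stream: false, // ask for one complete response instead of a stream
	})
	if err != nil {
		panic(err)
	}

	// ollama listens on localhost:11434 by default.
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(reqBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response)
}
```

This is just a sanity check that the server is up and generating; on a Nano-class board expect the first response to be slow while the model loads.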