I have a Jetson Nano that I bought in 2020, and since then I have not found a good use for it.
However, now with the recent advances in LLMs, I want to ask the audience whether I can accomplish the following use case: a GPU-accelerated, model-based speech-to-text transcription service that runs entirely locally.
I really do not care how long the transcription takes: if it takes 2 hours for a 1-hour audio call between 2 people, I am OK with that. I am looking for quality output, and it must be done locally on the Nano.
Can you point me in the right direction for how to accomplish this, if it can be done at all?
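To frame the feasibility question a bit, here is my own back-of-envelope check on whether a Whisper-family model (the usual suggestion for local transcription) could even fit in the Nano's 4 GB of shared CPU/GPU memory (2 GB on the 2 GB variant). The parameter counts are the published OpenAI Whisper sizes; the ~2 bytes/parameter figure assumes FP16 weights and ignores activations and runtime overhead, so treat the results as optimistic, not as something I have measured on the board:

```python
# Rough memory-fit check for Whisper model sizes on a Jetson Nano.
# Assumptions (mine, not measured): FP16 weights at ~2 bytes/param,
# and reserving half of the 4 GB shared RAM for the OS and activations.

WHISPER_PARAMS_M = {   # millions of parameters (published OpenAI Whisper sizes)
    "tiny": 39, "base": 74, "small": 244, "medium": 769, "large": 1550,
}

def fits_in_ram(params_m: float, ram_gb: float = 4.0, bytes_per_param: int = 2) -> bool:
    """Return True if the raw weights fit in half the available RAM."""
    weights_gb = params_m * 1e6 * bytes_per_param / 1e9
    return weights_gb < ram_gb * 0.5

for name, params_m in WHISPER_PARAMS_M.items():
    verdict = "plausible" if fits_in_ram(params_m) else "unlikely"
    print(f"{name:>6}: {verdict}")
```

By this rough estimate, everything up through "medium" might fit while "large" would not, which is why I am hoping quantized builds or smaller models can still give good quality at my very relaxed speed requirement.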
Thanks