Running Meta Llama2 model locally - Windows 10

Description

I downloaded all Meta Llama2 models locally (I followed all the installation steps on the Llama GitHub repo), but whenever I try to run the 7B model I get “Distributed package doesn’t have NCCL built in”. This happens even though I have an Nvidia GeForce RTX 3090, CUDA 11.8, PyTorch 2.0.1+cu118, and NCCL 2.16.5.

Environment

Windows 10
Nvidia GeForce RTX 3090
Driver version 536.99
CUDA version 11.8
Python version 3.11

Relevant Files

Here you will find a screenshot of the error.
Thank you in advance.

This screenshot shows the output of the “nvidia-smi” command.

This screenshot shows the output of the “nvcc --version” command.


This screenshot shows the torch version and CUDA availability.
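For anyone reproducing this without the screenshots, a minimal sketch of the same checks (torch version, CUDA availability, and which distributed backends the installed PyTorch build supports) might look like the following. Note that NCCL is Linux-only, so on a Windows PyTorch build `is_nccl_available()` is expected to report `False` regardless of the installed driver or CUDA toolkit:

```python
import torch
import torch.distributed as dist

# Report the installed PyTorch build and whether it can see the GPU.
print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())

# Check which distributed backends this build was compiled with.
# On Windows, NCCL is not available; Gloo is the usual fallback backend.
print("NCCL available: ", dist.is_nccl_available())
print("Gloo available: ", dist.is_gloo_available())
```

If NCCL is reported as unavailable, that matches the error above: the launcher is requesting the NCCL backend from a build that does not include it.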

Hi,

This forum focuses on updates and issues related to TensorRT.
We recommend that you reach out to the relevant platform for better help.

Thank you.