Building RAG Agents with LLMs assessment problems

Hi,

@vkudlay, sorry to bother you. I have encountered some problems while doing the assessment.

%%writefile server_app.py

## 🦜️🏓 LangServe | 🦜️🔗 LangChain

from fastapi import FastAPI
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings
from langserve import add_routes

## May be useful later

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate
from langchain_core.prompt_values import ChatPromptValue
from langchain_core.runnables import RunnableLambda, RunnableBranch, RunnablePassthrough
from langchain_core.runnables.passthrough import RunnableAssign
from langchain_community.document_transformers import LongContextReorder
from functools import partial
from operator import itemgetter

from langchain_community.vectorstores import FAISS

## TODO: Make sure to pick your LLM and do your prompt engineering as necessary for the final assessment

embedder = NVIDIAEmbeddings(model="nvidia/nv-embed-v1", truncate="END")
instruct_llm = ChatNVIDIA(model="meta/llama3-8b-instruct")
llm = instruct_llm | StrOutputParser()

app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="A simple api server using Langchain's Runnable interfaces",
)

## NOTE: run `!tar xzvf docstore_index.tgz` in a separate notebook cell before starting
## the server; a bare `!` line is not valid Python inside server_app.py

docstore = FAISS.load_local("docstore_index", embedder, allow_dangerous_deserialization=True)
docs = list(docstore.docstore._dict.values())

#####################################################################
def docs2str(docs, title="Document"):
    """Useful utility for making chunks into context string. Optional, but useful"""
    out_str = ""
    for doc in docs:
        doc_name = getattr(doc, 'metadata', {}).get('Title', title)
        if doc_name: out_str += f"[Quote from {doc_name}] "
        out_str += getattr(doc, 'page_content', str(doc)) + "\n"
    return out_str

chat_prompt = ChatPromptTemplate.from_template(
    "You are a document chatbot. Help the user as they ask questions about documents."
    " User messaged just asked you a question: {input}\n\n"
    " The following information may be useful for your response: "
    " Document Retrieval:\n{context}\n\n"
    " (Answer only from retrieval. Only cite sources that are used. Make your response conversational)"
    "\n\nUser Question: {input}"
)

def output_puller(inputs):
    """Output generator. Useful if your chain returns a dictionary with key 'output'"""
    if isinstance(inputs, dict):
        inputs = [inputs]
    for token in inputs:
        if token.get('output'):
            yield token.get('output')
#####################################################################

long_reorder = RunnableLambda(LongContextReorder().transform_documents) ## GIVEN
context_getter = itemgetter('input') | docstore.as_retriever() | long_reorder | docs2str  ## TODO
retrieval_chain = {'input' : (lambda x: x)} | RunnableAssign({'context' : context_getter})

## Chain 2 Specs: retrieval_chain -> generator_chain
##   -> {"output" : , ...} -> output_puller

generator_chain = chat_prompt | llm ## TODO
generator_chain = {'output' : generator_chain} | RunnableLambda(output_puller)  ## GIVEN
#####################################################################

rag_chain = retrieval_chain | generator_chain

## PRE-ASSESSMENT: Run as-is and see the basic chain in action

add_routes(
    app,
    instruct_llm,
    path="/basic_chat",
)

## ASSESSMENT TODO: Implement these components as appropriate

add_routes(
    app,
    generator_chain,
    path="/generator",
)

add_routes(
    app,
    retrieval_chain,
    path="/retriever",
)

## Might be encountered if this were for a standalone python file...

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=9012)
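For what it's worth, `[Errno 111] Connection refused` usually just means nothing is listening on port 9012 yet: `%%writefile` only saves `server_app.py`, so the server still has to be launched (e.g. `subprocess.Popen([sys.executable, "server_app.py"])` or `!python server_app.py &` in another cell) before any `RemoteRunnable` cell can connect. A small helper like this sketch (the `wait_for_port` name is my own, not from the course) can poll until the port is actually open:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP port accepts connections, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the server is accepting clients.
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)  # server not up yet; retry
    return False

# Demo against a throwaway local listener (no LangServe needed):
listener = socket.socket()
listener.bind(("localhost", 0))   # 0 = let the OS pick a free port
listener.listen(1)
demo_port = listener.getsockname()[1]
print(wait_for_port("localhost", demo_port, timeout=5))  # True: something is listening
listener.close()
```

After launching the server, `wait_for_port("localhost", 9012)` returning `False` would mean the process died on startup, so the thing to check is the server's own logs rather than the client cells.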

I don't know if this is right, but I get the error `[Errno 111] Connection refused` when I run this:

from langserve import RemoteRunnable
from langchain_core.output_parsers import StrOutputParser

llm = RemoteRunnable("http://0.0.0.0:9012/basic_chat/") | StrOutputParser()
for token in llm.stream("Hello World! How is it going?"):
    print(token, end='')

from langserve import RemoteRunnable
from langchain_core.output_parsers import StrOutputParser

retriever = RemoteRunnable("http://0.0.0.0:9012/retriever/") | StrOutputParser()
for token in retriever.stream("Tell me about RAG"):
    print(token, end='')
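One thing I noticed while debugging (a guess from the code above, not something the grader told me): `/retriever` serves `retrieval_chain`, which streams dicts like `{'input': ..., 'context': ...}` rather than plain strings, so piping it through `StrOutputParser` may not behave like the `/basic_chat` case. On the server side it is `output_puller` that unwraps `{'output': ...}` dicts into text; here is that same function exercised standalone with plain dicts, no server required:

```python
def output_puller(inputs):
    """Output generator: yields the 'output' value from a dict or a stream of dicts."""
    if isinstance(inputs, dict):      # non-streaming case: a single result dict
        inputs = [inputs]
    for token in inputs:              # streaming case: a sequence of partial dicts
        if token.get('output'):       # skip chunks with a missing or empty output
            yield token.get('output')

# A single dict (e.g. the result of .invoke()):
print(''.join(output_puller({'output': 'full answer'})))        # full answer
# A stream of partial dicts (e.g. what .stream() yields):
chunks = [{'output': 'Hel'}, {'output': 'lo'}, {'output': ''}]
print(''.join(output_puller(chunks)))                           # Hello
```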

I also tried this in notebook 3.5, and I am pretty sure I executed every cell, but it still doesn't work. I also don't know whether this problem is what causes the error above.

I have tried everything I can think of but can't figure it out. I hope someone can tell me what's wrong.

Update: I think I figured it out, and I also passed the assessment.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.