Building RAG Agents with LLMs

Hello~ I am a college student from Taiwan~ I have run into a problem with deploying Gradio at the final step of the assessment. I am really confused about this situation and have no idea how to solve it. What should I do?
Thank you very much~

This is my problem:

It seems that the server is running successfully:
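For context, a minimal sketch of what the server side typically looks like in this setup (assuming a FastAPI app with LangServe's `add_routes`; the port, route paths, and model name below are my assumptions, not the exact assessment code):

```python
# server_app.py (sketch) -- assumed setup, not the exact assessment code
from fastapi import FastAPI
from langserve import add_routes
from langchain_nvidia_ai_endpoints import ChatNVIDIA

app = FastAPI()

# Model name is an assumption; any chat model runnable works the same way.
llm = ChatNVIDIA(model="meta/llama3-8b-instruct")

# Each route exposes a runnable over HTTP so the frontend can reach it.
add_routes(app, llm, path="/basic_chat")
# /retriever and /generator would be added the same way with their own chains.

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=9012)  # port is an assumption
```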

And I’ve tested the basic_chat, retriever, and generator endpoints:
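(For reference, a minimal sketch of how such endpoint tests are usually done with LangServe's `RemoteRunnable` client; the host, port, and paths are assumptions:)

```python
from langserve import RemoteRunnable

# Each remote endpoint behaves like a local runnable once wrapped.
basic_chat = RemoteRunnable("http://localhost:9012/basic_chat/")
retriever = RemoteRunnable("http://localhost:9012/retriever/")

print(basic_chat.invoke("Hello!"))       # simple round trip to the chat route
print(retriever.invoke("What is RAG?"))  # should return retrieved documents
```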

@vkudlay I need some help here~~~~

Hey @luziferangle, this looks really close. If you look into frontend/server_app.py, you’ll see how the RemoteRunnables actually get used in the frontend (and why the simple assumptions it makes about what you should actually deliver are reasonable and justified). You’ll probably notice that the retriever shouldn’t really operate on its own like it does in your local example…
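A minimal sketch of that kind of composition (the route URLs and the generator's expected input keys here are assumptions; the authoritative version is whatever frontend/server_app.py actually builds):

```python
from langserve import RemoteRunnable
from langchain_core.runnables import RunnablePassthrough

# Wrap the routes served by the student's server as remote runnables.
retriever = RemoteRunnable("http://localhost:9012/retriever/")
generator = RemoteRunnable("http://localhost:9012/generator/")

# The retriever is chained into the generator rather than invoked on its own:
# its retrieved documents become the context the generator's prompt consumes.
# The "input"/"context" keys are assumptions about the generator's input schema.
rag_chain = (
    {"input": RunnablePassthrough(), "context": retriever}
    | generator
)

print(rag_chain.invoke("How do I deploy the Gradio frontend?"))
```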