Difficulty with Building RAG Agents with LLMs (Gradio)

@luziferangle @malley
Hey y'all! Sorry about the delay; feel free to @ me directly as necessary.
A blanket recommendation is to replicate the RemoteRunnable's usage in the frontend Python server and check whether it would still work if the chain were substituted in completely as-is. For example, if the frontend server has the usage:

chain = some_prompt | remote_chain | StrOutputParser()

then the chain that is being deployed should just be the LLM.
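For reference, here's a minimal sketch of that split. The endpoint path, port, and model name are placeholders for whatever you're actually deploying, so treat this as a shape to match rather than exact values:

# server.py (hypothetical) -- the deployed chain is just the LLM, nothing else
from fastapi import FastAPI
from langserve import add_routes
from langchain_nvidia_ai_endpoints import ChatNVIDIA  # or whichever LLM class you're using

app = FastAPI()
llm = ChatNVIDIA(model="mistralai/mixtral-8x7b-instruct-v0.1")  # placeholder model name

# Expose the bare LLM; the prompt and parser do NOT live on this side
add_routes(app, llm, path="/basic_chat")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=9012)

# frontend.py (hypothetical) -- prompt and parser wrap the RemoteRunnable here
from langserve import RemoteRunnable
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

remote_chain = RemoteRunnable("http://localhost:9012/basic_chat/")  # placeholder URL
some_prompt = ChatPromptTemplate.from_template("Answer briefly: {question}")

chain = some_prompt | remote_chain | StrOutputParser()
print(chain.invoke({"question": "What does the remote endpoint serve?"}))

If that round trip works with the bare LLM deployed, you can be fairly confident the RemoteRunnable plumbing itself is fine and any remaining issue is in how the chain is composed.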

I also suspect that invoke isn't directly implemented for just the retriever component (didn't confirm, just a gut feeling); when it's used inside a chain, the default call path handles it as part of the invocation passthrough. Perhaps you'll have better luck if you wrap the RemoteRunnable in a RunnableLambda? Something like the sketch below.
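
Again a hedged sketch with a placeholder retriever URL, not verified against your deployment:

from langchain_core.runnables import RunnableLambda
from langserve import RemoteRunnable

remote_retriever = RemoteRunnable("http://localhost:9012/retriever/")  # placeholder URL

# Wrapping the remote call in a RunnableLambda forces the chain to route
# through an explicit invoke on the RemoteRunnable
retriever = RunnableLambda(lambda query: remote_retriever.invoke(query))

# Quick standalone test before dropping `retriever` into the larger chain
docs = retriever.invoke("test query")
print(docs)

If the standalone invoke works but the chain still fails, that would narrow the problem down to how the retriever's output is being passed to the next component.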