Originally published at: Build Your First Human-in-the-Loop AI Agent with NVIDIA NIM | NVIDIA Technical Blog
AI agents powered by large language models (LLMs) help organizations streamline and reduce manual workloads. These agents use multilevel, iterative reasoning to analyze problems, devise solutions, and execute tasks with various tools. Unlike traditional chatbots, LLM-powered agents automate complex tasks by effectively understanding and processing information. To avoid potential risks in specific applications, maintaining human…
I was very excited to try out the models hosted on NVIDIA NIM, but the blog's step-by-step walkthrough is not beginner friendly. For example, the state class for the StateGraph is never defined, and it would take someone already familiar with building agents in LangGraph to know that they also need to create a state for the agent.
Other than that, it's a really nice post. I'm going to complete its implementation using my knowledge of LangGraph, and I'm looking forward to testing out the human-in-the-loop component.
Hello, thank you so much for trying this out. Here is the link to the full source code in Jupyter notebook format: GenerativeAIExamples/RAG/notebooks/langchain/NIM_tool_call_HumanInTheLoop_MultiAgents.ipynb at main · NVIDIA/GenerativeAIExamples. The notebook includes the LangGraph state definition and the other pieces that may be missing from the technical blog. Please note that the Python environment setup can be found here: GenerativeAIExamples/RAG/notebooks at main · NVIDIA/GenerativeAIExamples. Hopefully this helps you get started. Please do ask if you have more questions, and we always welcome feedback.
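For anyone else who hit the missing-state issue before finding the notebook: a LangGraph `StateGraph` needs a state schema to be constructed. Below is a minimal sketch of what such a schema typically looks like, using only the standard library; the field names (`messages`, `next_agent`) are assumptions for illustration, not the notebook's exact definition. In the actual notebook, LangGraph's own reducers (e.g. `add_messages`) are used instead of `operator.add`.

```python
import operator
from typing import Annotated, TypedDict

class AgentState(TypedDict):
    # Annotated with operator.add so values returned by each graph node
    # are appended to the running list instead of replacing it.
    messages: Annotated[list, operator.add]
    # Hypothetical routing field: which sub-agent should act next.
    next_agent: str

# Example of constructing an initial state for the graph.
initial_state = AgentState(messages=[], next_agent="content_creator")
```

With a schema like this in scope, the blog's `StateGraph` construction would look roughly like `graph = StateGraph(AgentState)` before adding nodes and edges.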
Hi @zcharpy ,
This blog really seems helpful! I have tried the technical blog's AI agent. When I execute the final step, select option 1, and confirm with y, I am often prompted with this error message. Could you please help? Awaiting your response.
Also, as noted in the GitHub notebook, text-to-image is unavailable at the moment, so I tried other NIMs that support text-to-image, but I am unable to use them:
"https://ai.api.nvidia.com/v1/genai/briaai/bria-2.3"
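For reference, hosted NIM endpoints on build.nvidia.com are generally invoked with a bearer-authenticated POST. The sketch below assembles such a request against the endpoint quoted above; the request body field (`"prompt"`) and the response shape are assumptions based on the common invoke pattern, not the documented bria-2.3 schema, so check the model card on build.nvidia.com for the exact payload.

```python
import json
import urllib.request

# Endpoint quoted in the comment above.
INVOKE_URL = "https://ai.api.nvidia.com/v1/genai/briaai/bria-2.3"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble the POST request; does not send it."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # nvapi-... key from build.nvidia.com
        "Accept": "application/json",
        "Content-Type": "application/json",
    }
    # "prompt" is an assumed field name; consult the model card for the real schema.
    body = json.dumps({"prompt": prompt}).encode()
    return urllib.request.Request(INVOKE_URL, data=body, headers=headers, method="POST")

# To actually invoke the NIM (requires a valid API key):
# req = build_request("a robot painting a mural", "nvapi-...")
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```

A 401/403 response here usually points to a key problem, while a 404 suggests the model is not currently served at that path, which may be the case given the availability issues mentioned in this thread.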
That's odd. Could you add one more line above line 18 to print the state, then copy and paste the result here for me to help you debug this?
@zcharpy: it seems to be working fine now and I am able to get the Content Creator part as output. However, I am facing issues with the Digital Artist: I see the NIM is not available on build.nvidia.com. Could you suggest something on this, please?