Chat RTX: Training models and creating Agents

Hi Everyone,

The idea of being able to train your LLM with your data is incredible. However, the LLM models that are included are not working right. They do well enough, but then they suddenly output weird characters, and if you ask them to tell you certain things they hallucinate. Is it a memory issue or something else? As I understood it, the tool creates some kind of database from your data. Maybe not.

In any case here are some suggestions:

  1. Allow us to add models that are also used in LM Studio or Ollama. It is a pain to download them again, and it looks like everything comes from Hugging Face. Let us rename them before training to ensure that they do not overlap in any way, so the books go through RAG and are then sent to the right agents.
  2. Create a way to designate a model for a subject such as food. Then create agents or web scrapers that collect data, or let you add your own books to the cooking/food model; the tool RAGs the books into chunks, and from 200 cuisines in a book it creates 200 agents that further handle the subcategories (see the sketch after this list).
  3. Create a feedback loop that includes human intervention (chefs, for example) who can correct errors or add more data through an interface. More specifically, create a glossary, dictionary, or some hierarchy of the data so you can analyze what it contains, see whether it is right or wrong, and fix it.
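To make suggestion 2 concrete, here is a minimal sketch of what I mean by RAG-ing a book into chunks and creating one agent per cuisine. Everything in it is hypothetical (the `Agent` class, `chunk_book`, `build_agents`, and the `## Cuisine` heading convention are mine, not part of Chat RTX); it just illustrates the routing idea:

```python
# Hypothetical sketch of suggestion 2: split a cookbook into per-cuisine chunks
# and create one "agent" definition per cuisine. None of these names exist in
# Chat RTX today; this only illustrates the chunk-then-route idea.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Agent:
    cuisine: str
    chunks: list = field(default_factory=list)  # RAG chunks scoped to this cuisine

def chunk_book(text: str, chunk_size: int = 400) -> dict:
    """Group paragraphs under their cuisine heading and cut them into fixed-size chunks."""
    chunks_by_cuisine = defaultdict(list)
    cuisine = "general"
    for block in text.split("\n\n"):
        block = block.strip()
        if block.startswith("## "):          # assume '## Cuisine' headings in the source book
            cuisine = block[3:].lower()
            continue
        for start in range(0, len(block), chunk_size):
            chunks_by_cuisine[cuisine].append(block[start:start + chunk_size])
    return chunks_by_cuisine

def build_agents(chunks_by_cuisine: dict) -> list:
    """One agent per cuisine; each agent only retrieves from its own chunks."""
    return [Agent(cuisine=c, chunks=ch) for c, ch in chunks_by_cuisine.items()]

if __name__ == "__main__":
    book = "## Italian\n\nRisotto needs constant stirring...\n\n## Thai\n\nGreen curry starts with the paste..."
    for agent in build_agents(chunk_book(book)):
        print(agent.cuisine, len(agent.chunks), "chunk(s)")
```

In a real tool the chunks would of course go into the vector index with the cuisine as a tag, and suggestion 3's human reviewers would edit those tagged chunks through an interface.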

Right now it does not search the web, and you cannot add YouTube videos. I have seen it do that in a promotional video, but the feature is not there.

This is the most promising thing out there, besides a real working model like GPT-4.

I appreciate this. Also, if you open-source the code, I will be willing to help get it done.

Thanks

Javier
The Creator

I agree with all you said. But what is the definition of ‘Agents’ in this context? I’m not a newbie to chatbot content creation, but I am a newbie when it comes to Tensor-RAG context segment control, tags, and context indexing rules. As an alpha product, I understand the lack of documentation, but there should be a warning that LLM datasets have similar, but not identical, syntax rules. Hence the need for NVIDIA-specific examples of “Mistral 7B int4” dataset syntax and formatting rules.