How can the token count in "palmyra-fin-70b-32k" be increased?

Hello,

We are currently working on a small project that uses AI to analyze IT monitoring data.
As a first approach, we decided to use the “palmyra-fin-70b-32k” model.

Although it is designed for financial data, our initial tests with the monitoring data are promising.
We want the AI to identify correlations. According to the model card, the model can handle 32,768 tokens.
For our use case, this equates to approximately 100 log lines, which is,
of course, far too few for a production environment.
What options are there to increase the number of tokens?
What methods or techniques are available to allow the model
to process more data? After all, the advantage of AI should be its ability to process and analyze large amounts of data.


Hi @michael.remy – unfortunately, we don’t control the context length limits of the models provided. You can look for other models with longer context lengths, such as Phi-3 (phi-3-medium-128k-instruct Model by Microsoft | NVIDIA NIM), or perform some kind of filtering before sending data to the LLM.
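One common way to work around a fixed context window is to chunk the log data before sending it to the model: split the lines into batches that each fit within the token budget, then query the model once per batch. Here is a minimal sketch of that idea. Note the token estimate (~4 characters per token) is a rough rule of thumb, not the model’s actual tokenizer, and the function and parameter names are illustrative assumptions, not part of any NVIDIA API:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text.

    This is an approximation; for exact counts you would use the
    model's own tokenizer.
    """
    return max(1, len(text) // 4)


def chunk_log_lines(lines, max_tokens=32_768, reserved_for_prompt=2_048):
    """Group log lines into chunks that stay under the token budget.

    `reserved_for_prompt` leaves room for the system/instruction prompt
    and the model's response; both numbers are illustrative defaults.
    """
    budget = max_tokens - reserved_for_prompt
    chunks, current, used = [], [], 0
    for line in lines:
        cost = estimate_tokens(line)
        # Flush the current chunk before this line would exceed the budget.
        if current and used + cost > budget:
            chunks.append("\n".join(current))
            current, used = [], 0
        current.append(line)
        used += cost
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Each chunk can then be sent to the model in a separate request, and the per-chunk findings summarized in a final pass (a simple map-reduce pattern). The trade-off is that correlations spanning two different chunks can be missed, which is why pre-filtering to the relevant log lines, as suggested above, often helps more than chunking alone.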

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.