Hello,
We are currently working on a small project to analyze IT monitoring data using AI.
As a first approach, we decided to use the “palmyra-fin-70b-32k” model.
Although it is designed for financial data, our initial tests on the monitoring data are promising.
We want the AI to identify correlations. According to the model card, the model can handle a context of 32,768 tokens.
For our use case, that equates to approximately 100 log lines, which is,
of course, far too few for a production environment.
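
For what it's worth, this is roughly how we arrived at that estimate; a minimal sketch, assuming the tokenizer is published on Hugging Face under the identifier shown (the identifier and the helper name are our own placeholders):

```python
from transformers import AutoTokenizer

# Assumption: the tokenizer is available on Hugging Face under this
# identifier; substitute whatever checkpoint you actually deploy.
tokenizer = AutoTokenizer.from_pretrained("Writer/Palmyra-Fin-70B-32K")

CONTEXT_LIMIT = 32_768  # per the model card

def lines_that_fit(log_lines: list[str], budget: int = CONTEXT_LIMIT) -> int:
    """Return how many consecutive log lines fit into the token budget."""
    used = 0
    for i, line in enumerate(log_lines):
        used += len(tokenizer.encode(line))
        if used > budget:
            return i
    return len(log_lines)
```

With our monitoring data, this lands at roughly the 100 lines mentioned above.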
What options are there to increase the number of tokens the model can work with?
What methods or techniques would allow the model
to process more data? After all, the advantage of AI should be its ability to process and analyze large amounts of data.
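
To make the question concrete: is a naive chunk-and-merge loop like the sketch below the usual workaround, or are there better-suited techniques? (`query_model` stands in for whatever client library we would end up using; the prompts and the token budget are made up.)

```python
# A deliberately naive map-reduce sketch: pack log lines into windows
# that fit the context, query the model per window, then merge the
# partial findings in a final call.

CHUNK_BUDGET = 24_000  # leave headroom for prompt and answer within 32,768

def chunk_logs(log_lines, tokenizer, budget=CHUNK_BUDGET):
    """Greedily pack log lines into token-budgeted chunks."""
    chunks, current, used = [], [], 0
    for line in log_lines:
        n = len(tokenizer.encode(line))
        if current and used + n > budget:
            chunks.append("\n".join(current))
            current, used = [], 0
        current.append(line)
        used += n
    if current:
        chunks.append("\n".join(current))
    return chunks

def analyze_logs(log_lines, tokenizer, query_model):
    """Map: look for correlations per chunk. Reduce: merge the partial reports."""
    partials = [
        query_model(f"Identify correlations in these monitoring logs:\n{chunk}")
        for chunk in chunk_logs(log_lines, tokenizer)
    ]
    return query_model(
        "Merge these partial findings into one report:\n" + "\n---\n".join(partials)
    )
```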