Hi developers, check this out: Artificial Analysis has released a new Openness Index, a framework for evaluating models along two critical, measurable dimensions:

- Model Availability: can we actually use and modify the model freely (license, weights access)?
- Model Transparency: can we audit and reproduce its creation (data, methodology, code)?
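To make the two-pillar idea concrete, here's a minimal sketch of a checklist-based rubric. Everything in it (the specific criteria, the boolean checks, the equal pillar weighting) is an illustrative assumption on my part, not the actual Artificial Analysis methodology — see the linked article for their real scoring.

```python
from dataclasses import dataclass

@dataclass
class ModelOpenness:
    # Availability criteria: can we use and modify it freely?
    open_license: bool          # permissive license for use/modification
    weights_released: bool      # weights publicly downloadable
    # Transparency criteria: can we audit and reproduce its creation?
    data_disclosed: bool        # training data documented
    methodology_published: bool # training methodology described
    training_code_released: bool

    def availability(self) -> float:
        checks = [self.open_license, self.weights_released]
        return sum(checks) / len(checks)

    def transparency(self) -> float:
        checks = [self.data_disclosed, self.methodology_published,
                  self.training_code_released]
        return sum(checks) / len(checks)

    def score(self) -> float:
        # Equal weighting of the two pillars -- an assumption, not the index's.
        return 0.5 * self.availability() + 0.5 * self.transparency()

# Example: open weights and license, but only methodology disclosed.
m = ModelOpenness(True, True, False, True, False)
print(round(m.score(), 2))  # -> 0.67 (full availability, 1/3 transparency)
```

A rubric like this makes the trade-off visible: an open-weights model with an undisclosed training pipeline scores high on availability but low on transparency, which is exactly the distinction the index is drawing.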
What are your initial thoughts?
Do these two pillars—Availability and Transparency—cover the most important aspects of model openness for the research community?
Check out the index and how the top models score: https://artificialanalysis.ai/articles/announcing-artificial-analysis-openness-index