Recently the folks at JetBrains published an excellent article comparing the most important LLMs for developers.
The comparison is built around four key parameters:
- Hallucination Rate. Lower is better!
- Speed. Measured in tokens per second.
- Context window size. Measured in tokens: how much of your code the model can keep in memory at once.
- Coding Performance. Several benchmarks measure the quality of the produced code, such as HumanEval (Python), Chatbot Arena (polyglot), and Aider (polyglot).
The article is great, but it does not provide a spreadsheet that anyone can update and keep current. For that reason I decided to turn it into a Google Sheet, which I share with everyone here:
I have enabled comments, and anyone can download it. A link to the original JetBrains article is also inside the document, in the tab called "sources".