Draft:Measuring Massive Multitask Language Understanding - Pro
In artificial intelligence, Measuring Massive Multitask Language Understanding - Pro (MMLU-Pro) is a benchmark for evaluating the capabilities of large language models.[1]
Benchmark
It consists of about 12,000 multiple-choice questions spanning 14 academic subjects, including mathematics, physics, chemistry, law, engineering, psychology, and health. It is one of the most commonly used benchmarks for comparing the capabilities of large language models.
MMLU-Pro was released by Yubo Wang and a team of researchers in 2024[2] and was designed to be more challenging than then-existing benchmarks such as Measuring Massive Multitask Language Understanding (MMLU), on which new language models were achieving better-than-human accuracy. Each MMLU-Pro question offers ten answer options rather than MMLU's four, reducing the accuracy of random guessing to 10%. At the time of MMLU-Pro's release, the best-performing model, GPT-4o, achieved 72.6% accuracy,[2] well below the roughly 90% accuracy the benchmark's developers estimate for human domain experts.[2]
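Models are scored by simple answer accuracy over the question set. The sketch below illustrates how such an evaluation might be run in Python; the dataset identifier TIGER-Lab/MMLU-Pro and the field names (question, options, answer_index) are assumptions based on the public HuggingFace dataset card,[3] and answer_with stands in for any model's prediction function.

```python
import random

from datasets import load_dataset


def evaluate(answer_with, split="test"):
    """Score a prediction function on MMLU-Pro-style questions.

    `answer_with` takes a question string and a list of option strings
    and returns the index of the chosen option.
    """
    # Dataset identifier and field names assumed from the HuggingFace card.
    ds = load_dataset("TIGER-Lab/MMLU-Pro", split=split)
    correct = 0
    for row in ds:
        # Each question offers up to ten options; "answer_index" is the
        # position of the correct choice.
        if answer_with(row["question"], row["options"]) == row["answer_index"]:
            correct += 1
    return correct / len(ds)


# A random-guessing baseline: with ten options per question, this
# scores near the 10% chance level cited for MMLU-Pro.
print(evaluate(lambda question, options: random.randrange(len(options))))
```

Because each question has ten options, the random-guessing policy in the last line lands near the benchmark's 10% chance baseline, whereas the original four-option MMLU format put that baseline at 25%.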
The table below lists reported MMLU-Pro scores for several leading large language models.

Organisation | LLM | MMLU-Pro score (%)
---|---|---
Anthropic | Claude 3.5 Sonnet[4] | 76.12
Google | Gemini-1.5 Pro[5] | 75.8
xAI | Grok-2[6] | 75.46
Rubik's AI | Nova-Pro[7] | 74.2
OpenAI | GPT-4o | 72.55
References
- ^ Roose, Kevin (15 April 2024). "A.I. Has a Measurement Problem". The New York Times.
- ^ a b c Wang, Yubo; Ma, Xueguang; Zhang, Ge; Ni, Yuansheng; Chandra, Abhranil; Guo, Shiguang; Ren, Weiming; Arulraj, Aaran; He, Xuan; Jiang, Ziyan; Li, Tianle; Ku, Max (2024). "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark". arXiv:2406.01574 [cs.CL].
- ^ "MMLU-Pro Dataset". HuggingFace. 24 July 2024.
- ^ "Introducing Claude 3.5 Sonnet". www.anthropic.com.
- ^ "Gemini Pro". Google DeepMind. September 26, 2024.
- ^ "Grok-2 Beta Release". x.ai.
- ^ AI, Rubik's. "Nova Release - Introducing Our Latest Suite of LLMs". rubiks.ai.