Hacker News

Important caveat with some of the results: they use better prompting techniques for Gemini than for GPT-4, including for their headline MMLU result (CoT@32 for Gemini vs 5-shot for GPT-4). That said, they do also report better results under like-for-like zero-shot prompting further down, e.g., on HumanEval.
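For context, CoT@32 means sampling 32 chain-of-thought completions and taking a majority vote over the final answers, which generally scores higher than a single-pass prompt. A minimal sketch of that voting step, assuming a hypothetical `sample_answer` callable that runs the model once and returns its final answer:

```python
from collections import Counter

def cot_at_k(sample_answer, k=32):
    """Majority vote over k sampled chain-of-thought answers (CoT@k).

    `sample_answer` is a hypothetical stand-in for one model call with a
    chain-of-thought prompt; only its final extracted answer is used.
    """
    answers = [sample_answer() for _ in range(k)]
    # The most frequent final answer wins the vote.
    return Counter(answers).most_common(1)[0][0]

# Toy usage with a stub "model" whose sampled answers cycle B, A, B, C:
samples = iter(["B", "A", "B", "C"] * 8)
print(cot_at_k(lambda: next(samples)))  # "B" (16 of 32 votes)
```

The point of contention in the thread is exactly this: CoT@32 buys extra accuracy through sampling and voting, so charting it against a single 5-shot run is not an apples-to-apples comparison.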


I do find it a bit dirty to use better prompting techniques for one model and then compare the two side by side in a chart like that.



