State of the art for something you can run on a MacBook Air, but not state of the art for LLMs overall, or even for open source. Yi 34B and Llama2 70B still beat it.
True, but it's ahead of the competition when size is considered, which is why I'm really looking forward to their 13B, 33B, etc. models; if those are as potent, who knows what leaps we'll see soon.
I remember running LLaMA 1 33B 8 months ago; as I recall, it was roughly on Mistral 7B's level, while the 7B models of that time were a rambling mess.