I think the floating-point representations inside LLMs are inherently lossy, and on top of that there's the way tokenization works. The LLMs I've worked with "ignore" bad spelling and correctly interpret misspelled words. I'm guessing that for an LLM to be good at spelling, you'd want tokenization at the character level rather than byte pair encoding.
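To make that concrete, here's a quick sketch (assuming the tiktoken package and its cl100k_base vocabulary; nothing specific to any one model) of how BPE hides character-level structure from the model:

    # Assumes `pip install tiktoken`. Shows that a BPE tokenizer hands the
    # model sub-word chunks, not characters, so a one-letter misspelling
    # can change the token boundaries entirely.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # a common BPE vocabulary

    for word in ["definitely", "definately"]:
        tokens = enc.encode(word)
        pieces = [enc.decode([t]) for t in tokens]
        print(f"{word!r} -> {pieces}")

    # The correct and misspelled forms typically split into different
    # chunks, so the character-level edit is only visible to the model
    # indirectly, via which tokens it receives.

A character-level tokenizer would give the model one token per letter, which is presumably why it would help for spelling tasks (at the cost of much longer sequences).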
You could probably train any recent LLM to be better than a human at spelling correction, though, where "better" is some fuzzy combination of faster and cheaper with an acceptable loss of accuracy. Or maybe even slightly more accurate.
(A lot of people hate on LLMs for not being perfect, and I don't get it. LLMs are just a tool with their own set of trade-offs; there's no need to get rabid either for or against them. Often, things just need to be "good enough". Maybe people on this forum have higher standards than average and can't deal with the frustration of that cognitive dissonance.)