It's how it was with the internet. I grew up in the 90s, and teachers didn't know how to deal with the fact that we no longer had to go through multiple books in the library to find the information we needed. We barely even needed to write it down.
Now nobody expects students to not use the internet. Same here: teachers must accept that AI can and will write papers, answer questions, and do homework. How you test students must be reinvented.
This seems like just about the worst possible response. It manages to hurt wrists not used to long handwriting sessions, completely avoids teaching students how to use and attribute AI responsibly, and still probably just results in kids handwriting AI-generated slop anyway.
It also disadvantages people with disabilities. How exactly are they supposed to do these papers and tests? Taking blindness as an example: dictate everything to someone else? That seems very inefficient and extremely error-prone.
As someone with an actual visual impairment, please do not use my affliction to justify generalized use of AI. Educational assistance for those with disabilities is not new; AI will likely have a role, but exactly how remains to be seen.
As someone who is myself legally blind, I am in no way justifying the use of AI like this. I was responding to the "let's all go back to actual paper-based tests/assignments" trope being trotted out on here. Sure, it might work, but it also disadvantages people like us, since most teachers can't read braille (at least, none of mine could).
So your whole point boils down to "do it in person"; what's the point of handwriting it, then? The handwriting requirement is performative in either case. It's pointless. Why not specify that it be written in cuneiform?
We've been writing with our hands for thousands of years. I suspect that on balance a Butlerian Jihad against AI slop would be perfectly fine for our hands.
There is an obvious reason why LLM use should be discouraged in classwork focused on writing: the process that's needed for a brain to learn the skills can't be outsourced.
The Internet is different. Even with access to websites like Wikipedia, you had to write your own content. Plagiarism was easily detectable.
We shouldn't confuse "we don't have a solution at the moment" with "we should completely abandon no-LLM education". Like with social media, we can always change the direction of progress.
When I was in high school, we were not allowed to use calculators for most science classes, and certainly not for math class. In ten years, will you want to hire a student who is coming out of college without considerable experience and practice with AI?
LLMs work best when the user has considerable domain knowledge of their own that they can use to guide the LLM. I don't think it's impossible to develop that experience if you've only used LLMs, but it requires a very unusual level of personal discipline. I wouldn't bet on a random new grad having that. Whereas it's pretty easy to teach people to use LLMs.
Kids need to learn the fundamentals first and best. They can learn the tools near the end of school or even on the job.
I loved computer art and did as many technical art classes at university as I could. At the beginning of the program I was the fastest in the class, because we were given reference art to work from to learn the tools. By the end of the class I couldn't finish assignments because I wasn't creative enough to work from scratch. Ultimately I realized art wasn't my calling, despite some initial success.
Other kids blew me away with the speed of their creations. And how they could detach emotionally from any one piece, to move on to the next.
Yes, it is much easier to train someone to use AI than to train them to have sufficiently baked-in math and language skills to be able to leverage the AI.
Why would I want to hire such a student?
What makes him a better pick than all the other students using AI, or all the other non-students using AI?
Should I, by some miracle, be hiring, I'd be hiring those who come out of college with a solid education. As many have pointed out, AI is not immune to the "garbage in, garbage out" principle and it's education that enables the user to ask informed and precisely worded questions to the AI to get usable output instead of slop.
You overestimate the level of investment the average person can and will make for these freedoms. People buy Kindles because they work (and are heavily marketed), they buy Apple because it simply works, and they will keep preferring Windows to Linux until Linux offers a lower barrier to entry.
Microsoft will lose (almost already has lost) its advantage to Apple before it loses to Linux.
Which distro and shell have you tried? I believe that makes a lot of difference, and there are distros catering to average users, but I have no idea whether they work.
I love this article. It sums up everything I think is wrong in our line of work.
Things have become very... amateurish. Think of the way these apps got to where they are: who decided to put all those icons there? Probably someone without a good understanding of usability who, lacking domain knowledge, looked at other apps and thought the icons looked pretty, while only partly understanding their purpose.
Why is that happening? I have theories... hypotheses. Maybe too many managerial types are calling the shots. Maybe we needed more workers than we were able to educate, and the average skill level dropped (a lot). Maybe companies realized that poor quality doesn't matter, because customers either don't have other choices or the alternatives are just as bad.
Good times. Although I have to say, I was getting sick of SO before the LLM age. Moderation felt a bit tyrannical, with a fourth of all my questions getting closed as off-topic, and a lot of aggressive comments all around the site ("do your homework", "show proof", etc.).
Back when I was an active member (10k reputation), we would rush to give answers to people, instead of angrily downvoting questions and making snarky comments.