I was helping my son with his homework. He had to write an essay about why the protagonist of a short story might be female (her gender is never mentioned in the text). Fortunately for him, ChatGPT knew the story and was happy to write an essay with arguments. [Btw, according to him pretty much everyone in class routinely has the AI do their essays]
He was about to send this essay in. But as we were admiring the computer's prose (which is much better than either of us can produce), we found out that some of the arguments actually didn't make much sense. So we went through a rewrite exercise like in the article, improving the essay and our understanding of the issues.
Next time I see her, I'll urge the teacher to adopt a similar approach as in the article.
I am delighted to hear that children are already adopting the new tech. Sure, there will be kinks for a while where no one bothers with the reading or writing assignments. But shortly that will smooth out as everyone gets on even ground.
Writing those essays will become trivial and irrelevant, and they will be replaced with more interesting tasks done with the assistance of AI.
I always think of Star Trek; instead of long-division math questions they will learn skills like “computer, if the trajectory of this rogue planet is reversed and we assume that it was ejected after some massive explosion, calculate the possible causes based on previously observed similar events and report the top candidate cause and confidence interval”.
An example from my academic field: instead of biased, unreproducible research papers we could publish: [1] raw data, [2] hypothesis context, [3] model, [4] seed. The output can be narrated according to your own AI chatbot preferences, but the conclusion is consistent and reproducible.
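To make that concrete, here's a minimal sketch in plain Python (the data, model, and hypothesis are toy stand-ins I made up for illustration). The point is that publishing the seed alongside the data and model makes the numeric result exactly reproducible:

    import random

    SEED = 42                      # [4] seed, published with the paper

    def load_raw_data():           # [1] raw data (toy stand-in)
        return [0.9, 1.1, 1.4, 0.8, 1.2, 1.0]

    def model(data, rng):          # [3] model: a trivial bootstrap mean
        resamples = [sum(rng.choices(data, k=len(data))) / len(data)
                     for _ in range(1000)]
        return sum(resamples) / len(resamples)

    rng = random.Random(SEED)
    estimate = model(load_raw_data(), rng)
    # [2] hypothesis context: "the true mean exceeds 1.0"
    print(f"estimate={estimate:.4f}, supported: {estimate > 1.0}")

Anyone re-running this gets the identical number; only the prose around it would come from your chatbot of choice.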
But surely the point of essay writing is to learn how to structure your thoughts into a coherent argument? If students aren't doing this, how will they learn this skill?
Or just learning how to write - most people simply can't write well. It's great that people are learning how to carefully craft a Google search query to give them the results they want, but they'll never be able to write a vulnerable love letter to their wife.
The point then is that they won't have to anymore.
However, I also think that you learn by imitation. If you have a writing assignment, do a sloppy job, and get a bad grade, what did you learn? But if you do the same assignment and improve it by iterating several prompts, then at least you learn something.
Also, learning requires motivation. It's futile to compare learning methods if there is no motivation (e.g., if we assume students want to cheat). If instead we assume motivation, and we lower the bar required for learning, then these students will learn more with AI than without.
The only point I'm worried about is imagination. For instance, it's easy to ask AI for examples of names, stories, etc. But there is also benefit in spending time searching for these things yourself. Not sure what the tradeoff will be there.
People won't be required to write anymore? Are you high?
The problem with essays is that what they're asking people to do isn't actually what we want people to be able to do. Sure, you can get a chat bot to write out some arguments about why a character in a story might be female. But you can't get it to, say, write about your own experiences, or write your own novel thoughts about something, or basically anything the AI isn't already familiar with.
AI as a spell checker and maybe filler generator works. AI as an essay generator works. AI as a replacement for the stuff you'll actually want to use your writing skills for past school doesn't work.
Good point; the post even reinforces how the students couldn't use AI to do the whole assignment because in addition to writing the essay with ChatGPT, they also had to "write a reflection at the end on how the AI did". The AI has no meaningful idea of how it did, or how to reflect on a student's subjective experience using it.
> The point then is that they won't have to anymore.
They'll certainly still need to use rhetoric and argumentation throughout their lives. That's the point about writing essays as a pedagogical tool; it's not about learning how to write a 3x5 essay, but rather how to construct the arguments that go into that essay.
A decade from now, what happens when one of these students who coasted through school using AI-generated essays is asked to argue on-the-fly, like in a work meeting? What if it's for something high-stakes that could impact their employment or future job trajectory at their company? You need to be able to develop an argument with logical, supporting evidence, and articulate the complete argument in a compelling, persuasive manner -- oftentimes, on-the-fly, out loud.
It sounds clichéd to argue with a student that "you won't always have an AI to help you", but in contrast with a calculator, there are truly scenarios where you definitely _will not_ be able to take the time to reach for an AI crutch to build your argument for you.
> If you have a text assignment...and get a bad grade, what did you learn?
Ideally you learned what mistakes you made and how to avoid them in the future. The reliability of this process will depend a lot on the teacher, but I don't see how putting AI in the mix really changes this. You will surely need to learn /something/ about writing before you could even begin to evaluate the quality of a ChatGPT essay.
In debate class. I’m not convinced that essay writing has ever been a good way of learning how to structure thoughts into a coherent argument. I think essays (especially the terrible 5-paragraph essay) exist mostly because they are easy to grade.
Essays are so static. I think a debate format where you are immediately challenged on your arguments or asked to elaborate would give a much better idea of how well the student has mastered the material.
I know it's easy to get caught up in a few sustained decades of peace and progress (for those that have had it) and imagine that we can always rely on the tools that make our lives easy today, but it's actually quite important that people have foundational skills.
This is cheating and while it might not be picked up in a high school essay, it will certainly be flagged for plagiarism at almost all Universities. It's very dangerous to do this unless your course specifically asks you to answer using AI tools, like the course in the OP.
When you say "it will be flagged", do you mean that it would be considered cheating under existing academic dishonesty policies? Or do you mean that the plagiarism-detection tools currently in use at universities can reliably detect the output of ChatGPT, new Bing, etc. that were only released publicly in the last few months?
If you're just worried about the pragmatics of being caught, I'd keep in mind that ChatGPT doesn't actually have a very high "dynamic range" given some specific set of inputs. If you use it enough, or explore the same topics and prompts enough, you'll quickly start to see structures, words, phrases, presentations that its output gravitates towards.
These may not yet be enough of a watermark to reliably detect what was written by an AI and what wasn't, but it does suggest that the proliferation of AI-written assignment essays will quickly produce a catalog of plagiarism sources that works with existing tools.
In other words, once people submit enough AI-written essays about popular assignment A, existing plagiarism detection tools will easily start to notice the phrases/sequences/structures that it statistically gravitates towards (because that's what it does by design!) and start detecting these in later work. That software won't know what's AI written or not, but it will think that your submission may be plagiarized because a lot of it looks pretty specifically familiar.
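To sketch the mechanism, here's a toy version of the word n-gram overlap that such tools rely on (the example texts and threshold are mine, purely illustrative):

    def ngrams(text, n=5):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_ratio(submission, catalog_texts, n=5):
        # Fraction of the submission's word 5-grams already present in
        # the catalog of essays previously submitted for the assignment.
        sub = ngrams(submission, n)
        if not sub:
            return 0.0
        catalog = set().union(*(ngrams(t, n) for t in catalog_texts))
        return len(sub & catalog) / len(sub)

    previous_essays = [
        "the quiet defiance of the protagonist suggests a perspective "
        "shaped by expectations placed on women of the period",
    ]
    new_essay = ("critics note that the quiet defiance of the protagonist "
                 "suggests a perspective shaped by expectations placed on women")

    # As AI-written submissions pile up, the phrasings the model gravitates
    # toward push this ratio higher; a checker flags anything past a threshold.
    print(f"overlap: {overlap_ratio(new_essay, previous_essays):.0%}")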
Going a bit further... the AI tooling will also be used elsewhere, in things that not-just-students read. So writing style will potentially drift/evolve to resemble AI writers, and this will muddy the detection efforts. I know I've noticed numerous times how I automatically pick up on and start using phrasing (and other tells), matching what others do over the years.
Of course, I wouldn't be particularly surprised if the above effect doesn't manifest. It's very dependent on how much AI writing spreads across numerous areas, and whether kids-these-days do any measurable amounts of long-form reading.
I imagine schools will have to put minimal value on take-home assignments, relying on proctored exams for final grades. This might be an investment opportunity!
Plagiarism and copyright infringement have some overlap but they're distinct. You can be guilty of plagiarism by taking something in the public domain and presenting it as your own work.
It's not settled law yet. Many jurisdictions, like the United States and the European Union, seem inclined to assign no copyright at all to AI-generated works.
For a reasonable interpretation, I offer this (in the context of programming): "The course recognizes that interactions with classmates and others can facilitate mastery of the course’s material. However, there remains a line between enlisting the help of another and submitting the work of another. [--] Working with (and even paying) a tutor to help you with the course, provided the tutor does not do your work for you." https://cs50.harvard.edu/x/2023/honesty/
Having access to people who can sit down with you and explain things to you when you get stuck, or who can point out mistakes in your existing understanding or your work, can make a huge difference in the outcomes of students.
If AI gets good enough to fill that role, everyone can have that access (or at least something close enough).
Why would you expect the AI to not be a paid service like a human tutor is? ChatGPT has a $0 tier, but it's also in a publicity phase, and they already have a "pro" subscription tier. Microsoft/GitHub likewise are charging for extended access to Copilot, and so is Stability with their Dream Studio application.
It'll be cheaper for the school, the parents or the social services to provide you access to an AI tutor than to more human instruction. Already, learning games can prevent or cure (some) learning disabilities via engagement and repetition in a cost-effective way.
It can already do super useful stuff: basically do a Google search, gather results, summarize (and hallucinate), which makes this kind of research faster and sometimes easier. Let's be honest, not everyone is so good at reading comprehension; it's an important thing taught and tested in school, after all. ChatGPT can basically help those lacking in reading comprehension and research skills and create summaries for them.
You can ask ChatGPT why your code is not working right, and often it will give a helpful suggestion. Sure, the suggestion may be wrong or misleading, but it can help you get unstuck in any case.
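As a minimal sketch of that workflow, assuming the openai Python package (the pre-1.0 ChatCompletion API) and an API key configured in your environment; the buggy snippet is contrived:

    import openai  # pip install openai; uses the pre-1.0 ChatCompletion API

    buggy_code = '''
    def average(xs):
        return sum(xs) / len(xs) - 1   # subtle bug the author can't spot
    '''

    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Why doesn't this function return the mean?\n{buggy_code}",
        }],
    )
    # Treat the reply as a hint, not an answer; it may be wrong or misleading.
    print(resp.choices[0].message.content)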
That's a huge distance away from it being a tutor. A tutor would have a plan for how to educate you, and would consistently choose examples and problems to present in order to demonstrate the knowledge they are seeking to impart. It's not about giving you little hints, it's essentially the very opposite.
In my mind, what you are describing is the responsibility of the teacher. I was thinking more along the lines of how parents help their children with their homework. Like the grandparent wrote:
> who can sit down with you and explain things to you when you get stuck, or who can point out mistakes in your existing understanding or your work
That being said, I don't see why what you describe would require AGI. It won't be an excellent teacher or tutor in those tasks, but it may be good enough and make up for it in price, availability and repetition.
If you wrote your child's essay that is cheating. If the child cannot formulate the arguments themselves, I'm not sure what the point of the assignment is.
1. I'm not going to take the moral high road here. He's clearly going for an IT education. I think it's great that he is exploring these options.
2. He involved me, and we're making this a great learning experience.
3. Face the facts. This goes really fast. If you don't use an AI, your essay is going to be at the bottom of the heap. I'd estimate that out of the 35 essays, maybe 25 used AI for assistance.
I don't know why you want to hobble your child. He will be generating essays, but one day he will be in a situation where he cannot generate an essay and won't be able to google, like, say, a presentation to a client or colleagues, and he will be caught out, whereas his competition, who don't need a crutch to write (or, for a presentation, to make arguments on their feet), will not. You certainly don't need to overwork your kid, but why even bother educating your child if this is how you approach education? Maybe he needs to legally finish high school, but after HS why not let him ChatGPT his way into a job, if you think it is sufficient?
>your essay is going to be at the bottom of the heap. I'd estimate that out of the 35 essays, maybe 25 used AI for assistance.
This is absolutely a terrible attitude. Who cares about the "top essay" if they're all AI-generated? None of the 25 students are good students, and unless they fail into a good job at daddy's hedge fund, you will simply not become a proficient adult by letting ChatGPT do your homework for you. The entire point of writing essays is to formulate opinions and thoughts and defend them. I understand school sucks and sometimes you have to write essays about things you don't care about, and that is a problem. But at some point you need to learn how to write if you want to be a professional. Not doing so just means you're holding yourself back for reasons I don't quite understand.
I agree. I think this is analogous to flying a modern airliner. After all, anyone can set the autopilot. Easy, right? Easy until a situation arises where manual flying or human intervention is required. That is when years of tedious training become instantly relevant, when the machine can't handle an edge case that requires a novel solution.
Can you imagine a medical doctor or an airline pilot who chatgpted his/her way into a license?
Bottom of the heap? AI essays suck. They're painful to read for the most part. The only reason they look good is that the average person has undeveloped writing and critical thinking skills, so comparatively they sometimes look okay. Refusing to learn to write is the opposite of the right approach here.
When I started secondary school in the UK, the home economics teacher (i.e. cooking and sewing) in the first lesson had a rant about how she absolutely did not want to see anyone write up their cooking with the phrase "I think it tasted quite nice" because it was a generic and content-free cliché.
That's the standard ChatGPT has to beat to make you look good at school.
Likewise, the average comments section is all you have to beat to seem erudite as an adult.
>Likewise, the average comments section is all you have to beat to seem erudite as an adult.
I hope you do not seriously think "comment sections" are how you measure adult intelligence. I admit many people are not as intelligent as they should be, but that is in fact a source of many of life's ills, and if you care about your life and your society you should want better than "beating the standard."
The power of writing is that it is the way we formulate thoughts, opinions, and arguments. Language is the key way people frame their thoughts, and writing is one significant way to develop one's language skills. Writing is more than syntax, grammar, spelling and word choice--that's why you're not considered "unintelligent" for using a thesaurus or spell-check.
Anyhow, my entire point is "beating the comment section" isn't valuable as a threshold because as you hint at, it isn't really a place to find intelligent discussion.
None of those things were part of what they taught me at the mandatory school lessons, only at the entirely optional after-school "convincing communication" 45 minute session they had one time.
Actual school was basically "prove you actually read this Shakespeare play we assigned to you by mentioning some of these standard points".
As for comments sections… that they're low quality is the reason I chose them as an example, beating them necessarily improves the quality of global discourse - the people writing them don't recognise their poor quality.
I don't think this is accurate at all.
Our sales team is using ChatGPT and others to rewrite sales copy to make it more interesting and increase engagement.
If the user is a capable writer who can organize their thoughts and recognize the difference between good and bad copy, they're going to get much more coherent results from tools like ChatGPT. I'd contend that an average primary/secondary-schooler is basically expected to develop those exact skills, without which their AI generated essays will have a multitude of problems just like human-composed ones do.
> 1. I'm not going to take the moral high road here. He's clearly going for an IT education. I think it's great that he is exploring these options.
Every developed country has a general path to education because you want well-balanced citizens and not ultra-specialised tools.
> 3. Face the facts. This goes really fast. If you don't use an AI, your essay is going to be at the bottom of the heap. I'd estimate that out of the 35 essays, maybe 25 used AI for assistance.
Why didn't you write his essays before AI was a thing? They'd have been at the top of the heap.
Why don't you pay a cheap freelance copywriter, for like $1 an hour, to write the essays?
The goal of education, especially at that level, is to build your basic skills, not to perform and deliver.
> If you don't use an AI, your essay is going to be at the bottom of the heap.
If you tell AI to write a research paper, even in an abstract field like math or philosophy where experiments are not required, it's not going to actually generate new knowledge. That's the point of learning to write papers in school--evaluating the evidence, and proposing and supporting a thesis. The AI can output something that looks like a paper, but it doesn't actually have any coherent thesis.
I do think it will become more common to use AI as a typing aid (predicting next words/phrases), but having AI actually generate the thesis and arguments of the paper is not doing anything useful. That said, even if you didn't use it at all, I'm not convinced your paper would somehow be worse than all the others, it would just take longer to write.
I don't understand or like schools that take a tough stance on computer-checking for plagiarism and so on; that's just sad. But if the school thinks using this tool is cheating, then I'd avoid it.
I've already heard that schools consider chatbots a crisis: they can't do essay hand-ins anymore because of this. But maybe there are different approaches that let them handle it. Maybe some think, for example, that they can see through it and grade down if it is an apparent problem for the handed-in text anyway.
They will probably have to start to test for the specific required skills in-class. Nobody is trying to test your knowledge of the multiplication table, or the trigonometric relations by allowing you to take the assignment home (or use a calculator during the test).
About (3): being #1 in school rarely translates to #1 in life. Make it not about "winning" but about a growth mindset: what can we learn today? Learn to learn and accept that we are always growing and never experts.
A whole generation is going to start relying on AI and never do any actual writing or critical thinking for themselves. This is a problem when the average reading level for half of American adults is already below that of an 8th grader.
School curriculums need to adjust how they are creating assignments.
I had a great assignment in college where the professor gave you all the materials required. You could only use the various articles/essays/news clippings he gave you, and you wrote an essay based on that.
Assignments that don't rely on the internet for research would do a much better job teaching the material and forcing students to read the material. Students wouldn't be able to SparkNote or ChatGPT an essay.
I know this is a bit of a shortcut but have pocket calculators rendered mental math extinct? I see language models as calculators for text. True, the analogy isn't perfect, as you don't need to double-check your calculator's output.
While people should be able to calculate basic change in their head, especially if they're working with cash, when I worked in banking, we had explicit policies for cashiers to never ever do that in their head, no matter how simple, because the error rates are meaningfully different. A tired person in a hurry will occasionally get a brainfart and calculate something like 80-33 = 57 or pay out 200 as 20 times 20, so the policy was that you have to get the machine to calculate even the simplest stuff always.
When doing a math problem on an exam, I would always type in very simple intermediate calculations. Avoiding silly mistakes was part of the reason, but it was also so I could look through the calculator history when checking my work and see where each number came from. I would probably be inclined to do the same if I was a bank teller.
Being able to do basic math is definitely good, but I don't think that's nearly as bad as not knowing how to write without an AI. A cashier can reasonably expect to only need to work with a cash register in front of them, but if you can't write, you can't communicate your own ideas. And sometimes in life you are going to need to communicate and support your ideas, not just whatever the AI decides to spit out for that prompt.
I agree, but I'm also certain that using an AI writer will actually teach you to write better - like with art generators, you still need to look at the result and recognize it as good/better, and you form your taste by iterating.
I'm thinking about this from a McLuhanesque perspective, i.e. how the medium will shape the user, and in this case learning to write by going through large amounts of generated writing is just a more efficient way than the regular way (reading, getting own output graded, etc.).
> I've witnessed several cashiers getting their phone out to calculate basic change.
Don't point-of-sale devices do that already? The cashier inputs the total you gave them, and the point-of-sale device shows the exact change on its screen (and releases the cash drawer lock). It might seem useless, but it reduces the risk of an incorrect mental calculation (like R$ 100 - R$ 77 => R$ 33 instead of R$ 23). Some even display which coins and notes, and how many of each, the cashier should get for the change.
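For what it's worth, the denomination breakdown those devices display is just greedy change-making over the note and coin values. A minimal sketch (the denominations here are illustrative, in cents to avoid float issues):

    # Greedy change-making, roughly what a point-of-sale device does to
    # suggest which notes and coins to hand back.
    DENOMINATIONS = [10000, 5000, 2000, 1000, 500, 100, 50, 25, 10, 5, 1]

    def make_change(paid_cents, total_cents):
        remaining = paid_cents - total_cents
        breakdown = {}
        for d in DENOMINATIONS:
            count, remaining = divmod(remaining, d)
            if count:
                breakdown[d] = count
        return breakdown

    # R$ 100 paid on a R$ 77 purchase: R$ 23 back, no mental math involved.
    print(make_change(10000, 7700))   # {2000: 1, 100: 3} -> one 20, three 1s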
Well, I said extinct, but I wouldn't bet that mental arithmetic skills went down on average. Perhaps they have; on the other hand, calculators freed people up to learn more complex operations and to perform operations that would otherwise be too costly.
At least in math they didn't learn those operations, though, because they lack a strong foundation in the basics to understand how they work. Mental arithmetic is often indicative of at least a somewhat stronger number sense, and this shows in upper-level math courses. Even at the high school level, I had to struggle with kids who were way too reliant on a calculator, couldn't do basic math because of it, and thus struggled to do anything more advanced or abstract.
Adding to this idea: perhaps learning how to "calculate" with text, or treating human semantic output as a programmable medium is the entire point of LLMs, as an evolution of our communication capabilities. For instance, I think the dramatic depreciation of low-quality text, various summarization techniques, etc. will act as a filter / evolutionary pressure on improving the quality and value of intellectual output.
I figure learning the utility of AI might be more important than critical thinking about why some story's character might be female. Don't we have more interesting questions for these children to ponder? Plus, editing requires critical thinking in a way that is less mechanical than essay writing.
What is the long-term historical trend of the average 8th-grade reading level, taking in all currently measured groups and accounting for changes in measurement?
From looking at the reading assignments my kids were given throughout middle and high school, it seems like the goal of the education system is to eliminate any joy a kid finds in reading. When they were in elementary school, they learned that any book with a silver or gold medallion on the cover was the book equivalent of vegetables. You read it because some adults think it will be good for you.
When I was in high school (late 1980’s), there was a time each week for what they called USSR - undisturbed, sustained, silent reading (in retrospect, USSR is a strange acronym). They didn’t care what we read and there were no assignments associated with the reading material. The goal was to choose something you want to read and read it, 45 minutes at a time.
Edit: I just looked it up and it turns out it wasn’t a local thing. I had the acronym wrong though - it’s uninterrupted sustained silent reading.
We had "silent reading" in elementary school (I remember it for sure in 6th grade, and I think earlier as well) and I remember most kids finding something they liked (at the very least they weren't goofing around), then middle school and high school is when the required reading started killing interest for everyone.
> So we went through a rewrite exercise like in the article, improving the essay and our understanding of the issues.
The lesson you taught your son is far more valuable than what he could have possibly learned writing the essay as intended. AI is a tool he can use to augment his capabilities. But he shouldn't rely on it blindly. That's true for every technology (remember the Graphing Calculator on PowerPC Macs?).
> Next time I see her, I'll urge the teacher to adopt a similar approach as in the article.
Don't. This gives your son a competitive advantage over the kids who will simply turn in what the AI wrote (and get a 0) or spend too long writing their own essays.
> Fortunately for him, ChatGPT knew the story and was happy to write an essay with arguments. [Btw, according to him pretty much everyone in class routinely has the AI do their essays]
>He was about to send this essay in.
Why would you encourage your child to use ChatGPT like this? "Everyone else is doing it" seems like a great argument for NOT doing it yourself as those kids are going to seriously lack critical thinking skills in a few years as they offload that work to AIs.
Being able to effectively offload work to AI is an extremely valuable skill.
The Washington Post and Associated Press use AI to generate articles. In the real world, only results matter. Using technology isn't cheating.
The solution is not to make students manually do what AI can do perfectly well. Instead, give them assignments that are too complex for AI to solve unassisted. Raise the bar so high that a ChatGPT generated essay will earn them a C and they need to do substantially better than that to get an A.
But in this case the essay isn't the required result. Someone in Washington Post needed an article and got one from AI, but in the students' case absolutely no-one needed that essay, it wasn't assigned because the teacher wanted to receive x essays on that topic, it wasn't even necessarily assigned with the intent to learn how to write good essays (sometimes they are, but usually not).
That assignment was made with a goal of getting students practice doing careful reading, thinking and analysis, getting them to do a certain quantity of "repetitions" as exercise in that; the essay was just a token to signify that the exercise was done and get feedback on its form. And as you say, "only results matter" and the result obviously wasn't achieved if this was written by an AI. It's like having a weightlifter come in to a gym to repeatedly lift barbells with a crane - he's getting no results out of that.
Another poster made an analogy to the introduction of cheap calculators and it does make me wonder. If the AI tools get to the point where they really can do the analysis better, with more clarity, better logic, deeper understanding, and output true information then why DO we want to train humans to do this? It's no longer a valuable skill to be able to calculate by hand the square root of arbitrary numbers when a calculator can do it far faster with basically 100% fidelity. If AI tools get this way for other mental work then why should we train people to do it at all?
>It's like having a weightlifter come in to a gym to repeatedly lift barbells with a crane - he's getting no results out of that.
To use your own analogy, the weightlifter doesn't NEED to get physically strong anymore, they do it for competitive reasons or as a personal challenge.
Chess may also be a good example: currently no human can beat the best chess AIs, but since chess isn't a particularly important economic endeavor, it just exists as a game and the chess AIs are kind of a novelty. But if chess were super critical to economies, it's unlikely humans would be involved in it at all for anything other than fun. The chess AIs would dominate completely.
My perspective here is viewing essays as a pedagogical tool - currently when we design a process (course/module/lecture/other content) for people to learn something, essays are one option that can be used when the intended activity is for the learner to study something and reflect on it for some time in order to achieve the desired learning outcome (and there's all kinds of pedagogical theory to judge when this is useful and when something else would be more effective instead). It's a means to an end, and that end is effective learning of something we've decided to be worth learning.
In this case, the main point of an essay is effectively as a "proof of work" token, to use a cryptocurrency analogy - and just as with cryptocurrencies, if some shortcut is found that allows faking that "proof of work" then the whole process starts to become shaky. In this case we don't really want to abandon that core work done by the learner (since in some cases we have good reasons to prefer that particular type of work), but this raises a need for some alternative to essays that would achieve a similar result while not being as vulnerable to fakes.
But regarding your initial point - IMHO if at some point we decide that people don't need to learn mental skills such as analysis of concepts because an AI can do all of that better, that certainly might be possible at some point, but at that point the whole notion of "why study stuff" disappears as well; I'd say that we stop needing to train people for that when we stop needing to train people, period; when the whole notion of studying any mental skill becomes as irrelevant as studying chess is now, and we can simply shut down higher education as such. And we (and our discussion) are not yet at this point, for now we still do care how to train people to do all of that.
>The solution is not to make students manually do what AI can do perfectly well. Instead, give them assignments that are too complex for AI to solve unassisted. Raise the bar so high that a ChatGPT generated essay will earn them a C and they need to do substantially better than that to get an A.
Maybe so but that's not what is currently happening.
It may be, and this part horrifies me, just like using a calculator. I remember when they first became affordable, and at that time, the same horror. And from what I've seen, that horror came to fruition, as I see younger (20s to 40s) adults sometimes struggling with simple math.
E.g., if the power is out and they have to manually handle change at a cash register.
Yes, there is intelligence, but also the honing and application of the same. And calculators reduced some of that.
Now enter ChatGPT. There goes the ability to hone arguments, forge essays, etc., etc. Gone!
Imagine when everyone gets a brain link; I bet all long-term memory will fall into ruin, too.
I'm not sure the analogy is the same. To me it seems more like plagiarizing an essay or copying homework answers from Chegg or something like that. You haven't gained a new skill, you've just mindlessly pasted an answer from somewhere else. You CAN certainly use tools like ChatGPT in a powerful and useful way that amplifies your own abilities instead of atrophying them.
>Now enter ChatGPT. There goes the ability to hone arguments, forge essays, etc., etc. Gone!
It's just not the same. If you take an essay prompt, paste it into ChatGPT, and then paste the output into your homework submission, what skill exactly are you exercising or learning? There IS value in having to think about a problem and make a coherent argument in text form, and you certainly lose that if you never practice it because you just let ChatGPT do it. At least with some of the current models, e.g. ChatGPT, we can't even be sure of the veracity of its outputs, so not only are you not learning any skills, you can potentially output blatantly false information as well.
AI is there, and more of it will come. The best you can do as a parent is to help your children use it correctly. I'm pretty sure that using it correctly will help to improve critical thinking, and will help children express themselves better (by virtue of example, or correction, etc. -- the possibilities are countless).
> New technology doesn't mean people will regress.
It doesn't always, and the value of different skills change over time... But where horse riding is pretty well obsoleted by the alternatives, unassisted writing and the associated cognitive skills are unlikely to be. It's a good idea to be concerned with potential regression there.
This comment makes a valid point, so I don’t know why it’s attracting downvotes.
> like this
This is the key phrase. There are ways to use LLMs to help with both research and writing that don’t involve surrendering your own part in the process.
Research with LLMs is often much better than search engines for surface level information. You should accept nothing that they say as truth, but they will often turn up names, book titles, etc that you can follow for more information.
Editing with LLMs is great - they were born for it. “What is a more expressive way to say ‘Lincoln’s presidency was a time of great change for the United States’?”, etc.
First, I'd say that using ChatGPT actually led to more critical thinking here, since the essay had to be proofread for mistakes. More importantly, using AI to assist in writing is the future; there's no point in ignoring that. And as the OP points out, getting good output isn't that simple; it is a skill that needs to be trained.
Did it though? All I see here is having someone else do the work, and you making sure it looks about right. They found that some of it didn't work properly, so they had to actually do the work themselves.
Seems all they learned was to improve the questions to feed the bot, not any critical thinking skills.
IMO making sure it looks right is just as much critical thinking as writing. The errors are quite insidious and hard to catch without paying close attention.
The kids are definitely going to have a lack of critical thinking skills. But the cat is out of the bag. If I think back to my days in university, I would have definitely used chatGPT, especially when deadlines were a few hours away. It takes an inordinate amount of self discipline to not use a tool when it’s right there and the alternative is failing the class. You’re never going to be able to use the honor system effectively here. It’s going to devolve into students submitting chatGPT essays and professors using chatGPT to grade them.
From there the only logical step is to have AI be a generally accepted tool in everyone’s life like we have smartphones now. The extremely long term view on this is the lack of need for education at all as soon as we get the future AGI version of chatGPT directly plugged into our brains.
Did I go into a coma and wake up into a world where wholesale cheating is celebrated? What the fuck is going on here? You're casually dropping that you're helping your son cheat? People are talking about your son having a "competitive advantage" over other students? You people sound like fucking psychopaths.