My class required AI (oneusefulthing.substack.com)
185 points by alvis on Feb 20, 2023 | 173 comments


In college I used to help friends fix their essays. We'd go through line by line and examine if the actual words on the page were clear and said what they intended. All of them wound up getting As on the second draft and every essay thereafter. It was just about learning how to look at the problem.

We live in a world of emails and slacks and documents and text messaging. Literacy matters. Writing clearly matters.

What I would actually do with this is not use GPT to write your essay, but have GPT write a flawed essay and ask the students to rewrite it. Fix the logical fallacies, tighten the language, check the facts, etc.


> but have GPT write a flawed essay and ask the students to rewrite it.

There is also benefit in the opposite - automating your editorial role. Have the student write an ok essay, then rerun it through the AI to clarify, or to raise questions about ambiguity, word choices, general background knowledge assumptions, etc ...

In practice I expect both styles will be used to produce superior human-augmented writing.


Excellent article.

I was helping my son with his homework. He had to write an essay about why the protagonist of a short story might be female (the protagonist's gender is never mentioned in the story). Fortunately for him, ChatGPT knew the story and was happy to write an essay with arguments. [Btw, according to him pretty much everyone in class routinely has the AI do their essays]

He was about to send this essay in. But as we were admiring the prose of the computer (which is much better than either of us can write), we found out that some of the arguments actually didn't make much sense. So we went through a rewrite exercise like in the article, improving the essay and our understanding of the issues.

Next time I see her, I'll urge the teacher to adopt a similar approach as in the article.


I am delighted to hear that children are already adopting the new tech. Sure there will be kinks for a while where no one bothers with the reading or writing assignments. But shortly that will smooth out as everyone gets on even ground.

Writing those essays will become trivial and irrelevant and be replaced with more interesting tasks with the assistance of AI.

I always think of Star Trek; instead of long-division math questions they will learn skills like “computer, if the trajectory of this rogue planet is reversed and we assume that it was ejected after some massive explosion, calculate the possible causes based on previously observed similar events and report the top candidate cause and confidence interval”.

An example from my academic field: instead of biased, unreproducible research papers we could publish: [1] raw data, [2] hypothesis context, [3] model, [4] seed. The output result can be read based on your own AI chatbot preferences but the conclusion is consistent and reproducible.
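To make that concrete, here's a toy sketch of what I mean (the file name, seed value, and the bootstrap "model" are all made up for illustration): publish the inputs and the seed, and anyone re-running the code gets identical numbers, whatever chatbot they use to read the result.

    import numpy as np

    def run_analysis(raw_data: np.ndarray, seed: int) -> dict:
        # A trivial stand-in "model": a bootstrap estimate of the mean.
        # The published seed makes every resample, and thus the result, reproducible.
        rng = np.random.default_rng(seed)
        boot_means = [
            rng.choice(raw_data, size=raw_data.size, replace=True).mean()
            for _ in range(1000)
        ]
        return {
            "estimate": float(np.mean(boot_means)),
            "ci_95": (float(np.percentile(boot_means, 2.5)),
                      float(np.percentile(boot_means, 97.5))),
        }

    if __name__ == "__main__":
        data = np.loadtxt("raw_data.csv", delimiter=",")  # [1] raw data, as published
        print(run_analysis(data, seed=20230220))          # [3] model + [4] seed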


But surely the point of essay writing is to learn how to structure your thoughts into a coherent argument? If students aren't doing this, how will they learn this skill?


Or just learning how to write - most people simply can't write well. It's great that people are learning how to carefully craft a Google search query to give them the results they want, but they'll never be able to write a vulnerable love letter to their wife.


The point then is that they won't have to anymore.

However, I also think that you learn by imitation. If you have a text assignment, do a sloppy job, and get a bad grade, what did you learn? But if you do the same text assignment and improve it by iterating over several prompts, then at least you learn something.

Also, learning requires motivation. It's futile to compare learning methods if there is no motivation (e.g. if we assume students want to cheat). If instead we assume motivation, and we lower the bar required for learning, then these students will learn more with AI than without.

The only point I'm worried about is imagination. For instance, it's easy to ask AI for examples of names, stories, etc. But there is also benefit in spending time searching for these things yourself. Not sure what the tradeoff will be there.


People won't be required to write anymore? Are you high?

The problem with essays is that what they're asking people to do isn't actually what we want people to be able to do. Sure, you can get a chat bot to write out some arguments about why a character in a story might be female. But you can't get it to, say, write about your own experiences, or write your own novel thoughts about something, or basically anything which the AI isn't already familiar with.

AI as a spell checker and maybe filler generator works. AI as an essay generator works. AI as a replacement for the stuff you'll actually want to use your writing skills for past school doesn't work.


Good point; the post even reinforces how the students couldn't use AI to do the whole assignment because in addition to writing the essay with ChatGPT, they also had to "write a reflection at the end on how the AI did". The AI has no meaningful idea of how it did, or how to reflect on a student's subjective experience using it.


> The point then is that they won't have to anymore.

They'll certainly still need to use rhetoric and argumentation throughout their lives. That's the point about writing essays as a pedagogical tool; it's not about learning how to write a 3x5 essay, but rather how to construct the arguments that go into that essay.

A decade from now, what happens when one of these students who coasted through school using AI-generated essays is asked to argue on-the-fly, like in a work meeting? What if it's for something high stakes, that could impact their employment or future job trajectory at their company? You need to be able to develop an argument with logical, supporting evidence, and articulate the complete argument in a compelling, persuasive manner -- oftentimes, on-the-fly, out-loud.

It sounds cliché to argue with a student that "you won't always have an AI to help you" but in contrast with a calculator, there are truly scenarios where you definitely _will not_ be able to take the time to reach for an AI crutch to build your argument for you.


> If you have a text assignment...and get a bad grade, what did you learn?

Ideally you learned what mistakes you made and how to avoid them in the future. The reliability of this process will depend a lot on the teacher, but I don't see how putting AI in the mix really changes this. You will surely need to learn /something/ about writing before you could even begin to evaluate the quality of a ChatGPT essay.


>The point then is that they won't have to anymore.

One should not ever feel obligated to write a love letter, my dude.


In debate class. I’m not convinced that essay writing has ever been a good way of learning how to structure thoughts into a coherent argument. I think essays (especially the terrible 5-paragraph essay) exist mostly because they are easy to grade.

Essays are so static. I think a debate format where you are immediately challenged on your arguments or asked to elaborate would give a much better idea of how well the student has mastered the material.


I know it's easy to get caught up in a few sustained decades of peace and progress (for those that have had it) and imagine that we can always rely on the tools that make our lives easy today, but it's actually quite important that people have foundational skills.


I suspect that if we have a disaster big enough to lose these computer-based tools, good writing will be the least of our worries.


Is this a situation where it's not cheating to use a chatbot for the output? I understand it becomes nuanced if you edit the output, but still.


This is cheating and while it might not be picked up in a high school essay, it will certainly be flagged for plagiarism at almost all Universities. It's very dangerous to do this unless your course specifically asks you to answer using AI tools, like the course in the OP.


When you say "it will be flagged" do you mean that it would be considered cheating under existing academic dishonesty policies? Or do you mean that the plagiarism-detection tools currently in use at universities can reliably detect the output of ChatGPT, new Bing, etc. that were only released publicly in the last few months?


If you're just worried about the pragmatics of being caught, I'd keep in mind that ChatGPT doesn't actually have a very high "dynamic range" given some specific set of inputs. If you use it enough, or explore the same topics and prompts enough, you'll quickly start to see structures, words, phrases, presentations that its output gravitates towards.

These may not be enough of a watermark to generally detect what was written by an AI and what wasn't yet, but it does suggest that the proliferation of AI-written assignment essays will quickly produce a catalog of plagiarism sources that works with existing tools.

In other words, once people submit enough AI-written essays about popular assignment A, existing plagiarism detection tools will easily start to notice the phrases/sequences/structures that it statistically gravitates towards (because that's what it does by design!) and start detecting these in later work. That software won't know what's AI written or not, but it will think that your submission may be plagiarized because a lot of it looks pretty specifically familiar.
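A back-of-the-envelope sketch of that idea (not how any real plagiarism tool actually works, just the gist): collect previously submitted essays for the assignment and flag new submissions whose word n-grams overlap heavily with them. The value of n and the threshold are made-up numbers.

    def ngrams(text: str, n: int = 5) -> set:
        # All length-n word sequences in the text, lowercased.
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(submission: str, prior_essays: list, n: int = 5) -> float:
        # Fraction of the submission's n-grams already seen in earlier essays.
        seen = set()
        for essay in prior_essays:
            seen |= ngrams(essay, n)
        subm = ngrams(submission, n)
        return len(subm & seen) / max(len(subm), 1)

    # e.g. overlap_score(new_essay, previous_essays) > 0.3 -> "looks awfully familiar"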


Going a bit further... the AI tooling will also be used elsewhere, in things that not-just-students read. So writing style will potentially drift/evolve to be like AI writers, and this will muddy the detection efforts. I know I've noticed numerous times how I pick up on and start using phrasing (and other tells) automatically, matching what others do over the years.

Of course, I wouldn't be particularly surprised if the above effect doesn't manifest. It's very dependent on how much AI writing spreads across numerous areas, and whether kids-these-days do any measurable amounts of long-form reading.


> can reliably detect the output of ChatGPT.

I imagine schools will have to put minimal value on out-of-class assignments, relying on proctored exams for final grades. This might be an investment opportunity!


If students acknowledge that they used ChatGPT and what the prompt was (as required by the OP), then it's not plagiarism.

But I think many professors who don't accept AI as a writing aide would say that it doesn't satisfy the assignment and is worth zero points.


Using AI output isn’t plagiarism. Finders keepers. You push the button, you get the copyright, just like a camera.

Now, whether it is ghostwriting is another matter.


Copyright doesn't matter. It's about academic honesty, not whether you have the right to use the work. You can even plagiarize your own work.


Plagiarism and copyright infringement have some overlap but they're distinct. You can be guilty of plagiarism by taking something in the public domain and presenting it as your own work.


It's not settled law yet. Many jurisdictions, like the United States and the European Union, seem inclined to assign no copyright at all to AI-generated works.


What about simply helping your child with their homework at all? That could also count as cheating if you go with a strict interpretation.


For a reasonable interpretation, I offer this (in the context of programming): "The course recognizes that interactions with classmates and others can facilitate mastery of the course’s material. However, there remains a line between enlisting the help of another and submitting the work of another. [--] Working with (and even paying) a tutor to help you with the course, provided the tutor does not do your work for you." https://cs50.harvard.edu/x/2023/honesty/


This is where AI could act as a good equalizer.

Having access to people who can sit down with you and explain things to you when you get stuck, or who can point out mistakes in your existing understanding or your work, can make a huge difference in the outcomes of students.

If AI gets good enough to fill that role, everyone can have that access (or at least something close enough).


Why would you expect the AI to not be a paid service like a human tutor is? ChatGPT has a $0 tier, but it's also in a publicity phase, and they already have a "pro" subscription tier. Microsoft/GitHub likewise are charging for extended access to Copilot, and so is Stability with their Dream Studio application.


It'll be cheaper for the school, the parents or the social services to provide you access to an AI tutor than to more human instruction. Already, learning games can prevent or cure (some) learning disabilities via engagement and repetition in a cost-effective way.


Pretty sure that would require AGI, and is at least well beyond the means of LLMs.


It can already do super useful stuff, basically google search, gather results, summarize, (hallucinate, ) which makes it faster and sometimes easier to do this kind of research. Let's be honest, not everyone is so good at reading comprehension, it's an important thing taught and tested in school after all. ChatGPT can basically help those lacking in reading comprehension and research skills and create summaries for them.


You can ask ChatGPT why your code is not working right, and often it will give a helpful suggestion. Sure, the suggestion may be wrong or misleading, but it can help you get unstuck in any case.


That's a huge distance away from it being a tutor. A tutor would have a plan for how to educate you, and would consistently choose examples and problems to present in order to demonstrate the knowledge they are seeking to impart. It's not about giving you little hints, it's essentially the very opposite.


In my mind, what you are describing is the responsibility of the teacher. I was thinking more along the lines of how parents help their children in their homework. Like the grandparent wrote:

> who can sit down with you and explain things to you when you get stuck, or who can point out mistakes in your existing understanding or your work

That being said, I don't see why what you describe would require AGI. It won't be an excellent teacher or tutor in those tasks, but it may be good enough and make up for it in price, availability and repetition.


If you wrote your child's essay that is cheating. If the child cannot formulate the arguments themselves, I'm not sure what the point of the assignment is.


You can help your kid understand something without telling them the answer directly.


The article talks about it being required, so I assumed this was a similar scenario, personally.


1. I'm not going to take the moral highroad here. He's clearly going for an IT education. I think it's great that he is exploring these options.

2. He involved me, and we're making this a great learning experience.

3. Face the facts. This goes really fast. If you don't use an AI, your essay is going to be at the bottom of the heap. I'd estimate that out of the 35 essays, maybe 25 used AI for assistance.


I don't know why you want to hobble your child. He will be generating essays, but one day he will be in a situation where he cannot generate an essay and won't be able to google, say a presentation to a client or colleagues, and he will be caught, whereas his competition, who do not need a crutch to write (or, for a presentation, to make arguments on their feet), will not. You certainly don't need to overwork your kid, but why even bother educating your child if this is how you approach education? Maybe he needs to legally finish high school, but after HS why not let him ChatGPT his way into a job if you think it is sufficient?

>your essay is going to be at the bottom of the heap. I'd estimate that out of the 35 essays, maybe 25 used AI for assistance.

This is absolutely a terrible attitude. Who cares about the "top essay" if they're all AI-generated? None of the 25 students are good students, and unless they fail into a good job at daddy's hedge fund, you will simply not become a proficient adult by letting ChatGPT do your homework for you. The entire point of writing essays is to formulate opinions and thoughts and defend them. I understand school sucks and sometimes you have to write essays for things you don't care about, and that is a problem. But, at some point, you need to learn how to write if you want to be a professional. Not doing so just means you're holding yourself back for reasons I don't quite understand.


I agree. I think this is analogous to flying a modern airliner. After all, anyone can set the autopilot. Easy, right? Easy until a situation arises where manual flying or human intervention is required. That is when years of tedious training become instantly relevant, when the machine can't handle an edge case that requires a novel solution. Can you imagine a medical doctor or an airline pilot who chatgpted his/her way into a license?


Bottom of the heap? AI essays suck. They’re painful to read for the most part. The only reason they look good is because the average person has undeveloped writing and critical thinking skills, so comparatively they sometimes look okay. Refusing to learn to write is the opposite of the right approach here.


When I started secondary school in the UK, the home economics teacher (i.e. cooking and sewing) in the first lesson had a rant about how she absolutely did not want to see anyone write up their cooking with the phrase "I think it tasted quite nice" because it was a generic and content-free cliché.

That's the standard ChatGPT has to beat to make you look good at school.

Likewise, the average comments section is all you have to beat to seem erudite as an adult.


>Likewise, the average comments section is all you have to beat to seem erudite as an adult.

I hope you do not seriously think "comment sections" is how you measure adult scale intelligence. I admit many people are not as intelligent as they should be but that is in fact a source of many of life's ills, and if you care about your life and your society you should want better than "beating the standard."


I don't think that intelligence itself is likely to change depending on the level to which one's essay writing skills happen to be trained.

This is one reason why I wrote "erudite" instead of "intelligent".


The power of writing is that it is the way we formulate thoughts, opinions, and arguments. Language is the key way people frame their thoughts, and writing is one significant way to develop one's language skills. Writing is more than syntax, grammar, spelling and word choice--that's why you're not considered "unintelligent" for using a thesaurus or spell-check.

Anyhow, my entire point is "beating the comment section" isn't valuable as a threshold because as you hint at, it isn't really a place to find intelligent discussion.


None of those things were part of what they taught me at the mandatory school lessons, only at the entirely optional after-school "convincing communication" 45 minute session they had one time.

Actual school was basically "prove you actually read this Shakespeare play we assigned to you by mentioning some of these standard points".

As for comments sections… that they're low quality is the reason I chose them as an example, beating them necessarily improves the quality of global discourse - the people writing them don't recognise their poor quality.

https://xkcd.com/810/


I don't think this is accurate at all. Our sales team is using ChatGPT and others to rewrite sales copy to make it more interesting and increase engagement.

Initial results are WILDLY successful.


If the user is a capable writer who can organize their thoughts and recognize the difference between good and bad copy, they're going to get much more coherent results from tools like ChatGPT. I'd contend that an average primary/secondary-schooler is basically expected to develop those exact skills, without which their AI generated essays will have a multitude of problems just like human-composed ones do.


ChatGPT is way above your average high school level.


So students should… not try to learn how to organize their thoughts?


> Bottom of the heap? AI essays suck. They’re painful to read for the most part.

I'm just responding to that part


> 1. I'm not going to take the moral highroad here. He's clearly going for an IT education. I think it's great that he is exploring these options.

Every developed country has a general path to education because you want well-balanced citizens, not ultra-specialised tools.

> 3. Face the facts. This goes really fast. If you don't use an AI, your essay is going to be at the bottom of the heap. I'd estimate that out of the 35 essays, maybe 25 used AI for assistance.

Why didn't you write his essays before AI was a thing? They'd be at the top of the heap.

Why don't you pay a cheap freelance copywriter for like 1$ an hour to write the essays ?

The goal of education, especially at that level, is to build your basic skills, not to perform and deliver


> pay a cheap freelance copywriter for like 1$ an hour to write the essays

I am willing to bet there is no freelance copywriter doing American high school essays for $1 an hour. Or at least none that aren’t using chatGPT :D


> If you don't use an AI, your essay is going to be at the bottom of the heap.

If you tell AI to write a research paper, even in an abstract field like math or philosophy where experiments are not required, it's not going to actually generate new knowledge. That's the point of learning to write papers in school--evaluating the evidence, and proposing and supporting a thesis. The AI can output something that looks like a paper, but it doesn't actually have any coherent thesis.

I do think it will become more common to use AI as a typing aid (predicting next words/phrases), but having AI actually generate the thesis and arguments of the paper is not doing anything useful. That said, even if you didn't use it at all, I'm not convinced your paper would somehow be worse than all the others, it would just take longer to write.


I don't understand or like schools that take a tough stance on computer-checking for plagiarism and so on, that's just sad, but if the school thinks using this tool is cheating, then I'd avoid it.

I've already heard that schools think chatbots are a crisis and that they can't do essay hand-ins anymore because of this, but maybe there are different approaches where they can handle it. Maybe some think, for example, that they can see through it and grade down if it is an apparent problem for the handed-in text anyway.


They will probably have to start to test for the specific required skills in-class. Nobody is trying to test your knowledge of the multiplication table, or the trigonometric relations by allowing you to take the assignment home (or use a calculator during the test).


About (3): Being #1 in school rarely translates to #1 in life. Make it not about "winning" but a growth mindset: what can we learn today. Learn to learn and accept that we are always growing and never experts.


An IT degree isn't a zero sum game. Try asking the AI how to teach a child something


A whole generation is going to start relying on AI and never do any actual writing or critical thinking for themselves. This is a problem when the average reading level for half of American adults is already below that of an 8th grader.


School curriculums need to adjust how they are creating assignments.

I had a great assignment in college where the professor gave you all the materials required. You could only use the various articles/essays/news clippings he gave you, and you wrote an essay based on that.

Assignments that don't rely on the internet for research would do a much better job teaching the material and forcing students to read the material. Students wouldn't be able to SparkNote or ChatGPT an essay.


In my country most exams are in person with no phone/internet.

You can cheat all year if you want for homework &co, they're just here to help you get good for the monthly/yearly exams


I know this is a bit of a shortcut but have pocket calculators rendered mental math extinct? I see language models as calculators for text. True, the analogy isn't perfect, as you don't need to double-check your calculator's output.


> I know this is a bit of a shortcut but have pocket calculators rendered mental math extinct?

I've witnessed several cashiers getting their phone out to calculate basic change. Things like 1 euro minus 20ct.

So yes, definitely.


While people should be able to calculate basic change in their head, especially if they're working with cash, when I worked in banking, we had explicit policies for cashiers to never ever do that in their head, no matter how simple, because the error rates are meaningfully different. A tired person in a hurry will occasionally get a brainfart and calculate something like 80-33 = 57 or pay out 200 as 20 times 20, so the policy was that you have to get the machine to calculate even the simplest stuff always.


When doing a math problem on an exam, I would always type in very simple intermediate calculations. Avoiding silly mistakes was part of the reason, but it was also so I could look through the calculator history when checking my work and see where each number came from. I would probably be inclined to do the same if I was a bank teller.

Being able to do basic math is definitely good, but I don't think that's nearly as bad as not knowing how to write without an AI. A cashier can reasonably expect to only need to work with a cash register in front of them, but if you can't write, you can't communicate your own ideas. And sometimes in life you are going to need to communicate and support your ideas, not just whatever the AI decides to spit out for that prompt.


I agree, but I'm also certain that using an AI writer will actually teach you to write better - like with art generators, you still need to look at the result and recognize it as good/better, and you form your taste by iterating.

I'm thinking about this from a McLuhanesque perspective, i.e. how the medium will shape the user, and in this case learning to write by going through large amounts of generated writing is just a more efficient way than the regular way (reading, getting own output graded, etc.).


> I've witnessed several cashiers getting their phone out to calculate basic change.

Don't point-of-sale devices do that already? The cashier inputs the total you gave them, and the point-of-sale device shows the exact change on its screen (and releases the cash drawer lock). It might seem useless, but it reduces the risk of an incorrect mental calculation (like R$ 100 - R$ 77 => R$ 33 instead of R$ 23). Some even display which coins and notes, and how many of each, the cashier should get for the change.
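For what it's worth, the register's side of it is a tiny greedy computation, roughly like this (denominations in euro cents, picked just for the example; not any particular point-of-sale system's code):

    # Euro-style denominations in cents, largest first.
    DENOMS = [5000, 2000, 1000, 500, 200, 100, 50, 20, 10, 5, 2, 1]

    def make_change(paid_cents: int, total_cents: int) -> dict:
        # Greedy breakdown works for these denominations.
        change = paid_cents - total_cents
        if change < 0:
            raise ValueError("customer paid too little")
        breakdown = {}
        for d in DENOMS:
            count, change = divmod(change, d)
            if count:
                breakdown[d] = count
        return breakdown

    # 1 euro paid for a 20ct item -> 80 cents back: {50: 1, 20: 1, 10: 1}
    print(make_change(100, 20))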


Well, I said extinct, but I wouldn't bet that mental arithmetic skills went down on average. Perhaps they have; on the other hand, calculators freed people up to learn more complex operations and to perform operations that would otherwise be too costly.


At least in math they didn't learn those operations, though, because they don't have a strong enough foundation in the basics to understand how they work. Mental arithmetic is often indicative of at least a somewhat stronger number sense, and this shows in upper-level math courses. I saw it even at the high school level, where I had to struggle with kids who were way too reliant on a calculator, couldn't do basic math because of it, and thus struggled to do anything more advanced or abstract.


Why not do both: force people to learn the basics, then allow them to use the tools later.

It isn't mutually exclusive.


Adding to this idea: perhaps learning how to "calculate" with text, or treating human semantic output as a programmable medium is the entire point of LLMs, as an evolution of our communication capabilities. For instance, I think the dramatic depreciation of low-quality text, various summarization techniques, etc. will act as a filter / evolutionary pressure on improving the quality and value of intellectual output.


Well, maybe not pocket calculators, but phone calculators for sure.


I figure learning the utility of AI might be more important than critical thinking about why some story's character might be female. Don’t we have more interesting questions for these children to ponder? Plus, editing requires critical thinking in a way that is less mechanical than essay writing.


What is the long-term historical trend of the average 8th-grade reading level, taking into account all currently measured groups and changes in measurement?


From looking at the reading assignments my kids were given throughout middle and high school, it seems like the goal of the education system is to eliminate any joy a kid finds in reading. When they were in elementary school, they learned that any book with a silver or gold medallion on the cover was the book equivalent of vegetables. You read it because some adults think it will be good for you.

When I was in high school (late 1980’s), there was a time each week for what they called USSR - undisturbed, sustained, silent reading (in retrospect, USSR is a strange acronym). They didn’t care what we read and there were no assignments associated with the reading material. The goal was to choose something you want to read and read it, 45 minutes at a time.

Edit: I just looked it up and it turns out it wasn’t a local thing. I had the acronym wrong though - it’s uninterrupted sustained silent reading.

http://ejournal.unp.ac.id/index.php/jelt/article/view/101466


We had "silent reading" in elementary school (I remember it for sure in 6th grade, and I think earlier as well) and I remember most kids finding something they liked (at the very least they weren't goofing around), then middle school and high school is when the required reading started killing interest for everyone.


> So we went through a rewrite exercise like in the article, improving the essay and our understanding of the issues.

The lesson you taught your son is far more valuable than what he could have possibly learned writing the essay as intended. AI is a tool he can use to augment his capabilities. But he shouldn't rely on it blindly. That's true for every technology (remember the Graphing Calculator on PowerPC Macs?).

> Next time I see her, I'll urge the teacher to adopt a similar approach as in the article.

Don't. This gives your son a competitive advantage over the kids who will simply turn in what the AI wrote (and get a 0) or spend too long writing their own essays.


> This gives your son a competitive advantage

In what competition?


Yes, this is a competitive advantage for the next...six months until the other students figure out they can do this.


> Fortunately for him, ChatGPT knew the story and was happy to write an essay with arguments. [Btw, according to him pretty much everyone in class routinely has the AI do their essays]

>He was about to send this essay in.

Why would you encourage your child to use ChatGPT like this? "Everyone else is doing it" seems like a great argument for NOT doing it yourself as those kids are going to seriously lack critical thinking skills in a few years as they offload that work to AIs.


Being able to effectively offload work to AI is an extremely valuable skill.

The Washington Post and Associated Press use AI to generate articles. In the real world, only results matter. Using technology isn't cheating.

The solution is not to make students manually do what AI can do perfectly well. Instead, give them assignments that are too complex for AI to solve unassisted. Raise the bar so high that a ChatGPT generated essay will earn them a C and they need to do substantially better than that to get an A.


But in this case the essay isn't the required result. Someone in Washington Post needed an article and got one from AI, but in the students' case absolutely no-one needed that essay, it wasn't assigned because the teacher wanted to receive x essays on that topic, it wasn't even necessarily assigned with the intent to learn how to write good essays (sometimes they are, but usually not).

That assignment was made with a goal of getting students practice doing careful reading, thinking and analysis, getting them to do a certain quantity of "repetitions" as exercise in that; the essay was just a token to signify that the exercise was done and get feedback on its form. And as you say, "only results matter" and the result obviously wasn't achieved if this was written by an AI. It's like having a weightlifter come in to a gym to repeatedly lift barbells with a crane - he's getting no results out of that.


Another poster made an analogy to the introduction of cheap calculators and it does make me wonder. If the AI tools get to the point where they really can do the analysis better, with more clarity, better logic, deeper understanding, and output true information then why DO we want to train humans to do this? It's no longer a valuable skill to be able to calculate by hand the square root of arbitrary numbers when a calculator can do it far faster with basically 100% fidelity. If AI tools get this way for other mental work then why should we train people to do it at all?

>It's like having a weightlifter come in to a gym to repeatedly lift barbells with a crane - he's getting no results out of that.

To use your own analogy, the weightlifter doesn't NEED to get physically strong anymore, they do it for competitive reasons or as a personal challenge.

Chess may also be a good example: currently no human can beat the best chess AIs, but since chess isn't a particularly important economic endeavor it just exists as a game and the chess AIs are kind of a novelty. But if chess were super critical to economies then it's unlikely humans would be involved in it at all for anything other than fun. The chess AIs would dominate completely.


My perspective here is viewing essays as a pedagogical tool - currently when we design a process (course/module/lecture/other content) for people to learn something, essays are one option that can be used when the intended activity is for the learner to study something and reflect on it for some time in order to achieve the desired learning outcome (and there's all kinds of pedagogical theory to judge when this is useful and when something else would be more effective instead). It's a means to an end, and that end is effective learning of something we've decided to be worth learning.

In this case, the main point of an essay is effectively as a "proof of work" token, to use a cryptocurrency analogy - and just as with cryptocurrencies, if some shortcut is found that allows faking that "proof of work" then the whole process starts to become shaky. In this case we don't really want to abandon that core work done by the learner (since in some cases we have good reasons to prefer that particular type of work), but this raises a need for some alternative to essays that would achieve a similar result while not being as vulnerable to fakes.

But regarding your initial point - IMHO if at some point we decide that people don't need to learn mental skills such as analysis of concepts because an AI can do all of that better, that certainly might be possible at some point, but at that point the whole notion of "why study stuff" disappears as well; I'd say that we stop needing to train people for that when we stop needing to train people, period; when the whole notion of studying any mental skill becomes as irrelevant as studying chess is now, and we can simply shut down higher education as such. And we (and our discussion) are not yet at this point, for now we still do care how to train people to do all of that.


>The solution is not to make students manually do what AI can do perfectly well. Instead, give them assignments that are too complex for AI to solve unassisted. Raise the bar so high that a ChatGPT generated essay will earn them a C and they need to do substantially better than that to get an A.

Maybe so but that's not what is currently happening.


No defending, merely pondering.

It may be, and this part horrifies me, like using a calculator. I remember when they first became affordable, and at that time, the same horror. And from what I've seen, that horror came to fruition, as I see younger(20s to 40s) adults sometimes struggling with simple math.

E.g., if the power is out and you have to manually handle change at a cash register.

Yes, there is intelligence, but also the honing and application of the same. And calculators reduced some of that.

Now enter chatgpt. Here goes the ability to hone arguments, forge essays, etc, etc. Gone!

Imagine when everyone gets a brain link; I bet all long-term memory will fall into ruin, too.


I'm not sure the analogy is the same. To me it seems more like plagiarizing an essay or copying homework answers from Chegg or something like that. You haven't gained a new skill, you've just mindlessly pasted an answer from somewhere else. You CAN certainly use tools like ChatGPT in a powerful and useful way that amplifies your own abilities instead of atrophying them.

>Now enter chatgpt. Here goes the ability to hone arguments, forge essays, etc, etc. Gone!

It's just not the same. If you take an essay prompt and just paste it into ChatGPT and then paste the output into your homework submission, what skill exactly are you exercising or learning? There IS value in having to think about a problem and make a coherent argument in text form, and you certainly lose that if you never practice it because you just let ChatGPT do it. At least with some of the current models, e.g. ChatGPT, we can't even be sure of the veracity of its outputs, so not only are you not learning any skills, you can potentially output blatantly false information as well.


For me this is the wrong attitude.

AI is there, and more of it will come. The best you can do as a parent is to help your children to use it correctly. I'm pretty sure that using it correctly will help to improve the critical thinking, and will help children to express themselves in a better way (by virtue of example, or correction, etc -- possibilities are countless).

New technology doesn't mean people will regress.


> New technology doesn't mean people will regress.

It doesn't always, and the value of different skills change over time... But where horse riding is pretty well obsoleted by the alternatives, unassisted writing and the associated cognitive skills are unlikely to be. It's a good idea to be concerned with potential regression there.


This comment makes a valid point, so I don’t know why it’s attracting downvotes.

> like this

This is the key phrase. There are ways to use LLMs to help with both research and writing that don’t involve surrendering your own part in the process.

Research with LLMs is often much better than search engines for surface level information. You should accept nothing that they say as truth, but they will often turn up names, book titles, etc that you can follow for more information.

Editing with LLMs is great - they were born for it. “What is a more expressive way to say ‘Lincoln’s presidency was a time of great change for the United States’?”, etc.
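As a rough illustration of that "LLM as editor" pattern (using the openai Python package roughly as it exists today; the model name, parameters and prompt are assumptions you'd adapt to your own setup):

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    def suggest_rewrite(sentence: str) -> str:
        # Ask the model for a more expressive phrasing; you still decide what to keep.
        prompt = ("Suggest a more expressive way to phrase this sentence, "
                  "keeping its meaning:\n\n" + sentence)
        resp = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=80,
            temperature=0.7,
        )
        return resp.choices[0].text.strip()

    print(suggest_rewrite(
        "Lincoln's presidency was a time of great change for the United States."
    ))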


First, I'd say that using ChatGPT actually led to more critical thinking here, since the essay had to be proofread for mistakes. More importantly, though, using AI to assist in writing is the future; there's no point in ignoring that. And as the OP points out, getting good output isn't that simple, it is a skill that needs to be trained.


Did it though? All I see here is having someone else do the work, and you making sure it looks about right. They found that some of it didn't work properly, so they had to actually do the work themselves.

Seems all they learned was to improve the questions to feed the bot, not any critical thinking skills.


IMO making sure it looks right is just as much critical thinking as writing. The errors are quite insidious and hard to catch without paying close attention.


But OP didn't even intend to do that originally, he said they were about to submit a 100% fake essay before starting that proofread process.


The kids are definitely going to have a lack of critical thinking skills. But the cat is out of the bag. If I think back to my days in university, I would have definitely used chatGPT, especially when deadlines were a few hours away. It takes an inordinate amount of self discipline to not use a tool when it’s right there and the alternative is failing the class. You’re never going to be able to use the honor system effectively here. It’s going to devolve into students submitting chatGPT essays and professors using chatGPT to grade them.

From there the only logical step is to have AI be a generally accepted tool in everyone’s life like we have smartphones now. The extremely long term view on this is the lack of need for education at all as soon as we get the future AGI version of chatGPT directly plugged into our brains.


Well when I was in school the graded essays were written in class


Did I go into a coma and wake up into a world where wholesale cheating is celebrated? What the fuck is going on here? You're casually dropping that you're helping your son cheat? People are talking about your son having a "competitive advantage" over other students? You people sound like fucking psychopaths.


There was a time when people used to come to me to create search queries. This was during the AltaVista era. Constructing queries to find what you’re looking for was a skill. I feel like we’re in that era with AI. I suspect soon this will all feel like a distant memory.


I was an ace searcher until Google went to hell around '08 or '09.

It's not that they got so good the skill became irrelevant, but that they changed so none of that works anymore. All the other major search engines followed suit. Now when there's something I could have found that way, that's not coming up with a naïve search, I just... can't. :-/


Ironically the switch to using "AI" for search instead of some version of pattern matching is a big part of what has made it useless. Now you get what it thinks an average person wanted (or what will make them click ads) rather than what matches your query


Right. It's as if the inevitable collision between the incentives of the search engine and advertising has finally gone the completely predictable way.


I don't think the main reason for the change is advertising. I think it is simply Google being tuned more for average people, so Google-fu is no longer needed as much as it was before. I see it as a generally positive thing, but I agree that pattern matching is probably still useful for advanced users which should be opt-in.


> so Google-fu is no longer needed as much as it was before

It's still needed, it just no longer works.


I was just thinking that we're about to have people going around claiming to be "experts in AI" because they know how to use ChatGPT, despite the fact that they can't even produce "hello world" in Python (without asking ChatGPT to do it for them, I guess?). Like some people seem to think that kids these days are all like Steve Wozniak because they grew up with access to an iPad.

Anyway, yeah, maybe there is actually some valuable skill required for this? But I agree that skill in operating new technology probably doesn't hold its value for very long.


Yes, but it does not stop various hype sellers on LinkedIn from selling 100 websites to do X, where X is just a custom prompt. I will admit it is tempting to ride the same wave (I actually have a useful common issue for my industry).


Sounds a great way to filter out fake "experts in AI", since real "experts in AI" are not particularly comfortable with the "AI" term to start with !

https://acoup.blog/2023/02/17/collections-on-chatgpt/comment...


Skill in operating a new technology sounds like a career to me!


What's the upper bound for this particular skill though? I don't see any - there's no limit to how good you can get with designing search vectors for information in latent space...


Does anyone feel like the style of searching they learned in that era is actively being punished nowadays? I used to get pretty good results just by listing out keywords with quotes where needed. Nowadays I feel like Google gives better results if I use natural language, which is a lot more verbose to type out.


I used to rely on the filetype:pdf operator to find ebooks. It used to be trivial to find most ebooks this way. Now it doesn’t work at all; I’m completely sure Google broke this functionality to keep publishers happy. A similar example is googling a phone number, which never works.


The new search term to add for finding books is 'epub'. This will yield what you want if it's reasonably available.


On the other hand, I'm finding the opposite with ChatGPT, that I'm learning that it responds just as well with some compression, or rather that verbosity doesn't matter as long as the information is there.


I feel like this is especially the case right now for image generation. I can craft queries for chatGPT pretty reliably for the precise thing that I need. But Midjourney seemed to require real skill that I don't possess.


I think that allowing students to use AI to write school essays is like allowing student athletes to use a hoverboard when they're told to run a mile. It's the act of running that's the point, not the end result of running in a circle.

Likewise, nobody is going to give a damn about your school essays except your teacher. It's the practice of synthesizing information on your own that's important.


I agree that students using LLMs to do work for them is akin to your example. However, what is described in the article shows thoughtful exercises on engaging the same critical thinking skills you mention, so I see no problem with this approach. Might not want to do that for every assignment, but it seems the author had a pretty skillful approach to developing well rounded students in the context that students will be dealing with outside the classroom.


It's engaging critical thinking around the use of AI tools, which is not a super valuable place to be applying that critical thinking. I feel like this is the equivalent of asking students in the 60s and 70s to learn to use a complicated machine that made punch cards for programming, and to apply critical thinking to the operation of the punch machine. They should be focusing on the end product itself (writing, or software development in my example) instead of AI tools (punch machines) that will be obsolete very soon.


I took it as the focus being on using LLMs to enhance critical thinking (like using calculators in higher math classes). That would be taking the output of these systems and use critical thinking in order to examine the material, feeding it rewrites, as well as using editorial skills to see what goes in the final assignment. I see no problem in that.

However, upon rereading the article, it does seem to say that the focus was on using the tool and prompt engineering rather than using it as a tool for extending students' own cognition.


I see two ways. Either let the students write school essays only in school while being supervised, or explicitly allow the use of ChatGPT and make them reflect the use of ChatGPT. Either way, this requires more effort by the teacher. I am sure that most schools will fail. Writing essays will seem quaint, annoying and superfluous. We as a society will need to find new ways to train.


AI prompts remind me of making a wish to a genie. The thought going into the input wish drastically affects the output, and we quite often don't get what we expect.


I call it "searching for things that don't exist".


Fascinating. My dad is a professor and over Christmas break I showed him ChatGPT and he became obsessed. So much so, in fact, that he used it to generate a syllabus that prohibits the use of AI-assisted writing!

The cognitive dissonance is wild to me, and I think approaching AI as a tool to be embraced is much healthier.


> The cognitive dissonance is wild to me

I guess writing for class has always been about the process and not the result. Nobody has ever wanted to read an essay written by a student. But, the process of writing the essay is useful; collect sources, read them, distill down the important points, join the points with sentences. ChatGPT is all about the end result. Assignments are about the process.


In my school we had a motto: "No excuses, just results." I guess we have no excuse to not produce results anymore.


Sure, then why would ChatGPT hinder that process? ChatGPT will just assemble the words. You can still create a better document working alongside ChatGPT by critically thinking about the output it's spewing and correcting it. There is probably nuance there, but I think that ChatGPT creating a generation of zombies is overstated. ChatGPT is like a calculator used in mathematics.


And, as a former secondary math teacher, I would argue that the use of a calculator has absolutely created a generation of zombies who can't do, or understand, basic math. We're talking simple things like multiply, or figure out how many Monsters they can buy for $5, or calculate change.

If they can't do basic arithmetic, there's no way they can understand the actual concepts underlying math, or have any type of number sense (and they don't; though, granted, neither do most of their elementary school teachers). It'll be absolutely horrible if they have no idea how to craft a coherent argument, or how to think deeply, because ChatGPT just spews out something (and who knows if it even gives factual answers; they'll never know to check). Couple this with other issues arising from overreliance on and overuse of smartphones and we're setting ourselves, and them, up for a grim future facing serious existential problems (i.e. climate change).


I have a question for someone with your perspective: something I've been wondering is should we be using programming to teach the underlying concepts of basic math? Would learning the algorithms of math be a more appropriate way of conveying the theory to students?


I think there is still some algorithm-based teaching, or at least the students I taught had been taught via algorithms. The simple O(n^2) multiplication one, or long division, or adding/subtracting with carrying/borrowing. I know a lot of the perceived complaints about the 'common core' change were a push away from this, though I never had any of those students (too young, and I left teaching two years ago, at least for now). That said, I think the common core way was a better way to go about it to help kids get a number sense.

I see so many kids who don't know that, for, say, 28 + 35 you can simply rearrange this to be 30+33. They have no concept of what addition means, or how you can regroup parts. Same with breaking down multiplication and how you can do it by parts and then sum it. They have no concept of how the numbers work, basically, and I don't think algorithms would give them that sense, it'd just give them steps to compute stuff.


Assembling the words is a skill you need, which is why you're taking a class that requires it.

To some extent, you are probably thankful your elementary school teachers didn't let you use a calculator to do stuff like 2 + 2 = 4 or the multiplication tables. Doing that quickly without any tools is useful in the real world.

Similarly, you will often have to argue for your point of view in real-time; there won't be time to craft a prompt and ask the AI what it thinks. "Why should we do this your way?" "Why should we hire you?" "Why should I let you go with just a warning?" It all matters.


I used to be told I needed to learn cursive. My parents had to learn how to operate a typewriter.

All of this is just various levels of abstraction. It’s still your responsibility to manage the final outcome, which is a far distance from “assembling words.”


I don't know if this is still the case, but my GCSE maths exams back in 2000 were split into "calculators allowed" and "calculators forbidden".

It can be useful to know how to use a tool, and can also useful to know how to do similar things without that tool.


Which cognitive dissonance?

Some people are here to learn basic skills they'll use all their lives; the other is here to do his job.


> The cognitive dissonance is wild to me

Maybe he found beauty in that irony.

I have no doubt progress will march on. I was telling a coworker that I think humans will have symbiotic relations with AIs in the future, and he noted, we will be the heart and AI will be the brains. I think that sums it up beautifully.


> The cognitive dissonance is wild to me, and I think approaching AI as a tool to be embraced is much healthier.

It's both healthy and realistic. Let's be real - it's not going away. It seems like it's mostly/somewhat useful now. It's only going to get better. People undoubtedly will find a way to leverage the tool to make themselves more productive. If that's the case, we should (1) continue to teach the skills exhibited by the machine learning models and (2) help students gain mastery over the tools so they can be productive when they start to work in a world where their peers are already using them.


Something I've been exploring for a while is that tools like ChatGPT are actually deceptively hard to use - or at least hard to use well.

This piece explores that idea in quite some depth when it talks about the different approaches to working with ChatGPT to generate an essay.

I see a lot of people try ChatGPT once, get disappointing results (maybe they gave it a logic puzzle or asked it to do some math!) and write it off as useless hype. I think that's a mistake.


Replace "Using AI" with "Asking your parents" and behold the breathtaking vacuousness and stupidity of this idea and how little students will actually learn from it. The wasteland that is students' utter lack of skills is troubling. "Prompt engineering" and all that. What utter tripe.


I think it's irresponsible and unethical to "require" students to use generative AI tools. These tools are highly controversial and pose severe cultural, moral, and increasingly legal dilemmas. It's like requiring students to take certain types of drugs to participate in a class, or learn how to use certain types of firearms. Unless the class is literally "how to take drugs" or "how to use guns" everyone would understandably flip a lid, yet this certainly wasn't a "how to use ChatGPT" class but another topic entirely (entrepreneurship if I'm not mistaken).

If I discovered a class I was planning to attend in academia was going to require the use of AI, I would launch a protest immediately. This is a violation of my human rights. If you're OK with using generative AI tools, that's your prerogative, but forcing me to use them is like forcing me to eat food grown with pesticides if I'm a staunch organic foods consumer…or possibly even worse, forcing a vegetarian to eat meat. Way-ay-ay out of line.


I understand this clearly is something you feel strongly about, but do you think this is somehow drastically different from using other tools for doing secondary research? As in, using an index tool (like the Dewey decimal system, or Google) to surface previously performed work, so you can then summarize it?

There is always a line to draw, why is this the point where you think it is important to draw a line?


That's actually a great example of why I agree with that person.

It is totally reasonable to say "you may use AI tools in this class", or by comparison, "you must do research in this class, here are some ways you can do research, it is not cheating to use Google".

It is a very different thing to require the use of AI tools by someone who can competently write an essay on their own, or to require the use of Google by someone who happens to have a bunch of relevant sources bookmarked. That's moving from "can you do this" to "can you use this specific tool", and away from the actual point of the class.

Another example of this I can remember was requiring specific IDEs for some courses, which invariably led to a quarter of the class time being wasted explaining buttons in a now long-outdated version of Eclipse in excruciating detail, instead of any actual programming concepts or even Java-specific concepts.


I think if you take that to another extreme, you could say that forcing someone to write code with a computer versus a typewriter is an arbitrary rule.

But there are actual reasons to enforce the tool - in that case, so you can execute your code in a similar environment to your peers. It is specific, but that's the point of the restriction.


I'd feel the same way about a job that requires the use of generative AI, nothing to do per se with a classroom setting or even education.


Well, jobs vs classes are very different. I'd also probably leave a job that gave me arbitrary assignments and tests and made me pay to be there, too, but that's generally fine in a classroom setting.


Not sure if you are joking.


This feels like something Sydney would say.


how is using generative AI tools immoral?


There is considerable controversy about using artists' works as training data for creating an AI that competes with those same artists.

Just to take a top google hit on the subject: https://www.nbcnews.com/tech/internet/lensa-ai-artist-contro...



Switching to a system where students watch lectures at home and do homework in class was overdue anyway. I think that’s the real way to stop AI from writing everything.


The concept exists already and is known by the term flipped classroom [0]. One well-known example is Khan Academy.

[0] https://en.wikipedia.org/wiki/Flipped_classroom


Yeah, there are a lot of positives. It's not just about stopping kids from cheating. It's a better system, in my opinion.


I don't think college students will like paying for that kind of arrangement.

Apply that model to high school and you'll just have teachers desperately trying to cram lessons into the beginning of class, to make up for the fact that 80% of the kids didn't watch the lectures.


I can't properly formulate why, but something about this seems like it'd be my most hated class ever.


Interesting - if you do manage to formulate it I'd be curious to read that.

I like how they're taking a proactive approach in trying out what role ChatGPT and the like can play in the generation and exploration of knowledge. I like the breakdown of the different ways of interacting with the tool, and the callout that collaborative editing seems like the best of the approaches they've witnessed. I also feel somewhat comforted by the remarks about how engaging critically with the tools' output and fact-checking it can actually be a valuable way to interact with the material.

Personally I think I would enjoy this class - I like the openness to experimentation. I can imagine that _getting graded for it_ could be a frustrating experience, though: with experimental classes there tends to be little standardization, and it can be unpredictable how you'll be judged.


It does sound like a horrible class.

I think the issue with AI is that the average person doesn't understand how it works, and the problem is when somebody in a position of power becomes infatuated with it. I don't fear AI, but I fear bad-AI, or misused-AI.

Those who understand how it works know that it's mostly garbage in, garbage out.


Indeed, but it's also gold in, gold out - something like that...

But on a more subtle note: because conversational AI has such a short feedback loop - much shorter than even regular search - I feel it's something where people can level up quickly. I think there's going to be great value there.


He teaches at a business school - probably loved by the people who take the classes, but not something that would appeal to anyone else.


You could try ChatGPT to help you formulate it properly. I'm not a native speaker, so I've used it a few times to come up with a first draft. More often than not the result is pretty bad, but it's much easier to work with than a blank page. I remove 2/3 of what was generated and write the rest myself, then ask the AI to point out spelling or grammatical mistakes. It's a bit tedious, but less so than translating or writing in a foreign language entirely by myself.
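If you'd rather script that last proofreading pass instead of pasting drafts into the chat UI, here's a minimal sketch assuming the OpenAI Python client; the model name and prompt wording are placeholders of mine, not anything from the article:

    # Minimal sketch of the "point out my mistakes" step. Assumes the
    # OpenAI Python client (pip install openai) and an OPENAI_API_KEY
    # set in the environment; model name and prompt are illustrative.
    from openai import OpenAI

    client = OpenAI()

    def point_out_mistakes(draft: str) -> str:
        # Ask the model to list errors rather than rewrite the draft,
        # so the wording stays your own.
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "You are a proofreader. List spelling and "
                            "grammatical mistakes; do not rewrite the text."},
                {"role": "user", "content": draft},
            ],
        )
        return response.choices[0].message.content

    print(point_out_mistakes("An short draft with a few mistake in it."))

The manual chat workflow works just as well; scripting it only helps if you do this often.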


My guess is, it sounds like a lot of unpleasant empirical tinkering exercises.


Yikes... In my professional teaching career I always tried to teach not the tools but the concepts. Tools are things that keep evolving. Students can find the right tools just fine.

Teaching how to use ChatGPT is exactly what I would advise against. There's no science behind it, except some knowledge based on personal experiments. It doesn't provide much educational progress.

I mean, one class for this would be OK. But basing all your classes on a tool?! Not in my class!


I believe the intent behind implementing such a policy is admirable. Teaching students the practical use of A.I. tools, their current limitations and biases, and how to refine prompts and results will be a necessary skill. However, I also find it problematic.

> As a result, the projects this semester are much better than previous pre-AI classes.

The use of the term "better" is concerning. According to the blog, the subject of the class is "innovation". He is essentially praising the A.I.'s ability to innovate, not his students'. If the object of school is to learn, what did his students learn about innovation, and how was that measured despite ChatGPT? It reads as if his students learned about using and optimizing a tool, not the subject at hand.

The common argument for A.I. is that it will free humans to pursue more interesting and creative activities. The pervasive theme among many of these articles, though, suggests creativity and thought are becoming a commodity, where the role of the human is simply to intervene and edit. Many architects are beginning to forfeit design to Midjourney and merely edit the results. We need to ask ourselves: is authoring or editing more worthwhile?


The author teaches entrepreneurism, among other things. Perhaps I am simply jaded, but if this "becomes a thing", how long before VC investments become mostly robotic, with decks generated by AI, reviewed by AI, recommended by AI, etc.?

I understand the importance of knowing how AI is being used in one's field, of being aware of it, knowing its limits, and, where applicable, knowing how to detect and counter it, but I'm not sure that simply considering it a tool in one's chest without first having some level of expertise in its limits is wise.

(I'm fortunate enough to work in a space where artisanal hand crafting is still the norm. The writing may be on the wall, but the wall is distant yet.)


I started cheating on book reports in the 3rd grade when I realized there's no way the teacher can know about every children's book, and as long as I picked something relatively new and obscure, I'd be safe.

In later years, when we had to write reports on books everyone read, this strategy broke down. CliffsNotes were a thing back then, but mostly I just skipped around through some chapters, talked with other students, and wrote the reports based on a very superficial knowledge of the story, with a few examples sprinkled in.

10/10 would use this to not have to suffer through reading Shakespeare or whatever the book du jour is, or whatever other topic I don't care about.


I love that we're already seeing some professors being intellectually honest about the future of AI. Knowing how to use AI, and knowing its limitations will be a needed skill in years ahead.

The analogy I like is calculators: they were academically shunned at first, and are now an integral part of learning advanced maths. Sorry, Mrs. Kelly, but we did end up having calculators with us all the time after all.


It seems premature to teach the limitations of ChatGPT in a course. Five years from now its capabilities and limitations may be very different. But I guess if you realistically believe that students are going to use it anyway, you have to do something.


Will be interesting to see where the needle lands on the Crutch <-> Tool scale for AI.


A more practical AI skill to teach would be to install face and voice recognition devices in the class and let them spy on one another. Catch those that have uttered disallowed words and punish them.


One of the comments on the article stated, "What a pleasurable read." I have no idea if this article was generated using the method described, but it gave me pause in the era of ChatGPT/AI.

Can something be pleasurable to read/view/listen to/etc., knowing the author of the work doesn't, for lack of a better term, have a soul? I don't know.

Perhaps this drifts too far into philosophy, but I myself find it adds something... more, knowing that a human being put their thoughts/feelings/heart into a piece of work. This is certainly, in part, informed by my theistic beliefs, but I recognize that others (like Sam Harris) wouldn't delineate at all between human thought processes and something like ChatGPT.

That being said, I would have to imagine some portion of people also feel something "other" about human connection that brings works of art/literature/music/film/etc. an undeniable beauty BECAUSE of their human-origin.


To me, what matters most about the media I consume is my personal emotional response to it. Whether a sentence is generated by an AI or a human, it has an equivalent impact on my feelings.

--

For me, at least, the important part of media I consume is how I feel about it. If an AI writes a sentence and a human later writes the same sentence, they have the same effect on me.

--

To me, what matters most about the media I consume is how it makes me feel. Whether a sentence is written by an AI or a human, it has the same impact on my emotions.

--

I wrote one of these. The other two are ChatGPT's rewording of the same idea, and a second rewording with "try to sound as human as possible". They are in an arbitrary order. Does one of them make you feel different than the others?


I for one am completely uninterested in AI-generated "art" (including writing, music, etc) because of the lack of a human who is communicating something. I'm concerned it will be increasingly difficult to tell what is worth my while.


Yes.

I keep going back to Stephen King's definition of writing as telepathy. The machine does not think.

That doesn't make it bad, it just makes it meaningless. I was able to use it to identify an obscure 90s B movie and that's pretty cool.

But in terms of actual meaning, you might as well be talking to a tree.

There's a long and storied history of humans who talk to trees and project deep meaning on their answers, so we're probably doomed to fall for the delusion anyway.


> Can something be pleasurable to read/view/listen to/etc., knowing the author of the work doesn't, for lack of a better term, have a soul? I don't know.

I find pleasure in looking at gorgeous natural landscapes: mountain ranges, forests, waterfalls, creeks, glaciers, the sky at night. I also enjoy eating fresh blackberries.

I enjoy them even though they don't have a soul or a human creator.


This article is a great intro to prompt engineering.

It helped me with my own prompts!



