
I've been using this prompt on articles that generate debate, like microservices or JWTs. It brings up some interesting points for this article...

Look at this article and point out any wording that seems meant to push a certain viewpoint. Note anything important the author leaves out, downplays, or overstates, including numbers that seem cherry-picked or lack context. Clearly separate basic facts from opinions or emotional language. Explain how people with different viewpoints might read the article differently. Also call out any common persuasion tactics like loaded wording, selective quotes, or appeals to authority.
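For anyone who wants to try this programmatically rather than pasting into a chat window, here is a minimal sketch assuming the OpenAI Python client; the model name and the article.txt path are placeholders I chose, not anything specified above:

    # Minimal sketch: run the critique prompt above against an article's text.
    # Assumes the OpenAI Python client (pip install openai) and an API key in
    # the OPENAI_API_KEY environment variable; model name is a placeholder.
    from openai import OpenAI

    CRITIQUE_PROMPT = (
        "Look at this article and point out any wording that seems meant to push "
        "a certain viewpoint. Note anything important the author leaves out, "
        "downplays, or overstates, including numbers that seem cherry-picked or "
        "lack context. Clearly separate basic facts from opinions or emotional "
        "language. Explain how people with different viewpoints might read the "
        "article differently. Also call out any common persuasion tactics like "
        "loaded wording, selective quotes, or appeals to authority."
    )

    def critique(article_text: str) -> str:
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any capable chat model works
            messages=[
                {"role": "user",
                 "content": f"{CRITIQUE_PROMPT}\n\n---\n\n{article_text}"},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        with open("article.txt") as f:  # placeholder path for the article text
            print(critique(f.read()))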


Who would win: the combined efforts of the best scientists in the field, or innuendo from a fancy Markov model?

You could at least paste the points here.


For me, it’s a way to break down and analyze articles more critically, not to pick a side.

That only works if:

1. You assume that your LLM of choice is perfect and impartial on every given topic, ever.

2. You assume that your prompt doesn't interfere with said impartiality. What you have written may seem neutral at first glance, but from my perspective, wording like yours would probably prime the model to pick apart absolutely anything, finding flaws that aren't really there (or making massive stretches), because you already presuppose that whatever you give it was written with an intent to lie and misrepresent. The wording heavily implies that what you gave it definitely uses "persuasion tactics" or "emotional language", or that it downplays/overstates something - you just need it to find all that. So it will try to return anything that supports that implication.


You're reading too much into it. I make no assumptions.

It doesn't matter whether you make assumptions or not - your prompt does. I think the point of failure isn't even necessarily the LLM, but your writing, because you leave the model no leeway to report back that something is truly neutral or impartial. Instead, you're asking it to dig up proof of wrongdoing no matter what, basically asserting that lies surely exist in whatever you post and that you just need help uncovering the deception. When told to do this, it will read absolutely anything you give it in the most hostile way possible, stringing together any coherent-sounding arguments that reinforce the viewpoint your prompt implies.

This reads to me as a way to couch your ignorance as criticism while learning nothing from reading a study like this. Why not do this for your own biases?

What metrics do you focus on while reading an article that result in you confirming your own preconceived ideas?

If you have to come at an article like this in a hostile way, then you're not learning anything from it; you're just confirming your own biases. I would recommend turning all of these criticisms inward, at your own biases, in terms of what you react to and feel needs explaining, and then checking whether it's explained in the paper above. Then see whether the scientific method they undertook convinces you.

Otherwise you're prepping yourself to continue living in an echo chamber.


I'm not even talking about the article. Are you a bot?

“It brings up some interesting points for this article...”

Why are you acting like an LLM that had its own earlier statements run off the end of the context window and can't remember you yourself said them?


Note that LLMs can easily deduce your biases from your prompt and give you only information that confirms them.

Thank you!


