
How do you think about core business logic (or at least, _significant_ business logic) being embedded within prompts like here: https://github.com/langmanus/langmanus/blob/main/src/prompts...

Do you think or worry about not being able to test these things? (Or is that just me? :))

Details: I acknowledge/understand this comes from a dependency (ReAct agents), not directly from langmanus.

But still, I'm curious what the HN community thinks about testability, veracity, potentially conflicting or overlapping instructions across agents, etc., with respect to "prompts" as sources of logic. I acknowledge it's a general practice with LLMs.
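
To make the testability concern concrete, here is a minimal sketch of what a "prompt regression test" could look like. It assumes the prompts are plain markdown/template files under src/prompts/ and only checks structural invariants of the prompt text, never model behavior; the loader helper, file names, and agent names below are hypothetical illustrations, not the project's actual API.

    # Minimal sketch (assumed layout, not langmanus's actual one): prompts are
    # plain .md templates under src/prompts/, and we only assert structural
    # invariants -- no model calls, so the tests stay fast and deterministic.
    from pathlib import Path

    PROMPT_DIR = Path("src/prompts")

    def load_prompt(name: str) -> str:
        # Hypothetical loader; a real project might render template variables first.
        return (PROMPT_DIR / f"{name}.md").read_text()

    def test_supervisor_prompt_names_every_agent():
        # If a supervisor prompt routes between agents, renaming or removing an
        # agent should fail a test instead of silently breaking routing.
        prompt = load_prompt("supervisor")  # hypothetical file name
        for agent in ("researcher", "coder", "browser"):  # hypothetical agent names
            assert agent in prompt, f"supervisor prompt no longer mentions {agent}"

    def test_handoff_format_is_consistent_across_agents():
        # Cheap guard against conflicting instructions: two prompts should not
        # demand mutually exclusive output formats for the same handoff.
        supervisor = load_prompt("supervisor")
        coder = load_prompt("coder")
        assert not ("respond only in JSON" in supervisor
                    and "respond in plain text" in coder)

This obviously doesn't verify that the model actually follows the instructions (that's where eval harnesses or golden-transcript tests would come in), but it at least puts the prompt text under the same CI discipline as code.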



I thought the logic was going to stay on the CPU, and inference-related tasks would use the GPU.



