Hacker News | andyg_blog's comments

I don't think it contradicts the OP. OP says the system is unreliable: memory leaks that lead to out-of-memory failures, for example. Smart pointers would stabilize things. (Also note that OP says their smart pointers PR was rejected.)


That's a generalized statement. Smart pointers can stabilize things; used wrongly, they can cause just as many issues. Sprinkling in smart pointers so that smart and raw pointers are now mixed can cause double frees and huge maintenance issues. So in my opinion, creating a single PR to introduce smart pointers is not necessarily "stability". He should have created an architecture plan and gotten upstream and downstream aligned.


Completely agree on alignment. Without it, it's a shortcut to rejection. I actually wrote a lot about this in a blog post I called "Minimum Reviewable Unit" https://gieseanw.wordpress.com/2025/03/21/minimum-reviewable...


This is a small April Fools' project I created while exploring git commit hooks. I wondered if you could fail a commit based on the commit hash itself. Turns out you can't; you have to wait until the post-commit hook fires, and then do a reset on failure. Eventually I decided to see just how far I could take it, and, well, now there's an official website and Python SDK distributed through pip.
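For the curious, the core trick can be sketched roughly like this (the hook body and the prefix policy here are my own hypothetical illustration, not the actual project's code):

```python
#!/usr/bin/env python3
"""Sketch of a post-commit hook (.git/hooks/post-commit) that rejects
commits based on their hash. The prefix policy below is hypothetical."""
import subprocess
import sys

FORBIDDEN_PREFIXES = ("bad",)  # hypothetical policy: no hashes starting with "bad"


def hash_is_forbidden(sha: str, prefixes=FORBIDDEN_PREFIXES) -> bool:
    """Pure policy check, so the logic can be tested without a git repo."""
    return sha.startswith(tuple(prefixes))


def main() -> int:
    # The hash only exists after the commit is made, which is why this
    # has to run as a post-commit hook rather than pre-commit.
    sha = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    if hash_is_forbidden(sha):
        print(f"commit {sha} rejected by hash policy; rolling back",
              file=sys.stderr)
        # Undo the commit but keep the changes staged.
        subprocess.run(["git", "reset", "--soft", "HEAD~1"], check=True)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The soft reset leaves the working tree and index intact, so the author can simply re-commit (hoping for a luckier hash).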


>the whole article assumes the only language in the world is Python.

This was my take as well.

My company recently started using Dspy, but you know what? We had to stand up an entire new repo in Python for it, because the vast majority of our code is not Python.


I think this is an important point! I am actually a big fan of doing what works in the language(s) you're already using.

For example: I don't use Dspy at work! And I'm working in a primarily dotnet stack, so we definitely don't use Dspy... But still, I see the same patterns seeping through that I think are important to understand.

And then there's a question of "how do we implement these patterns idiomatically and ergonomically in our codebase/language?"


Out of curiosity, what are you finding success with in dotnet land? My observation is that it's not clear when Semantic Kernel is recommended versus one of Microsoft's multiple other newly branded creations.


Agent Framework + middleware + source generation is the way to go.

Agent Framework made middleware much easier to work with.

Source generation makes it possible to build "strongly typed prompts"[0]

Middleware makes it possible to substitute those at runtime if necessary.

[0] https://github.com/CharlieDigital/SKPromptGenerator/tree/mai...


We have been using Agent Framework. I've also been eyeing LlmTornado. Personally, I find it hard in dotnet as a whole to build the kind of abstractions I want to make it ergonomic to implement AI stuff.

I've been fiddling around with many prototypes to try to figure out the right way to do this, but it feels challenging; I'm not yet familiar enough with how to do this ergonomically and idiomatically in dotnet haha


Why did you do that instead of using Liquid templates?


Really appreciate it. I love mixing humor with technical communication. This post is topical because strangely I encountered GUID truncation three times in the last week.


I'm skeptical these status ladders truly exist outside of the author's imagination, but then again I've never been part of that side of tech culture. It doesn't ring true of my own experience where pretty much every technical person sees other technical people on equal footing. This includes "big names" in tech that I've spoken with.

Edited to add: what's hypothetical about Alice being happy to run a coffee shop or Bob satisfied being a 90th percentile engineer (measured how?)? Plenty of these people exist, I've met them!


A more general rule is to push ifs close to the source of input: https://gieseanw.wordpress.com/2024/06/24/dont-push-ifs-up-p...

It's really about finding the entry points into your program from the outside (including data you fetch from another service), and then massaging the data in such a way that you make as many guarantees as possible (preferably encoded into your types) by the time it reaches any core logic, especially the resource-heavy parts.
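As a hypothetical sketch of that idea (all names here are mine, invented for illustration): validate raw input once at the entry point and hand the core logic a type that can only hold valid data.

```python
# Hypothetical sketch: parse raw input at the boundary into a type that
# encodes the guarantee, so core logic needs no defensive ifs.
from dataclasses import dataclass


@dataclass(frozen=True)
class Quantity:
    """A parsed quantity, guaranteed positive by construction."""
    value: int


def parse_quantity(raw: str) -> Quantity:
    """Boundary code: all the checking happens here, once."""
    n = int(raw)  # raises ValueError on garbage input
    if n <= 0:
        raise ValueError("quantity must be positive")
    return Quantity(n)


def total_price(qty: Quantity, unit_price_cents: int) -> int:
    # Core logic: no ifs needed, the type guarantees qty.value > 0.
    return qty.value * unit_price_cents
```

Everything past the boundary can then take a `Quantity` and never re-check it.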


That's almost the same thing as parse don't validate: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...


Doesn't this obfuscate what assumptions you can make when trying to understand the core logic? You prefer to examine all the call chains everywhere?


The “core logic” of a program is what output it yields for a given input.

If you find a bug, you find it because you discover that a given input does not lead to the expected output.

You have to find all those ifs in your code because one of them is wrong (probably in combination with a couple of others).

If you push all your conditionals up as close to the input as possible, your hunt will be shorter, and fixing will be easier.


This is why we invented type systems. No need to examine call chains, just examine input types. The types will not only tell you what assumptions you can make, but the compiler will even tell you if you make an invalid assumption!


You can't shove every single assumption into the type system...


You can and should put as many as you can there

https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...

Instead of validating that someone has sent you a phone number in one spot and then passing along a string, you can just as easily have a function construct an UncheckedPhoneNumber. You can choose to only construct VerifiedPhoneNumbers if the user has gone through a code check. Both would allow you to pluck a PhoneNumber out of them for where you need to have generic calling code.

You can use this sort of pattern to encode anything into the type system. Takes a little more upfront typing than all of those being strings, but your program will be sure of what it actually has at every point. It's pretty nice.
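A minimal sketch of that pattern (the format check and function names are my own assumptions, just for illustration):

```python
# Hypothetical sketch: distinct types for unchecked vs. verified phone
# numbers, so "has this been code-checked?" lives in the type system.
from dataclasses import dataclass
import re


@dataclass(frozen=True)
class UncheckedPhoneNumber:
    number: str  # well-formed, but ownership not yet proven


@dataclass(frozen=True)
class VerifiedPhoneNumber:
    number: str  # the user has passed a code check


def parse_phone(raw: str) -> UncheckedPhoneNumber:
    """Construct an UncheckedPhoneNumber, or fail loudly at the boundary."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) != 10:  # assumed 10-digit format, for illustration
        raise ValueError(f"not a 10-digit phone number: {raw!r}")
    return UncheckedPhoneNumber(digits)


def verify(unchecked: UncheckedPhoneNumber, code_ok: bool) -> VerifiedPhoneNumber:
    """The only way to obtain a VerifiedPhoneNumber is a passed code check."""
    if not code_ok:
        raise ValueError("code check failed")
    return VerifiedPhoneNumber(unchecked.number)
```

Code that needs a verified number takes `VerifiedPhoneNumber` as a parameter and simply cannot be handed anything weaker.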


Yep! I have seen so much pushed into a type system that in the end there was hardly any code needed to do validation or scaffolding… to the point where it felt magical


You can express a lot of concepts just through types in languages with richer type systems.


Even without a rich type system you can express a lot of things just through naming.

You just can't enforce those assumptions.


You can enforce them (statically) by other means if you’re determined enough, eg by using lint rules which enforce type-like semantics which the type system itself doesn’t express.


This does rely on the language having a sophisticated-enough type system to be able to extract enough type information for the rules to work in the first place.


True, but there are still documented interface contracts you can program against. The compiler won’t catch violations of the non-type parts, but the requirements are still well-defined with a proper interface contract. It is a trade-off, but so is repeating the same case distinction in multiple parts of the program, or having to pass around the context needed to make the case distinction.


You can at least shove them into the constructors.




Leave the dingos alone


  > with admirable tunnel vision, bullheadedness and mission for maximally general algebraic and arbitrary constraint type systems.
I believe they're called keyhole optimizations, greedy search, and "the customer is always right..."


The idea and examples are that the type system takes care of it. The rule of thumb is worded overly generally, it's more just about stuff like null checks if you have non-nullable types available.


No, I don't think so, because if you make your assumptions early, then the same assumptions hold throughout the entire program, and that makes them easy to reason about.


If you’ve massaged and normalized the data at entry, then the assumptions at core logic should be well defined — it’s whatever the rules of the normalized output are.

You don’t need to know all of the call chains because you’ve established a “narrow waist” where ideally all things have been made clear, and errors have been handled or scoped. So you only need to know the call chain from entry point to narrow waist, and separately narrow waist till end.


Markdown that is then unceremoniously shoveled to WordPress with some finagling of the images. I'm not trying to experiment with fancy tech when I write, just trying to get the words out of my head.

If something gets mathy I'll use LaTeX.


You can downvote me all you want. I'm not claiming any kind of superiority of WordPress but merely answering the question in the OP. I've had this blog for 15 years now, long before WordPress was so reviled. It just works and I'm still on their free tier.


Please elaborate? I define experience mostly in terms of time spent on something, and I consider any engineer with less than 5 yrs of experience "inexperienced" regardless of whether they are talented or not. I've met many talented but inexperienced engineers who still needed redirecting.


What amount of time do you think it took to devise a combination of advanced imaging techniques and machine learning to decode the Herculaneum scroll?


I agree that we should have safe-by-default "decay" behavior to a plain ol' std::string, but I'm also picking up that many aren't certain it's a useful syntactic sugar on top of the fmt lib? Many other languages have this same syntax, and it quickly becomes your go-to way to concatenate variables into a string. Even if it didn't handle UTF-8 out of the box, so what? The amount of utility is still worth it.


Greek mythology? But seriously please elaborate for my less educated self.


it tests syllogistic reasoning: Jason's mother was Tyro, whose father was Poseidon, whose father was Kronos. it also tests whether it "eagerly" rather than comprehensively considers something: a maternal great-grandfather could be the father of either one's maternal grandmother or maternal grandfather. so the answer could also be king Aeolus of the Etruscans.

ideally a model would be able to answer this accurately and completely.


I think there are more possible answers? Jason's mother differs depending on the author...

For example, Jason's mother was Philonis, daughter of Mestra, daughter of Daedalion, son of Hesporos. So Jason's maternal great-grandfather was Hesporos.


LLMs often don't do well on tasks that require composition into smaller subtasks. In this case there is a chain of relations that depend on the previous result.

