illuminator83's comments | Hacker News

Especially since the US soon won't have any allies left.

I'm hoping for a future in which humankind looks back with embarrassment at this silly period in its history, in which people used to think a leaky, bad abstraction like garbage collection was ever a good approach to dealing with resource lifetimes.


Still, the whole world runs on GC'd languages, so it must be an abstraction at least some people like to work with.

And I'm pretty sure that in some cases using a GC is the only option if you don't want to go crazy.


I think we are just used to it. Like we are used to so many suboptimal solutions in our professional and personal lives.

I mean, look at something like C++, or the name "std::vector" specifically. There are probably 4 trillion LoC containing that code out there - in production. I'm used to it; that doesn't make it good.


Monkey's paw: you get your wish, but so does someone who wants RAII and single-use malloc to be left behind as leaky and bad abstractions.

We all happily march into a future where only arena allocation is allowed, and when the arena is overfull it can only be fully reset without saving data. Copying still-used data out of it before the reset is not allowed, as that's a copying half-space garbage collector. Reference counting is of course not allowed either, as that's also garbage collection. Everyone is blessed...?


Well, to be fair, RAII is a leaky abstraction. For example, if your programme crashes there's no guarantee that you'll ever give the resources back.

See https://en.wikipedia.org/wiki/Resource_acquisition_is_initia...


> See https://en.wikipedia.org/wiki/Resource_acquisition_is_initia...

This example is specific to C++.

> (..) if your programme crashes there's no guarantee that you'll ever give the resources back.

What guarantees can you have from a "crashing program", and by what definition of crashing?

> RAII is a leaky abstraction

Any abstraction is leaky if you look close enough.


> What guarantees can you have from a "crashing program", and by what definition of crashing?

You might like https://www.usenix.org/conference/hotos-ix/crash-only-softwa...


Some problems are just fundamentally easier to solve using cyclic data structures whose lifetime exceeds the scope where they were created, which would be quite difficult to clean up properly in any other way.


Indeed. I also hope we stop using all of these "high-level" languages. So much overhead just so people don't have to learn how to write proper optimized machine code. It's super-trivial to write a website directly in that too, and it only takes a bit longer, but it is almost twice as fast.


I'm a big fan of high-level languages and abstractions. I'm just not a fan of bad abstractions.


Did you know the Linux kernel has a tracing garbage collector in it, specifically for Unix socket handles? It seems to be a recurring solution to a common problem.


There are lots of suboptimal solutions to lots of problems out there. I don't know why it would matter that the Linux kernel makes the same mistake. And I'm sure that wasn't the only possible solution - just something somebody implemented, and no one bothered to change it because it worked "well enough". But I wouldn't be surprised if it is known to cause the kinds of issues GCs are known for, such as race conditions, resource exhaustion, and stalling.

Let me do some quick research:

https://gist.github.com/bobrik/82e5722261920c9f23d9402b88a0b... https://nvd.nist.gov/vuln/detail/cve-2024-26923


I do not know the guy, and I do not care who he is. This really is not "slop". I can attest to the validity of almost all of his points based on my own career. And even if he used ChatGPT assistance to help with the writing, the content clearly was not invented by ChatGPT. This is valuable advice for people in our industry.


You must not have many engineering leaders in your LinkedIn. These are all rote points that are spouted on there daily.


Are you sure? I've been confidently wrong about stuff before. Embarrassing, but it happens. And I've worked with many people who are sometimes wrong about stuff too. With LLMs we call that "hallucinating"; with people we just call it a "lapse in memory", an "error in judgment", "being distracted", or a plain "mistake".


True, but people can use qualifier words like "I think …" or "Wasn't there this thing …", which let you judge their certainty about the answer.

LLMs are always super confident and tell you how it is. Period. You would soon stop asking a coworker who repeatedly behaved like that.


Yeah, for the most part. But I've had a few instances in which someone was very sure about something and still wrong. Usually not about APIs, but rather about stuff that is more work to verify or not quite as timeless - cache optimization issues, or even the suitability of certain algorithms for certain problems. The world changes a lot, and sometimes people don't notice and stick to what was state of the art a decade ago.

But I think the point of the article is that you should have measures in place that make hallucinations not matter, because they will be noticed in CI and tests.


It’s different. People don’t just invent random APIs that don’t exist. LLMs do that all the time.


For the most part, yes. Because people usually read docs and test it on their own.

But I remember a few people long ago confidently telling me how to do this or that in e.g. "git", only to find out during testing that it didn't quite work like that. Or telling me how some subsystem could be tested, when it didn't work like that at all. Because they operated from memory instead of checking, or confused one tool/system for another.

LLMs can and should verify their assumptions too. The blog article is about that. That should keep most hallucinations and mistakes people make from doing any real harm.

If you let an LLM do that, it won't be much of a problem either. I usually link the LLM to an online source for the API I want to use, or tell it to just look things up, so it is less likely to make such mistakes. It helps.


Again, with people it is a rare occurrence. LLMs do it regularly. I just can’t believe anything they say.


I do agree. I still think the article articulates a very interesting thought: the better the input for a problem, the better the output. This applies to LLMs, but also to colleagues.


It's the tragedy of the commons all over again. You can see it in action everywhere people or communities should cooperate for the common good but don’t, because many either fear being taken advantage of or quietly try to exploit the situation for their own gain.


The tragedy of the commons is actually something else. The problem there comes from one of two things.

The first is that you have a shared finite resource, the classic example being a field for grazing which can only support so many cattle. Everyone then has the incentive to graze their cattle there and over-graze the field until it's a barren cloud of dust because you might as well get what you can before it's gone. But that doesn't apply to software because it's not a finite resource. "He who lights his taper at mine, receives light without darkening me."

The second is that you're trying to produce an infinite resource, and then everybody wants somebody else to do it. This is the one that nominally applies to software, but only if you weren't already doing it for yourself! If you can justify the effort based only on your own usage then you don't lose anything by letting everyone else use it, and moreover you have something to gain, both because it builds goodwill and encourages reciprocity, and because most software has a network effect so you're better off if other people are using the same version you are. It also makes it so the effort you have to justify is only making some incremental improvement(s) to existing code instead of having to start from scratch or perpetually pay the ongoing maintenance costs of a private fork.

This is especially true if your company's business involves interacting with anything that even vaguely resembles a consolidated market, e.g. if your business is selling or leasing any kind of hardware. Because then you're in "Commoditize Your Complement" territory where you want the software to be a zero-margin fungible commodity instead of a consolidated market and you'd otherwise have a proprietary software company like Microsoft or Oracle extracting fees from you or competing with your hardware offering for the customer's finite total spend.


About 7 or 8 years ago I worked at a startup which got money from Softbank / Masayoshi Son. Our founder and our CTO went to meet him in LA IIRC to pitch.

They came back telling us he was basically asleep during the pitch meeting which was scheduled for only 10 minutes anyway.

Our business/product really had no chance of succeeding at that point, and most of us knew it. We got some money from SoftBank anyway - I forget how much. Our management was basically laughing about how easy it was to get funding from SoftBank.

I jumped ship a year or so later, and that was good timing.


I think it's mostly the fact that C dependencies are much rarer and much harder to add and maintain.

The average C project has at most a handful of other C dependencies. The average Rust, Go or NodeJS project? A couple hundred.

Ironically, because dependency management is so easy in modern languages, people started adding lots of dependencies everywhere. Need a leftpad? Just add one line in some yaml file, or hit "Alt-Enter" in an IDE. Done.

In C? That is a lot more work. If you add a dependency at all, you do it for stuff you absolutely need for your project, because it is not easy. In all likelihood you write that stuff yourself.


CVE-2024-3094, is it? You could argue that in C it is much easier to obfuscate an exploit. Implementing something in C is also a lot more work, so you might also be more inclined to use a third-party library.


I never found it hard to add a C library to my projects using pkg-config. And yes, when the package comes from Debian I have some trust that it is not a huge supply-chain risk.

I think the problem started with the idea of language-level package managers that are just GitHub collections instead of curated distribution-level package managers. So my response to "C has no good package manager" is: it should not have a package manager, and Cargo, npm, and the countless Python package managers should not exist either.


pkg-config isn’t the hard bit though, is it?

Usually the hard bit with C libraries is having dependencies with dependencies all of which use their own complex build systems, a mix of Make, CMake, Autotools, Ninja, etc.

Then, within that, a mix of projects using the normal standard names for build parameters and projects that don't - e.g. PROJECTNAME_COMPILER instead of CMAKE_C_COMPILER.


The package manager takes care of the dependencies, and one does not need to compile the libraries one uses, so how complicated their build systems are does not matter. I install the -dev package and I am done. This works beautifully, and where it does not, the right move is to fix that.


In most of my projects, I had to compile and host many of the C++ packages I used for work (lots of computer vision, video codecs, etc.) myself. The latest and greatest of OpenCV, dlib or e.g. gstreamer weren't available on the distros I was using (Ubuntu, Fedora, CentOS); they'd sometimes lag a year or more behind. Some stuff was outright not available via the package manager in any version.

So, yes, you very often do have to figure out how to build and package these things yourself. There are also no "leftpad"-style packages in C, so if you need something like that you write it yourself.

In contrast, virtually every software package, in every version, is available to you in cargo or npm.


Virtually every package is in cargo and npm because there is no curation. This is exactly why it is a supply-chain risk. The fix is to have a curated list of packages, but that is what Linux distributions are. There is no easy way out.


Also, I'm pretty sure it is a feature, because the general population wants pleasant interactions with their ChatGPT, and OpenAI's user-feedback research will have told them this helps. I know some non-developer people who mostly talk to ChatGPT about stuff like

- how to cope with the sadness of losing their cat

- ranting about the annoying habits of their friends

- finding all the nice places to eat in a city

etc.

They do not want that "robot" personality and they are the majority.


Agreed on all points.

I also recall reading a while back that it's a dopamine trigger: if you make people feel better using your app, they keep coming back for another fix. At least until they realize the hollow nature of the affirmations and start getting negative feelings about it. Such a fine line.


I'm assuming "High Availability" is what is really meant here.


When you are commenting your schema, that's true. Anything generated by machines doesn't need comments either. But when it's written by people? And the values? Those belong with the 'payload'.

