I'm pleased they renamed it, because grid-lanes opens up more than masonry layouts.
I've been waiting to be able to have a fully responsive layout (where sidebar content interleaves with the page content on small screens, but sits in an actual sidebar on larger screens) without using any JavaScript, and this finally makes it possible.
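A rough sketch of that kind of layout, using the older experimental `grid-template-rows: masonry` syntax from the Firefox prototype (the renamed grid-lanes syntax may well differ, and the class names here are hypothetical):

```css
/* Small screens: a single column, so sidebar items
   simply interleave with page content in DOM order. */
.layout {
  display: grid;
  grid-template-columns: 1fr;
}

/* Larger screens: two columns. Sidebar items are pulled
   into the second column, and masonry packing lets each
   column flow independently instead of aligning rows. */
@media (min-width: 60rem) {
  .layout {
    grid-template-columns: 3fr 1fr;
    grid-template-rows: masonry; /* experimental; naming still in flux */
  }
  .layout > .sidebar-item {
    grid-column: 2;
  }
}
```

The key point is that the same DOM order serves both layouts, which is what previously required JavaScript to reshuffle.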
The naming was half the discussion on implementing this. There were a lot of people smarter and more knowledgeable than me who had opinions about the name that I hadn't considered. I remember one of the reasons was that the word "masonry" wasn't as obvious a concept for non-English speakers.
> I'm beginning to think most "advanced" programmers are just poor communicators.
This is an interesting take, considering that programmers are experts at translating what someone has asked for (however vaguely) into code.
I think what you're referring to is the transition from 'write code that does X', which is very concrete, to 'trick an AI into writing the code I would have written, only faster', which feels like work that's somewhere between an art form and asking a magic box to fix things over and over again until it stops being broken (in obvious ways, at least).
Understandably, people who prefer engineered solutions don't like the idea of working this way very much.
The issue here is that LLMs are not human, so a human mental model of how to communicate doesn't really work. If I ask my engineer to do X, I know all kinds of things about them: their coding style, their strengths and weaknesses, and that they have some familiarity with the code they're working on and won't drag the entirety of Stack Overflow's answers into the context we're working in. LLMs are nothing like this; even with large amounts of context, they fail in extremely unpredictable ways from one prompt to the next. If you disagree, I'd be interested in what stack or prompting you're using that avoids this.
I have a Personal Theory(tm) that modern media in general has far too little value or accountability for something that has such a massive amount of leverage.
While I would love for this to be true for financial and egotistical reasons, I have a growing feeling that this might not be true for long unless progress really starts to stall.
I've actually gone in the other direction. A year ago, I had that feeling, but since then I've gotten more certain that LLMs are never going to be able to handle complexity. And complexity is still the real problem of developing software.
We keep getting more cool features in the tools, but I don't see any indication that the models are getting any better at understanding or managing complexity. They still make dumb mistakes. They still write terrible code if you don't give them lots of guardrails. They still "fix" things by removing functionality or adding a ts-ignore comment. If they were making progress, I might be convinced that eventually they'll get there, but they're not.
Yeah, but on the other hand there are plenty of human programmers who are bad at understanding complexity, make dumb mistakes, and write terrible code. Is there something fundamentally different about their brains compared to mine? I don't think so. They just aren't as good: not enough experience, or not enough neurons in the right places, or whatever it is that makes some humans better at things than others.
So maybe there isn't any fundamental change needed to take an LLM from junior to senior dev.
> They still "fix" things by removing functionality or adding a ts-ignore comment.
I've worked with many, many people who "fix" things like that. Hell, just this week one of my colleagues "fixed" a failing test by adding delays.
I still think current AI is pretty crap at programming anything non-trivial, but I don't think it necessarily requires fundamental changes to improve.
this whole analogy is so tired. "LLMs are stupid, but some humans are stupid too, therefore LLMs can be smart as well". let's put aside the obvious bad logic and think for one second about WHY some people are better than others at certain tasks. it is always because they have lots of practice and learned from their experiences. something LLM categorically cannot do
> LLMs are stupid, but some humans are stupid too, therefore LLMs can be smart as well
Not what I said. The correct logic is "LLMs are stupid, but that doesn't prove that they MUST ALWAYS be stupid, in the same way that the existence of stupid people doesn't prove that ALL people are stupid".
> let's put aside the obvious bad logic
Please.
> WHY some people are better than others at certain tasks. it is always because they have lots of practice and learned from their experiences.
What? No it isn't. It's partly because they have lots of practice and learned from experience. But it's also partly natural talent.
> something LLM categorically cannot do
There's literally a step called "training". What do you think that is?
The difference is that LLMs have a distinct offline training step and can't learn after that. Kind of like the guy in Memento. Does that completely rule out smart LLMs? Too early to tell, I think.
Completely off topic, but when fonts are as large as they are in this article I can't read it; the words don't register as words above a certain size. I assume this isn't normal, or it wouldn't be so common...