
Kinda odd they didn't call it masonry given it's already been called that forever. At least grid-lanes is reasonably self-explanatory.

I'm pleased they renamed it, because grid-lanes opens up more than masonry layouts.

I've been waiting to be able to have a fully responsive layout (where sidebar content is interleaved with the page content on small screens, but sits in a sidebar on larger screens) without using any JavaScript, and this finally makes it possible.

Demo: https://codepen.io/pbowyer/pen/raLBVaV

Previous comment: https://news.ycombinator.com/item?id=46228993


The naming was half the discussion on implementing this. There were a lot of people smarter and more knowledgeable than me who had opinions on the name that I hadn't thought about. I remember one of the reasons was that the word "masonry" wasn't as obvious a concept for non-English speakers.

If only this were the actual standard for journalism, and not copy-pasting half-understood content with additional spin.

If what you are typically reading is

>[copypasta] half-understood content with additional spin

then what you are reading is not journalism.


> then what you are reading is not journalism

In most cases, if you aren't paying for it, it is not journalism.


Mux was getting a bit too expensive for the way we were using it, so I wrote a Sanity plugin for video hosting on Bunny CDN. It's been fantastic!
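For anyone tempted by the same move, the heart of it is just two calls to the Bunny Stream API. A minimal sketch, not the plugin's actual code; the helper names are mine, and libraryId/apiKey come from the Bunny dashboard:

    // Sketch only (assumes Node 18+ for global fetch).
    const BUNNY = "https://video.bunnycdn.com";

    // Step 1: register a video in the library; Bunny returns its GUID.
    async function createVideo(libraryId: string, apiKey: string, title: string): Promise<string> {
      const res = await fetch(`${BUNNY}/library/${libraryId}/videos`, {
        method: "POST",
        headers: { AccessKey: apiKey, "Content-Type": "application/json" },
        body: JSON.stringify({ title }),
      });
      if (!res.ok) throw new Error(`create failed: ${res.status}`);
      const { guid } = (await res.json()) as { guid: string };
      return guid;
    }

    // Step 2: PUT the raw file bytes against that GUID; Bunny transcodes from there.
    async function uploadVideo(libraryId: string, apiKey: string, guid: string, bytes: ArrayBuffer): Promise<void> {
      const res = await fetch(`${BUNNY}/library/${libraryId}/videos/${guid}`, {
        method: "PUT",
        headers: { AccessKey: apiKey },
        body: bytes,
      });
      if (!res.ok) throw new Error(`upload failed: ${res.status}`);
    }

Playback is then plain HLS from the library's pull zone, so the Sanity document only needs to store the GUID.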

> I'm beginning to think most "advanced" programmers are just poor communicators.

This is an interesting take, considering that programmers are experts at translating what someone has asked for (however vaguely) into code.

I think what you're referring to is the transition from 'write code that does X', which is very concrete, to 'trick an AI into writing the code I would have written, only faster', which feels like work that's somewhere between an art form and asking a magic box to fix things over and over again until it stops being broken (in obvious ways, at least).

Understandably, people who prefer engineered solutions do not like the idea of working this way very much.


When you oversee a team technically, as a tech lead or an architect, you need communication skills:

1. Based on how the engineer just responded to my comment, what is the understanding gap?

2. How do I describe what I want in a concise and intuitive way?

3. How do I tell an engineer what is important in this system and what are the constraints?

4. What assumptions will an engineer likely make that will cause me to have to make a lot of corrections?

Etc. This is all human to human.

These skills are all transferable to working with an LLM.

So I guess if you are not used to technical leadership, you may not have used those skills as much.


The issue here is that LLMs are not human, and so a human mental model of how to communicate doesn't really work. If I ask my engineer to do X, I know all kinds of things about them: their coding style, strengths and weaknesses, and that they have some familiarity with the code they are working with and won't bring the entirety of Stack Overflow's answers into the context we are working in. LLMs are nothing like this even when working with large amounts of context; they fail in extremely unpredictable ways from one prompt to the next. If you disagree, I'd be interested in what stack or prompting you are using that avoids this.

I have a Personal Theory(tm) that modern media in general has far too little value or accountability for something with such a massive amount of leverage.

I think you may have missed the part where mainstream media is no longer actually mainstream for that reason.

Honestly for the best. Fox is never going to tell you when there's water main work on your street. A local paper might.


If true, I'd like to know who is doing this so I can have exactly nothing to do with them.

Then why do they have all this?

While I would love for this to be true for financial and egotistical reasons, I have a growing feeling that this might not be true for long unless progress really starts to stall.


I've actually gone in the other direction. A year ago, I had that feeling, but since then I've gotten more certain that LLMs are never going to be able to handle complexity. And complexity is still the real problem of developing software.

We keep getting more cool features in the tools, but I don't see any indication that the models are getting any better at understanding or managing complexity. They still make dumb mistakes. They still write terrible code if you don't give them lots of guardrails. They still "fix" things by removing functionality or adding a ts-ignore comment. If they were making progress, I might be convinced that eventually they'll get there, but they're not.
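To make that concrete, the ts-ignore "fix" tends to look like this (a contrived example, not a transcript from a real session):

    // The compiler is flagging a real bug: the data is strings, not numbers.
    function total(prices: number[]): number {
      return prices.reduce((sum, p) => sum + p, 0);
    }

    // The "fix": silence the checker instead of fixing the data. At runtime
    // this now returns the string "01.994.50" instead of a number.
    // @ts-ignore
    const t = total(["1.99", "4.50"]);

The red squiggle goes away, and the bug ships.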


Yeah, but on the other hand there are plenty of human programmers who are bad at understanding complexity, make dumb mistakes, and write terrible code. Is there something fundamentally different about their brains compared to mine? I don't think so. They just aren't as good: not enough experience, or not enough neurons in the right places, or whatever it is that makes some humans better at things than others.

So maybe there isn't any fundamental change needed to LLMs to take them from junior to senior dev.

> They still "fix" things by removing functionality or adding a ts-ignore comment.

I've worked with many, many people who "fix" things like that. Hell, just this week, one of my colleagues "fixed" a failing test by adding delays.
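For anyone who hasn't had the pleasure, it looks something like this (Jest + React Testing Library; the Search component is hypothetical):

    import { render, screen } from "@testing-library/react";
    import React from "react";
    import { Search } from "./Search"; // hypothetical component that fetches results async

    test("shows results (the delay 'fix')", async () => {
      render(<Search query="foo" />);
      // Sleep and hope the request has finished: passes locally,
      // flakes again on a slower CI box.
      await new Promise((r) => setTimeout(r, 2000));
      expect(screen.getByText("results for foo")).toBeTruthy();
    });

    test("shows results (the actual fix)", async () => {
      render(<Search query="foo" />);
      // Wait for the condition itself; findByText polls until it appears.
      expect(await screen.findByText("results for foo")).toBeTruthy();
    });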

I still think current AI is pretty crap at programming anything non-trivial, but I don't think it necessarily requires fundamental changes to improve.


this whole analogy is so tired. "LLMs are stupid, but some humans are stupid too, therefore LLMs can be smart as well". let's put aside the obvious bad logic and think for one second about WHY some people are better than others at certain tasks. it is always because they have lots of practice and learned from their experiences. something LLMs categorically cannot do.


Wow, so much wrong in such a short comment.

> LLMs are stupid, but some humans are stupid too, therefore LLMs can be smart as well

Not what I said. The correct logic is "LLMs are stupid, but that doesn't prove that they MUST ALWAYS be stupid, in the same way that the existence of stupid people doesn't prove that ALL people are stupid".

> let's put aside the obvious bad logic

Please.

> WHY some people are better than others at certain tasks. it is always because they have lots of practice and learned from their experiences.

What? No it isn't. It's partly because they have lots of practice and learned from experience. But it's also partly natural talent.

> something LLMs categorically cannot do

There's literally a step called "training". What do you think that is?

The difference is that LLMs have a distinct offline training step and can't learn after that. Kind of like the Memento guy. Does that completely rule out smart LLMs? Too early to tell, I think.


> There's literally a step called "training". What do you think that is?

oh wow they use the same word so they must mean the same thing! hard to argue with that logic :)


Progress has stalled already; I haven't seen much improvement in the past year on my real-world tasks.


Wealth gives more options in most if not all of those scenarios, so I suspect it's hard to use it to test against many in vitro situations.


Completely off topic, but when fonts are the size they are in this article I can't read it; the words don't register as words above a certain size. I assume this isn't normal or it wouldn't be so common...

