Hacker News | blackbrokkoli's comments

Variations of this are a common talking point in the self-help world, and while it's a powerful antidote against "I'm sure some day this giant thing will suddenly be easy and I'll just do it", it's not a silver bullet. Here are some counter-considerations:

- Doing anything usually involves prep work. Want to take a step? First put on your shoes (literally or figuratively, depending). If your attempted habit is 70% prep, your brain will somewhat rightfully conclude "this is stupid" fairly quickly.

- "Just do X every day for [long time period]" has an inherent falsification problem: You aren't "allowed" to argue against it until you tried it. Stopped after 2 years because you saw no change (and 5 was recommended)? You are still not allowed to argue against the strategy!

- You can actually make steps so small that they're useless. I once set out to have (at least) one GitHub commit online per day (going for that green tile!). This led to my brain finding hacks like rephrasing one sentence of an old blog post. Doing that for 20 days is way less effective than a single coding session, at 20 times the emotional cost.

- Doing something daily for a long time is extremely hard to achieve, especially if it's not the main thing you're doing. It's rare in the wild. You will find piano virtuosos who play piano daily, but not piano virtuosos who also go to the gym daily.


> Doing anything usually involves prep work. Want to take a step? First put on your shoes (literally or figuratively, depending). If your attempted habit is 70% prep, your brain will somewhat rightfully conclude "this is stupid" fairly quickly.

Note that this is also something that can be weaponized. Recently I've learned to draw and I found I kept having great difficulty just starting. To get over that I made the agreement with myself that at least once every two days, I would grab a pencil and page through my sketchbook. I'd find myself on the first blank page holding a pencil.

Turns out your brain thinking prep work without actual work is stupid really helps here. Once you've tricked yourself into doing the prep work, you might as well do the work-work.

e.g. for distance running: just make the deal with yourself that putting on your running clothes/shoes/etc. and taking one step outside counts as having run that day. You'll find yourself going for a run anyway once you're outside, because you might as well.

> "Just do X every day for [long time period]" has an inherent falsification problem

Very true, but unfortunately a lot of things worth doing require that sort of investment. When learning to draw I hated every single second for the first ~two months or so. And then like a switch getting flipped I started having fun.

> You can actually make steps so small that they're useless.

You should take the biggest steps you can actually keep yourself to. Maybe that leads to steps that are sub-optimally small, but taking useless steps is still doing more than taking no steps.

> Doing something daily for a long time is extremely hard to achieve

Oh for real, especially once you factor in force majeure. Hence why I went with "draw at least once every two days". That gives you wiggle room to plan around life events.

Turns out building habits is incredibly hard and no amount of seeking advice will do it for you. It's a slog and you've got to overcome it yourself one way or another.


Q: What’s the smallest step I can take towards my goal?

A: Spend a minute stressing about my goal.


If you aren't any closer to the goal after the step than you were before it, you didn't take a step towards the goal.


It's a double standard because it's apples and oranges.

Code is an abstract way of soldering cables in the correct way so the machine does a thing.

Art eludes definition while asking questions about what it means to be human.


I love that in these discussions every piece of art is always high art and some comment on the human condition, never just grunt-work filler, or some crappy display ad.

Code can be artisanal and beautiful, or it can be plumbing. The same is true for art assets.


Exactly! Europa Universalis is a work of art, and I couldn't care less if the horse that you can get as one of your rulers is AI-gen or not. The art is in the fact that you can get a horse as your ruler.


In this case it's this amazing texture of newspapers on a pole: https://rl.bloat.cat/preview/pre/bn8bzvzd80ye1.jpeg?width=16... Definitely some high art there.


I agree, computer graphics and art were sloppified, copied, and corporatized way before AI, so pulling a Casablanca ("I'm shocked, shocked to find that AI is going on in here!") is just hypocritical and quite annoying.


Yeah this was probably for like a stone texture or something. It "eludes definition while asking questions about what it means to be human".


That's a fun framing. Let me try using it to define art.

Art is an abstract way of manipulating aesthetics so that the person feels or thinks a thing.

Doesn't sound very elusive nor wrong to me, while remaining remarkably similar to your coding definition.

> while asking questions about what it means to be human

I'd argue that's more Philosophy's territory. Art only really goes there to the extent coding does with creativity, which is to say

> the machine does a thing

to the extent a programmer has to first invent this thing. It's a bit like saying my body is a machine that exists to consume water and expel piss. It's not wrong, just you know, proportions and timing.

This isn't to say I classify coding and art as the same thing either. I think one can even say that it is because art speaks to the person while code speaks to the machine, that people are so much more uppity about it. Doesn't really hit the same as the way you framed this though, does it?


Are you telling me that, for example, rock texture used in a wall is "asking questions about what it means to be human"?

If some creator with intentionality uses an AI generated rock texture in a scene where dialogue, events, characters and angles interact to tell a story, the work does not ask questions about what it means to be human anymore because the rock texture was not made by him?

And in the same vein, all code is soldering cables so the machine does a thing? Intentionality of game mechanics represented in code, the technical bits to adhere or work around technical constraints, none of it matters?

Your argument was so bad that it made me reflexively defend Gen AI, a technology that for multiple reasons I think is extremely damaging. Bad rationale is still bad rationale though.


The images Clair Obscur generated hardly "elude definition while asking questions about what it means to be human".

The game is art according to that definition while the individual assets in it are not.


> Art eludes definition while asking questions about what it means to be human.

All art? Those CDs full of clip art from the '90s? The stock assets in Unity? The icons on your computer screen? The designs on your wrapping paper? Some art surely does "[elude] definition while asking questions about what it means to be human", and some is the same uninspired filler that humans have been producing ever since the first teenagers realized they could draw penis graffiti. And everything else is somewhere in between.


You're just someone who can't see the beauty of an elegant algorithm.


Speak for yourself.

I consider some code I write art.


The obfuscated C competition is definitely art


Is anyone else detecting a phase shift in LLM criticism?

Of course you could always find opinion pieces, blogs and nerdy forum comments that disliked AI; but it appears to me that hate for AI gen content is now hitting mainstream contexts, normie contexts. Feels like my grandma may soon have an opinion on this.

No idea what the implications are or even if this is actually something that's happening, but I think it's fascinating


No, AFAICT, AI hate has been common in normie contexts for a while (though not the majority position, and it still isn't).


You’re reading it wrong: rather, AI hype had been common (but not the majority position) in tech contexts for a while, especially from those that have something to sell you.

What you derogatorily call normies are the rest of the world caring about their business until one day some tech wiz came around to say “hey, I have built a machine to replace all of you! Our next goal is to invent something even smarter under our control. Wouldn’t that be neat?” No wonder the average person isn’t really keen on this sort of development.


> You’re reading it wrong

No, I don't think I am.

> AI hype had been common (but not the majority position) in tech contexts for a while, especially from those that have something to sell you.

There's been a whole lot of that for quite a long time targeting normie contexts, too; in fact, the hate in normie contexts is directly responsive to it. The hype in normie contexts is a lot of particularly clumsy grifting plus the nontechnical PR of the big AI vendors (categories which overlap quite a bit, especially in Sam Altman's case), and the hate in normie contexts shows basically zero understanding of what AI even is beyond what could be gleaned from that hype, plus some critical pieces on broad (e.g., total water and energy use, RAM prices) and localized (e.g., fossil fuel power plants in poor neighborhoods directly tied to demand from data centers) economic and environmental impacts.

> What you derogatorily call normies

I am not using “normie” derogatorily, I am using it to contrast to tech contexts.


The most typical reactions I see outside of techie and arty spaces where people are most polarised about it are:

- annoyance at stupid AI features being pushed on them

- Playing around with them like a toy (especially image generation)

- Using them for work (usually writing tasks), with varying degrees of effectiveness, from pretty helpful to actively harmful, depending on how much of a clue they have in the first place.

Discussion or angst about the morality of training or threats to jobs doesn't really enter much into it. I think this apathy is also reflected in how this has not seemingly affected the sales of this game at all in the months that it has been reported on in the video game press. I also think this is informed by how most people using them can fairly plainly see they aren't really a complete replacement for what they actually do.


They don't call normies derogatorily, they just use it as proxy for "non-tech people"

> “hey, I have built a machine to replace all of you! Our next goal is to invent something even smarter under our control. Wouldn’t that be neat?” No wonder the average person isn’t really keen on this sort of development.

Nope, most are just annoyed by AI slop bombarding them at every corner, AI scams making the news for claiming another poor grandma, and the AI tech industry making stuff expensive. Most people's jobs are not under direct threat right now, unless you work in tech or art.


Job replacement and AI slop are both legitimate reasons for people to have negative opinions on AI,

Amongst many other legitimate reasons.


LLMs have had a couple of years by now to show their usefulness, and while hype can drive them for a while, it's now getting to the point where hype alone can't. They need to provide tangible results for people.

If that tangible result doesn't occur, then people will begin to criticize everything. Rightfully so.

I.e., the future of LLMs is now wobbly. That doesn't necessarily mean a phase shift in opinion, but wobbly is a prerequisite for a phase shift.

(Personal opinion at the moment: LLMs need a couple of miracles in the same vein as the discovery/invention of transformers. Otherwise, they won't be able to break through the current fault barrier, which is too low at the moment for anything useful.)


It is fascinating. It's showing of course that AI has gone mainstream.

There was a time that I remember when you could gripe at a party about banner ads showing up on the internet and have a lot of blank stares. Or ask someone for their email address and get a quizzical look.

I pointed my dad to ChatGPT a few days ago and instructed him on how to upload/create an AI image. He was delighted to later show me his AI "American Gothic" version of a photo of him and his current wife. This was all new to him.

The pushback, though, I think is going to be short-lived in the way other push-backs were short-lived. (I remember the self-checkout kiosks in grocery stores were initially a hard sell, as an example.)


How many American Gothic AI fake photos do you think he'll make? Sounds like a novelty experience to me. I also loved my first day in Apple's Vision Pro. It was mind-blowing. On the 4th day I returned it. Novelty wears off, no matter how cool it might seem initially.


Oh, not disagreeing with you. A strange thing has happened in the past where what was novel also became commonplace. Not in all cases, of course (and I personally also believe VR is one of those things that will never become commonplace).


People were told by other people to dislike LLMs and so they did, then told other people themselves.


Just like feminism when it was starting: back then, millions of women believed it was silly for them to vote, and those who believed otherwise had to get loud to win more to their side. That's just one example; similar things have happened with hundreds of other things that we now take for granted. So its value as a measure of judgment is very low by itself.


Ha! You’re actually exactly right.

We've observed this in AI-gen ads (or "creatives", as ad people call them).

They work really well, EXCEPT if there is a comment option next to the ad: if people see others calling the art "AI crap", the click rate drops drastically :)


If I were vegan and found out after the fact that a meal I enjoyed contained animal products, that doesn't mean I'm some hypocrite for consuming it at the time. Whether I enjoyed it or not, it still breaches an ethical standard I hold, and abstaining from it from then on would be the expected outcome.


The same works the other way, and actually a lot better IMO.

Let's imagine a scenario with two identical restaurants with the exact same quality of food.

One sells their dish as a fully vegan option, but doesn't tell the customers.

Hardline "oorah, meat only for me" dude walks in and eats the dish, loves it.

If he goes to the other restaurant and is told beforehand that "sir, this dish is fully vegan", do you think he'd enjoy it as much?

Prejudices steer people's opinions, a lot. Just like people stop enjoying movies and games due to some weird online witch-hunt that might later turn out to be either a complete willful misunderstanding of the whole premise (Ghost in the Shell) or a targeted hate campaign (The Marvels and many, many other movies starring a prominent feminist woman).


Does that make it ethical to mislead people? You are stripping people of agency.


I think that's a hint that people already dislike AI ads on principle but it's good enough now to fool them, and the comment section provides transparency.


yes, having some transparency is terrible for PR


Just as they were told to like them in the first place. A lot of this is driven that way because most of the public only has a surface-level understanding of the issues.


That’s a bad faith argument using weasel words. Do not assume everyone who disagrees with you is an unthinking tool.

https://xkcd.com/610/

Look at how easy it is to make the argument in the other direction:

> People were told by large companies to like LLMs and so they did, then told other people themselves.

Those add nothing to the discussion. Treat others like human beings. Every other person on the planet has an inner life as rich as yours and the same ability to think for themselves (and inability to perceive their own bias) that you do.


LLM hate for use in art has been pretty mainstream from the start. The difference in criticism between use in code generation and use in art generation is palpable. I don't think anyone took kindly to the discourse of movie producers buying actors' likeness rights and keeping perpetually young old actors for all future movies.

Programmers criticized the code output. Artists and art enjoyers criticized cutting out the artist.


It's the usual "I don't like it, I'm against it, but it's okay if I use it" thing. People understand the advantage it gives one person over another, so they will still use it here and there. You'll have some people who are vehemently against it, but it will be the same as with people who are categorically against having smartphones, or who avoid using any Meta products because of tracking and so on.


It's because the amount of AI slop bombarding people from every side has increased and created a knee-jerk reaction to anything AI, even when it's actually the "remove the boring part of work" kind.


The issue with "removing the boring part of work" is that which part of the work is "boring" is subjective. There are going to be plenty of people that don't think that what they do is the "boring stuff that should be automated away." Whether this is genuine enjoyment for what they do or just an attempt to protect their career, both are valid feelings to have.


all news is prophesying that everyone is going to lose their jobs to "AI"

along with news about "AI" causing electricity bills to rise

every form of media is overrun and infested with poor quality slop

garbage products (microsoft copilot) forced on them and told by their bosses to use it, or else

gee I wonder why normal people hate it


The art bubble is generally considered more "normie" than the tech bubble, and it has been strongly anti-AI-art since before even the introduction of the original GitHub Copilot.


Read the other comments in the thread lol- “Fuck artists, we will replace them”

This is not a winning PR move when most normal people are already pretty pro-artist and anti tech bro


It feels like a similar trend to the one that NFTs followed: huge initial hype, stoked up by tech bros and swallowed by a general public lacking a deep understanding, tempered over time as that public learns more of the problematic aspects that detractors publicise.


I think this comparison makes little sense, as in the case of AI there is some actual impactful substance backing the hype.


I don't feel NFTs ever really had much interest among the general public - average reaction just being "I don't get it, that sounds pointless".

Whereas AI seemed to have a pretty good run for around a decade, with lots of positive press around breakthroughs and genuine interest if you showed someone AI Dungeon, DALL-E 2, etc. before it split into polarized topic.


NFTs have far fewer downsides than LLMs and GenAI, since the main downside was just wasted electricity. I didn't have to worry about someone cloning my voice and begging my mom for money over the phone.


If you look at daytime TV in the UK, there are a lot of ads targeting the elderly talking about funeral cover and life assurance and so on.

I for one cannot wait for a future where grandparents get targeted ads showing their grandchildren, urging them to buy some product or service so their loved ones have something to remember them by...


Typical brigading, same with blm, woke, right wing, etc.


Wow you do mentally group things efficiently, that much I can say.


Anything wrong with the grouping? Or you don’t agree that most of those employ extreme brigading tactics to destroy opponents?


[flagged]


He was clearly making an analogy. You don't have to be so oversensitive that any mention of feminism and women blows up into a woke attack in your head.


If you have two modes of spending your time, one being work that you only do because you are paid for it, and the other being feeding into an addiction, the conversations you should be having are not about where to use AI.


I heard a very similar sentiment expressed as "everything is not good enough to be useful until it suddenly is".

I find it a powerful mental model precisely because it is not a statement of success rate or survival rate: Yes, a lot of ideas never break any kind of viability threshold, sure, but every idea that did also started out as laughable, toy-like, and generally shit (not just li-ion batteries, also the wheel, guns, the internet and mobile computers).

It is essentially saying 'current lack of viability is a bad indicator of future death' (at least not any more than the high mortality of new tech in general), I guess.


I was there earlier last year, and it was none too impressive.

Pretty shoddy brick walls (just straight blocks), crumbling at many places, constructed possibly along ancient foundations or maybe not, that you sort of walk through. Interesting things here and there. Couple of other tourists.

Walking through Saddam's palace next to it was much more fascinating; extreme grandeur morphed into a typical lost place with graffiti and empty bottles. The nearby town of Al-Hillah certainly offered more to actually experience, like a mini theme park with the main attraction being (artificial) rain.

Anyways I genuinely wish the committed people all the best in the restoration, but I feel like the article is a tad over-enthusiastic and easily convinced.


This site and the nearby Saddam palace were featured in the recent Michael Palin travelogue on Iraq.

https://www.themichaelpalin.com/watch/#section8

The reconstruction looked very bare and empty in the program. But I guess it is a work in progress.

BTW the Assyrian exhibits in the British museum are amazing and well worth visiting. Yes, I know, colonialism is bad and they probably shouldn't be in London. But I doubt they would be in anything like as good a state if they had been left in their original locations.


> But I doubt they would be in anything like as good a state if they had been left in their original locations.

I get it, but that's the problem with 'good theft': it's still amoral, and well, we all know history and how things actually happened. Inability to even properly acknowledge the fuckups of one's ancestors leaves little room for moving further and learning hard from them, instead of some shallow blah to not stick out of the crowd.


> that's the problem with 'good theft': it's still amoral

You may have meant immoral. But amoral puts it better.

From the perspective of common heritage, it’s better at the British Museum. From the perspective of ethnonational self determination, it should be returned to its origin even if that means its destruction.

I personally tend towards the latter for newer artefacts and the former for older ones. (The logic for the people living somewhere today having exclusive domain over something made millennia earlier falls apart if the present occupants may be barely more related to those forerunners than someone on another continent.)


But everything old was new at some point. If it can’t survive it’s youth then there is no real dilemma.


> everything old was new at some point

We live in the present. In the present, some things are old and others are new.


And an inability to acknowledge the ongoing fuckups of strangers (e.g., the Taliban and ISIS destruction of archaeologically significant sites) that very well could have resulted in the same fate if the artefacts had been left in situ. None of us knows what would have actually happened, but at least standing out of the crowd earns some points.


I do agree that this is a complex topic but it is worth noting that the looting of Iraq was done by the British both times, making the argument a little circular.

Like good thing the British saved the relics from what might have happened from the aftermath of the invasion the British also engineered.


Not to mention that both Taliban and ISIS are Western-funded and supported terrorist groups.


Interesting idea. Does it include any adaptive learning/spaced repetition mechanism? (I don't have an ios device, so I can't test)


I think there really, really has to be a demo (at least screenshot mocks).

How will you collect the data (about me)? How much? How much more individual will this be compared to just subscribing to my city's newspaper (or a trade magazine, if we're talking job-related news)?

This may be just me, but I also think collecting emails here is a really weird first step — most news services provide at least a blurb of each article w/o any kind of signup, so this broke my expectation in a "what do you need my email for" kind of way.

Hope that helps.


Note that this says "best programmers" not "people best at having business impact by making software".

I wonder about this often: If you want to have impact/solve problems/make money, not just optimize how fast you kill your JIRA tickets, should you invest a given hour into understanding the lowest code layer of framework X, or into talking to people in the business domain? Read documentation, or a book on accessibility in embedded systems? Pick up yet another tech stack, or simply get faster at the one you have that is "good enough"?

Not easy to answer, but worth keeping in mind that there is more to programming than just programming.


> Note that this says "best programmers" not "people best at having business impact by making software".

We can look at a software developer as a craftsperson, and appreciate their skill and their craft, and we can look at them as a business asset, and those are two different things.

Both of which have their merits, but this article is clearly focused on the craftsperson side and that's enough. We don't need to make everything about business and money, and we definitely don't need to reduce the beauty and craft of writing code to Jira tickets.


I’m retired, these days, and spend the majority of every day, writing software.

I treat it as a craft, and do it for personal fulfillment and learning. I enjoy learning, and solving problems. I also enjoy creating stuff that I find aesthetically pleasing.

For example, I write iOS apps, and I’m working on a new version of a timer app that I’ve had in the App Store, for over a decade. I had added a Watch app to it, and had gotten to the point where it was ready for the App Store, but I kept having sync issues. It wasn’t a “showstopper,” but it was aesthetically not pleasing.

I determined that it was an issue that could be addressed by improving the fundamental design of the app, which had basically been constant for many years.

So I'm rewriting it completely.

That’s not something that makes “commercial” sense, but it’s what I want to do. I’ll also take the opportunity to redesign the basic UI.

I enjoy having that kind of freedom.

I also like to write about my work. I know that very few people are interested in reading it, but I do it, because it helps me to learn (the best way to learn, is to teach), and it helps me to focus my thoughts.


Thank you for sharing. I love hearing stories of these kinds of "software gardens". I wish you many years of joy tending yours.


Life goals! I'd love to do this if I ever retire.


Generally we aren't paid for our business expertise. In fact, most businesses actively resist giving developers deep domain responsibility.

This is manifest in management methodologies: developers are largely interchangeable cells in a spreadsheet. I'm not saying this is a good thing.

The reasons for this are complex, but generally, business people want us to solve the technical problems they can't handle themselves, they don't want us to "relieve" them of product management, customer relationships, and industry knowledge. Why would they? It would devalue them.


Yep. A developer with "business impact" might be seen as a liability.

One aspect might be that a developer who engages in "business" effectively stops being "subordinate". Management decisions need to be justified on a different level to maintain legitimacy.


This thread is kind of wild and something I've never heard anywhere in tech. Every place I've worked would consider a developer at least 5X more valuable if they actually had business or product sense, and could operate without the need for constant symbiosis with a "product guy". At one BigTech company we've all heard of, our division didn't even have product people. The engineering lead was expected to handle all of the product duties, and they wouldn't hire you if they didn't think you could at least grow into that role.

It's one of the reasons I went back for a business degree and then re-entered tech. No, of course nobody in Silicon Valley cares about the "MBA" title (HN sees it as a negative), but everywhere I've interviewed/worked they've appreciated that we could talk about the economic and business impact of the software, and not just the algorithms and data structures.


That sounds great, and I would advise you to value it. Most tech companies are not this forward thinking (often to their own detriment)


The result of that top-down management style is buggy code, blown deadlines, security holes, and the slowest software development I've ever seen.

I've found it possible to migrate to a less top-down Desert style just by finding executives who are frustrated by those problems and saying, "I have an idea I've seen help" and then getting the team together and saying, "hey, it turns out the executives would like us to write software well. What should we try first?"

Product has plenty of work remaining: they should be handling whatever subset of strategy, prioritization, analytics, BI, QA, facilitation, design and contracts that they have the skills for. But it requires engineers to actually collaborate with them as a peer, rather than engage in power struggles, and that requires everyone on the team to understand what we are building, for whom, and why.


> Generally we aren't paid for our business expertise. In fact, most businesses actively resist giving developers deep domain responsibility.

Chicken/egg imho.


I'm somewhat puzzled as to why so many devs are insistent that being a good developer means you need to be a good PM.

These roles require wildly different skills and knowledge.

Usually the outcomes are better if you combine two people who are good at their jobs rather than hoping one person can do it all.


Good engineering skills are transferable to being a good PM: you need to break down problems, scope them to fit a particular time allotment, estimate an effect on user experience (stats, tracking, what is a good signal and what isn't) and have the agility to react to changing requirements as things are getting built.

Why does it make sense for them to be a single person? Often, "changing requirements" really comes from an engineer learning new things (this framework does not provide this, this external dep is going to be late, I'd need to learn 2 new things so will need more time...), and really, an engineer is the first one who'll know of some of the challenges and what's even feasible!

Now, the skills an engineer needs to develop to be a good PM are good communication, the ability to document things at the right level, and lots of empathy for the customer and the business person (so they can "walk in their shoes"). Arguably, all things that will make a great engineer even better.

I've been in teams where we've had a very senior, experienced PM tell us that he was looking for another position in the company because our team did not need him: we already did the stuff he was hired to do. That was the sign of a great PM who did not try to actively wrestle control out of our hands when the team was chugging along just fine.


Breaking down problems (vertical slicing) isn't inherently a dev skill. Insofar as it is a transferable skill, it's more of a life skill.

Scoping tickets is more of a project management skill. Again, not a dev skill.

Estimating effect on user experience - requires empathy, again not a dev skill.

If you redefine the dev job as including PM skills then sure, PM skills are dev skills.

But they're not.

>Why does it make sense for them to be a single person? Often, "changing requirements" really comes from an engineer learning new things

So? Happens to me too. I can tell the PM the things I learned. That's a hell of a lot easier than managing all stakeholder interactions, empathizing with their demands, and balancing them.

It only really makes sense to combine the two roles if the project is inherently very straightforward, a salary can be saved, and the person doing both roles is sufficiently qualified for both.


For any given ticket, unless you remove all creativity for an engineer, they will have to balance how deep and how wide they go, how much they refactor, and how much they skip, in order to be effective and not end up with a contraption that's extremely high risk to review and roll out to production: all of that is scoping.

If you are not doing that, you are being micromanaged and I feel for you in your engineering job.

And trust me, non-technical PMs are ill-equipped to figure out an incremental path to that North Star product or feature you want to ship — how you split branches and deliver value incrementally is something only a good engineer can do (well).

If you do not consider how an implementation will affect the user, you might just satisfy the requirement with an actually terrible experience (but the ticket never said it needs to load in under 30s and with no visible redraws and jumping elements): a good engineer will implicitly consider all of these, even if unspecified in a task (and many more, I only used an outrageous example to make a point).

Breaking down problems is certainly a life skill, but engineers are inherently good at it: it's the very definition of an engineer, and you can't be one without it. I have however seen PMs who mostly channel and aggregate customer experiences and stakeholder requests without an ability to consider (broken down, stepwise, incremental) paths to completion.

If you are good at all of these, you'd likely be a good engineer too: this does not mean that one can't be good at PM without being an engineer, just that a great engineer is very close to being a great PM too.

I am not against the division of labour and different motivations driving where each person invests their time, but if we are talking about a great either PM or engineer, they are pretty much of the same mindset with focus on different parts of the job that needs to be done — 90/10 split vs 10/90 split (and anything in between).

And finally, being a great craftsman at engineering (or PMing) is slightly different from being a great engineer.


The latter are the ones that get promoted to senior staff+, or more likely become directors/VPs.

There is a very low cap on career growth if you are purely focused on programming.

So yes, if you want to climb the corporate ladder or run your own business, programming is a fraction of the skills required.

I think though it's okay to just focus on coding. It's fun and why many of us got into the industry. Not everyone likes the business side of things and that's okay.


I don't know. Career plans aside, to me, making software is a means to an end.

There is no inherent value to producing software, as there may be in producing car tires or bananas. The best software is no software.

And then who is the better programmer, the one who knows more about how to make software, or the one who knows more about what software to make?


Software is a craft.

There is an inherent value in programming, just like there is one in gardening, woodworking, producing art, or playing a musical instrument.

The value is in the joy that the activity brings. (Note that this tends to be a different kind of value than business value.)


To me, cars are a means to an end. And I can imagine a world without cars more easily than a world without software.

Do you imagine that we just somehow evolve capabilities beyond it? or do we eventually produce universally perfect software solutions and leave it at that?


It's not really about that.

If I hire you to make software for me, I don't really want software; I want a problem to go away, a money stream built, a client to be happy. Of course, that probably requires you to build software, unless you invent a magic wand. But if you had the magic wand, I'd choose it every single time over software.

Not so with food, furniture or a fancy hotel, where I actually want the thing.


If I had a magic wand to make you satiated, you wouldn't need food. If you're in it for the taste I will magic wand you some taste in your mouth. If I had a magic wand to give you a roof over your head, protection and a bed, you wouldn't need a hotel.

The magic wand argument doesn't make sense. Then you can also get everything else.


>The best software is no software.

Eh, I disagree. I like a lot of the software I'm using. There's inherent value to producing music with Ableton, cutting videos with Final Cut Pro, or just playing Super Mario for entertainment. Those are all more software than no software.


You could argue that GenAI music creation is "no software". You say what you want and it magically appears.


> worth keeping in mind that there is more to programming than just programming.

As a side note, this is what I keep pointing out when people talk about code generated by LLMs. As an activity, this is just one thing that programmers do.

I think the answer to your question (a good question indeed) is "both", or rather to balance development of both capabilities. The decision of how to spend time won't be a single decision but is repeated often through the years. The Staff+ engineers with whom I work _mostly_ excel at both aspects, with a small handful being technical specialists. I haven't encountered any who have deep domain knowledge but limited technical depth.

(edit: formatting)


Not to be snarky, but my definition of best programmers would balance these. I do spend more time than most understanding the depths of tech stacks and the breadth of potentially useful ones. At the same time I strive to make software that gets the job done with only the abstractions that pay off in a short to medium timeframe.

The trap to avoid is the business-impact folks who demonstrate an unwillingness to get better at actual programming, which ironically would increase their impact.

Edit: an example is fixing a problem without understanding its cause.


> should you invest a given hour into understanding the lowest code layer of framework X, or talk to people in the business domain?

I think talking to people in the business domain is the most important thing you can do in SWE or IT in general. The business is the entire reason you write every line of code; the more you understand it, the better you will be at your job.

I do find drilling down into lower layers of your software stack helpful, and can make you a better programmer, but in a much more specific way.

> Pick up yet another tech stack or simply get faster at the one you have that is "good enough"?

Both of these are programming skills and less important, IMO. Trends and technologies come and go; if they're useful/sticky enough, you'll end up having to learn them in the course of your job anyway. Tech that's so good/sticky it sticks around (e.g. react) you'll naturally end up working with a lot and will learn it as you go.

It's definitely good to have a solid understanding of the core of things though. So for react, really make sure you understand how useState, useEffect work inside and out. For Java it'll be other things.
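To make the useState point concrete, here is a toy, hypothetical sketch of how hook state can be stored (this is not React's actual implementation, and real setState schedules a re-render instead of mutating in place): state lives in an array indexed by call order, with a cursor reset before each render, which is exactly why hooks must not be called conditionally.

```javascript
// Toy model of hook state storage. State lives in an array indexed by
// call order; the cursor resets before every render, so hooks must be
// called in the same order each time.
const hookStates = [];
let hookIndex = 0;

function useState(initial) {
  const i = hookIndex++; // this hook's slot = its call-order position
  if (hookStates[i] === undefined) hookStates[i] = initial;
  const setState = (value) => { hookStates[i] = value; };
  return [hookStates[i], setState];
}

function render(component) {
  hookIndex = 0; // reset the cursor before each render
  return component();
}

// A component that bumps its own counter on every render
function Counter() {
  const [count, setCount] = useState(0);
  setCount(count + 1);
  return count;
}

render(Counter); // -> 0 on the first render
render(Counter); // -> 1: the state persisted across renders
```

Once you've internalized a model like this, behaviors like "stale closures" and the rules of hooks stop feeling arbitrary.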


> The business is the entire reason you write every line of code

It's actually not the entire reason i write or have written every line of code.

It may be surprising to some people on this website for entrepreneurs but there are in fact people who enjoy writing code for the sake of it.


Ikigai applies to writing code as well.


What's most interesting to me about your point compared to parent comment's is that you're saying "statically, over all time, the most valuable thing to do among all your choices is to understand the business," whereas the parent is saying "dynamically, in this moment, what is most valuable thing to do in this iteration of my actions?"

I think your question is most interesting in terms of long term skill mix or "skill portfolio" a.k.a the career viewpoint, while the parent's is more interesting on a day-to-day basis as you navigate the state space of bringing a project to completion. On a given day, understanding the business may not be the most valuable thing to do, but to your point over the course of a job or career it probably is.

(For example, I can say that I already have sufficient business context to do my programming task for tomorrow. Asking more questions about the business would be wasteful: I need to go update the batch job to achieve the business outcome.)

EDIT: I might go one step further and say the most valuable skill is not understanding the business but understanding how to match and adapt technologies to the business (assuming you want a career as a programmer). Ultimately the business drives income, but presumably you have a job because their business requires technology. So the most valuable skill is, as efficiently as possible, making the technology do what the business needs. That's more of a balance / fit between the two than just "understanding the business."


I am really interested in reading articles about "people best at having business impact by making software"; so far I've only discovered resources like kalzumeus (patio11), indiehackers, microconf, and the Business of Software forum (very old).


Read everything DHH wrote, he has lots of insights on this. Scroll down to his books: https://dhh.dk

I found Lean Startup to be very good too.


You can be the “best” programmer at home/internet, and just another employee at work.


Interesting idea!

I think this could really benefit from lazy signup, a demo directly on the landing page, or at least way more screenshots.

Given that making a checklist in my notebook or established organization app of choice is almost zero friction, signing up or downloading an app feels like asking a lot before the benefit is clearly visible.

Good luck with this :)


Hey, thanks for the reply!

We already have lazy sign up, but I'll definitely consider adding a demo and/or more screenshots.

