Hacker News | dps's comments

Yeah, I really should have included <> :-)

Fun to see this post from the deep archive get some interest - thanks for reading!


Author here… I wrote it; I used Claude for proofreading/editing, as mentioned at the end. Anyway, point is: real human here!

I do still read the code _except_ when I am consciously vibe coding a non production thing where I will know empirically that it worked or not by using it.

I’m definitely not using agents to do all my coding (as I hope is reasonably clear from the post). But in my experience, just in the last couple of months, they have crossed the line from not worth trying to genuinely useful for many real-world problems.


Author of the piece here :-). We are not building coding agents and focused on quite different stuff… I am just trying to share my personal experience as a software person!


Absolutely — but I also think there’s a strong resistance to managers saying “AI is good, really”.

The experience of long-term software engineers (e.g. antirez) who don’t have a horse in the AI race tends to line up much better with my own.

Also really like this one: https://diwank.space/field-notes-from-shipping-real-code-wit...


~~Counter~~ add to that - Armin Ronacher[0] (Flask, Sentry et al.), Charlie Marsh[1] (ruff, uv) and Jarred Sumner[2] (Bun), amongst others, are tweeting extensively about their positive experiences with LLM-driven development.

My experience matches theirs - Claude Code is absolutely phenomenal, as is the Cursor tab completion model and the new memory feature.

[0] https://x.com/mitsuhiko

[1] https://x.com/charliermarsh

[2] https://x.com/jarredsumner


Not a counter — antirez is posting positive things too.

Charlie Marsh seems to have much better luck writing Rust with Claude than I have. Claude has been great for TypeScript changes and build scripts, but lousy when it comes to stuff like Rust borrowing.


Apologies I misread! Updated.

I'll add - they do seem to do better with Go and TypeScript (particularly Next and React) and are somewhat good with Python (although you need a clean project structure with nothing magic in it).


This one seems really sloppy and confused; he describes three "modes of vibe coding" that involve looking at the code and therefore aren't vibe coding at all, as the definition he quoted immediately previously from Karpathy makes clear. Maybe he's writing his code by hand and letting Claude write his blog posts.


Not OP, and I don't have a specific stake in any AI companies, but IMHO (as someone doing web-related things for a living since 1998, as a developer, team lead, "architect", product manager, consultant, and manager), pretty much all of us have skin in the game, whether or not we back a particular horse.


Really depends what you believe.

If you believe that agents will replace software developers like me in the near term, then you’d think I have a horse in this race.

But I don’t believe that.

My company pays for Cursor and so do I, and I’m using it with all the latest models. For my main job, writing code in a vast codebase with internal frameworks everywhere, it’s reasonably useless.

For much smaller codebases it’s much better, and it’s excellent for greenfield work.

But greenfield work isn’t where most of the money and time is spent.

There’s an assumption the tools will get much better. There are several ways they could be better (e.g. plugging into typecheckers to enable global reasoning about a codebase) but even then they’re not in replacement territory.

I listen to people like Yann LeCun and Demis Hassabis, who believe further as-yet-unknown innovations are needed before we can escape the local maximum we have with LLMs.


You need to use better coding agents and workflows.


Long-term software engineers do have an anti-horse in the AI race - a lot of us eventually could be replaced by a coding agent.


Most of us have been replaced by Microsoft Excel already though. Or by a compiler.


very true - for many tasks excel is enough, better, and faster


Long term software engineers very much have a horse in the AI race. It threatens their jobs and importance.


Took me a while to find this [1]: "We’re building the next-gen operating system for AI agents."

--

1: https://sdsa.ai/


Always super helpful to post more guidelines on how to use LLMs more effectively!


Author of the post here… Cool to see this back on HN! I was trying to provide instructions that anyone could use regardless of platform, hence the choice of web tools (both those linked process the data locally). If you know of a base32 decoder that’s easily available on Windows, Mac and Linux I’d be delighted to update the post.


WSL means Unix command-line tools are available on Windows as well these days.


OpenSSL, base32, basez, a C program, a Python or Lua script? I have a Lua script that generates TOTP (with base32 decoding), for example. What are your requirements - would any of these suffice?
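As a cross-platform option, Python's standard library alone can do it - a minimal sketch (the padding fix-up assumes RFC 4648 base32, where unpadded input is common):

```python
import base64

# Decode a base32 string (RFC 4648) using only the Python standard library,
# which ships on Windows, macOS and Linux alike.
def b32_decode(s: str) -> bytes:
    s = s.strip().upper()
    s += "=" * (-len(s) % 8)  # restore padding that tools often omit
    return base64.b32decode(s)

print(b32_decode("JBSWY3DPEB3W64TMMQ"))  # b'Hello world'
```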


admin/admin worked for me


Thanks! Working now.


>This year’s Advent of Code has been brutal (compare the stats of 2023 with that of 2022, especially day 1 part 1 vs. day 1 part 2).

I enjoyed completing AoC this year. While it was very clear that day 1 (esp. part 2) was significantly harder than in previous years (I wrote about this among other things [0]), OP's claim wasn't obviously self-evident when comparing _current_ 2022 stats to _current_ 2023 stats, since folks have had an additional full year to complete the 2022 puzzles.

I grabbed the 2022 stats from Jan 14th 2023 [1] and, indeed, the difference is quite stark. Graphing the part two completion stats[2] for both years, there was a relatively similar starting cohort size on day 1, but 2023 looks clearly harder than 2022 up until day 15. As OP observes, the ratio[3] of folks completing part 1 but not going on to complete part 2 is way higher for a lot of days in 2023, and suggests that part 2 of days 5, 10, 12 and especially day 22 was particularly difficult.

[0] https://blog.singleton.io/posts/2024-01-02-advent-of-code-20...

[1] https://web.archive.org/web/20230114172513/https://adventofc...

[2] https://blog.singleton.io/static/imgs-aoc23/completion.png

[3] https://blog.singleton.io/static/imgs-aoc23/ratios.png


Early AoC was fun, you could get away without anything fancy until late in the game. Then it got harder, not fun, so I gave up and stopped touching it.


I didn't get very far into AoC this year as I ran out of time. Maybe I'll pick it up again later.

But my point is, I was surprised at how hard day 5, part 2 was. I didn't give up and solved it, but went away wondering whether I'd missed something obvious and overcomplicated it. So it brings some relief to know it was "supposed" to be a bit challenging!


This was just my personal experience (which certainly came from trying out a different language than I typically use in my day to day), but I'd argue that day 1 part 2 wasn't _hard_, but improperly specified from the prompt. The examples given are:

  two1nine
  eightwothree
  abcone2threexyz
  xtwone3four
  4nineeightseven2
  zoneight234
  7pqrstsixteen
There is one critical example missing from this set, and you can't really tell how you're meant to handle overlapping values without an example like:

  oneight
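One way to handle overlaps like that is to scan every position rather than doing substitutions - a sketch (not the official solution), so "oneight" yields both 1 and 8:

```python
# AoC 2023 day 1 part 2: first and last digit per line, where digits may be
# spelled out and spelled-out words may overlap ("oneight" -> 1 and 8).
WORDS = {w: i for i, w in enumerate(
    ["one", "two", "three", "four", "five", "six", "seven", "eight", "nine"], 1)}

def calibration(line: str) -> int:
    digits = []
    for i, ch in enumerate(line):
        if ch.isdigit():
            digits.append(int(ch))
        else:
            # Checking every start position handles overlapping words,
            # which a naive find-and-replace would miss.
            for word, value in WORDS.items():
                if line.startswith(word, i):
                    digits.append(value)
    return digits[0] * 10 + digits[-1]

print(calibration("oneight"))      # 18
print(calibration("xtwone3four"))  # 24
```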


Thanks for the details. To add to this discussion, I have a script to see the progression over the days.

Looking at the last two columns, you can see how brutal 2023 was compared to 2022, especially in the beginning. In 2022, most people kept playing through the first few days, with retention higher than 80% on most days, and virtually everyone solved both parts. In contrast, in 2023 only 76% of people solved day 1 part 2 after solving part 1, and many people gave up on days 3 and 5.

Interestingly, the numbers for the last few days are not that much lower, and that can be explained by the fact that AoC 2023 is more recent than AoC 2022, like you said. My interpretation is that this group of people will get through all the challenges regardless of the difficulty (to an extent, of course), while many other people give up when they realize it will take too much of their time.

    Stats for year 2022 of Advent of Code
    -------------------------------------
    
    Day   Both puzzles   One puzzle       Total   Rel. puzzle 1/2   Rel. day before
      1        280,838       15,047     295,885              95 %             100 %
      2        232,752       12,403     245,155              95 %              83 %
      3        200,016       11,392     211,408              95 %              86 %
      4        184,435        3,734     188,169              98 %              92 %
      5        157,392        3,116     160,508              98 %              85 %
      6        155,921        1,602     157,523              99 %              99 %
      7        113,241        2,592     115,833              98 %              73 %
      8        107,224        7,659     114,883              93 %              95 %
      9         82,414       11,449      93,863              88 %              77 %
     10         85,075        5,511      90,586              94 %             103 %
     11         68,838        9,258      78,096              88 %              81 %
     12         59,253        1,061      60,314              98 %              86 %
     13         51,512        1,220      52,732              98 %              87 %
     14         49,051          991      50,042              98 %              95 %
     15         39,677        5,773      45,450              87 %              81 %
     16         23,298        5,650      28,948              80 %              59 %
     17         21,525        6,237      27,762              78 %              92 %
     18         25,420        4,927      30,347              84 %             118 %
     19         17,516          928      18,444              95 %              69 %
     20         22,141        1,003      23,144              96 %             126 %
     21         23,022        3,060      26,082              88 %             104 %
     22         15,393        5,083      20,476              75 %              67 %
     23         18,531          254      18,785              99 %             120 %
     24         16,419          252      16,671              98 %              89 %
     25         13,192        7,473      20,665              64 %              80 %
    Stats for year 2023 of Advent of Code
    -------------------------------------
    
    Day   Both puzzles   One puzzle       Total   Rel. puzzle 1/2   Rel. day before
      1        230,737       73,941     304,678              76 %             100 %
      2        196,352        9,256     205,608              95 %              85 %
      3        130,406       19,913     150,319              87 %              66 %
      4        130,271       17,691     147,962              88 %             100 %
      5         80,255       31,029     111,284              72 %              62 %
      6        103,358        1,918     105,276              98 %             129 %
      7         81,905        7,308      89,213              92 %              79 %
      8         74,034       14,707      88,741              83 %              90 %
      9         76,438        1,229      77,667              98 %             103 %
     10         48,313       17,054      65,367              74 %              63 %
     11         57,339        2,386      59,725              96 %             119 %
     12         30,985       14,440      45,425              68 %              54 %
     13         38,217        5,223      43,440              88 %             123 %
     14         36,500        7,457      43,957              83 %              96 %
     15         40,881        4,156      45,037              91 %             112 %
     16         35,347        1,023      36,370              97 %              86 %
     17         24,014        1,097      25,111              96 %              68 %
     18         24,799        4,937      29,736              83 %             103 %
     19         22,525        7,197      29,722              76 %              91 %
     20         18,287        4,398      22,685              81 %              81 %
     21         14,311       10,149      24,460              59 %              78 %
     22         15,830          988      16,818              94 %             111 %
     23         14,562        2,964      17,526              83 %              92 %
     24         11,864        4,918      16,782              71 %              81 %
     25         10,522        3,048      13,570              78 %              89 %
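For anyone curious how the two ratio columns fall out of the raw counts, here's a hypothetical sketch (not the actual stats script) that reproduces them from per-day (both, one-only) completion figures:

```python
# "Rel. puzzle 1/2": share of players who finished both parts among all who
# finished at least part 1 that day.
# "Rel. day before": this day's both-parts count vs. the previous day's.
def ratios(days):
    """days: list of (both_parts, one_part_only) tuples, one per day."""
    rows = []
    prev_both = None
    for both, one_only in days:
        total = both + one_only
        rel_12 = round(100 * both / total)
        rel_prev = 100 if prev_both is None else round(100 * both / prev_both)
        rows.append((total, rel_12, rel_prev))
        prev_both = both
    return rows

# Days 1-2 of AoC 2023 from the table above: 76 % solved both parts on day 1.
print(ratios([(230737, 73941), (196352, 9256)]))
# [(304678, 76, 100), (205608, 95, 85)]
```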


I've been making wine at home in California for the past few years. Finding grapes on Craigslist/via friends, picking them myself and fermenting/storing/bottling wine in my basement. It's great fun, and tastes pretty good too! I wrote up a guide for anyone who'd like to try here - https://wine.singleton.io/


I added an 'almost' caveat to the text - thanks for the note.


Hi folks, article author here. I just wanted to stop by to say I'm delighted so many folks enjoyed the piece. I really did have a lot of fun :-)


Great article! Is there any chance of bringing Rust into your place of work? :)


(Stripe CTO here)

That's a reasonable question. We wrote this RCA to help our users understand what had happened and to help inform their own response efforts. Because a large absolute number of requests with stateful consequences (including e.g. moving money IRL) succeeded during the event, we wanted to avoid customers believing that retrying all requests would be necessarily safe. For example, users (if they don’t use idempotency keys in our API) who simply decided to re-charge all orders in their database during the event might inadvertently double charge some of their customers. We hear you on the transparency point, though, and will likely describe events of similar magnitude as an "outage" in the future - thank you for the feedback.


And thank you for the answer and for being open to outsider input.

