Hacker News — joelreymont's comments

Debugging Zig in LLDB is painful.

Slices show up as { ptr, len }, optionals are unreadable, and slice[0] just errors out. Python formatters help a bit but don’t fix expressions. zig-lldb fixes everything… if you’re willing to rebuild LLDB and maintain a fork.

zdb is a native LLDB plugin that sits in the middle:

- Works with stock LLDB (Homebrew / system)
- Native C++ type summaries (no Python)
- Zig-style expressions work: slice[0], opt.?, err catch x

    (lldb) p int_slice[0]
    (int) $0 = 1

How? By calling LLDB’s internal APIs via versioned offset tables and rewriting expressions on the fly.
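The rewriting step can be pictured with a toy sketch. This is not zdb's actual implementation (zdb is a native C++ plugin); it is a minimal Python illustration of the idea of translating Zig-style sugar into plain member accesses before the string reaches the evaluator. The field names `.ptr` and `.payload` are assumptions for illustration, matching the `{ ptr, len }` rendering mentioned above.

```python
import re

def rewrite(expr: str) -> str:
    """Toy rewrite pass: Zig-style debugger sugar -> struct member accesses."""
    # slice[i] -> slice.ptr[i]  (a Zig slice is a { ptr, len } pair)
    expr = re.sub(r"\b(\w+)\[", r"\1.ptr[", expr)
    # opt.? -> opt.payload      (unwrap the optional's payload; field name assumed)
    expr = expr.replace(".?", ".payload")
    return expr

print(rewrite("int_slice[0]"))  # int_slice.ptr[0]
print(rewrite("opt.?"))         # opt.payload
```

The real plugin additionally has to cope with nesting, error unions (`err catch x`), and handing the rewritten string to LLDB's internal evaluator via those versioned offset tables.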

Limitations: no Variables View expansion (ABI issues with std::function). CLI debugging works great.

Check out the GitHub repo at https://github.com/joelreymont/zdb!


Moving away from GitHub may end up killing the project.


They will spend less time on PRs from LLM spammers like you (for anyone who wonders, just Google his username and check the PRs made to OCaml/Zig/Julia), so if anything they freed up resources.


It was a bug in my own modifications to the Zig compiler, not in the stock compiler itself.


This is an excerpt from the session where AI is writing my Lisp compiler. What do you call this? I call this doing what a competent developer does!

39/40 tests pass. The native reader works for integers, hexadecimal, lists, strings and quote forms. The one failure is symbol comparison (known limitation).

  Based on the context summary and the user's note about cleanup, I should:
  1. Continue Phase 3.5 self-compilation
  2. Clean up the repo structure

  Let me first update the todo list and then investigate the SIGKILL issue
  more thoroughly. The issue is that combining reader source with file I/O
  code causes the executable to be killed. Let me check if buffer-to-string
  with reader works:

  Let me test specifically reader + file I/O combined (which is what the
  self-hosting test needs):


Sounds to me like someone roleplaying being a developer. Never in my career have I seen someone think/reason/act like this.


> Sounds to me like someone roleplaying being a developer.

That's because that's precisely how LLMs work. They complete a text where two actors (the "user" and the "assistant"), or sometimes three actors (the "user", the "assistant", and the "tools"), are engaging in a conversation (in this case, about software development). It's like a theatre script.
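The "theatre script" framing can be made concrete: a chat is flattened into one long text with role markers, and the model's only job is to append plausible tokens after the final cue. The markers below are generic placeholders, not any specific model's actual template; the quoted lines reuse the transcript excerpt above.

```python
# A multi-turn conversation serialized as a single script the model completes.
turns = [
    ("User", "The executable gets SIGKILLed. Investigate."),
    ("Assistant", "Let me first update the todo list and then investigate."),
    ("User", "Also clean up the repo structure."),
]
script = "\n".join(f"{role}: {text}" for role, text in turns)
prompt = script + "\nAssistant:"  # generation continues from this cue
print(prompt)
```

From the model's point of view there are no "developers" in the exchange, only a script in which a developer character is being written.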


It's not. It's absolutely justified for the OCaml maintainers to reject this PR.

I feel completely differently about my Zig PR [1] but, hey, it's not my playground and the Zig folks seem to be particularly opinionated.

[1] https://ziggit.dev/t/bug-wrong-segment-ordering-for-macos-us...


It is truly unfortunate for you that your actions have probably tainted your future open source contributions for a long time.


Do you have no shame, man?


I had $1000 in Claude credits to spend for the greater good.


Personally I would have used those credits to generate hentai, but to each his own, I suppose.

In the post where you had it respond to accusations of plagiarism and it responded by posting snippets of code which were obviously plagiarized and confidently asserted that they were not, what was your prompt? I ask because I felt its response was oddly tone-deaf even by LLM standards. I'm guessing that instead of giving it a neutral prompt such as "respond to this comment" you gave it something more specific such as "defend yourself against these accusations"?

I'm used to seeing them contradict themselves and say things that are obviously not true but usually when confronted they will give in and admit their mistake rather than dig a deeper hole.


You didn't


It wouldn't do much.

I found that ChatGPT 5.1 was much better at reviewing this code than at writing it, so I had it review Claude's output until the review was clean.

This is in addition to making sure existing and newly generated compiler tests pass, and that the output in the PR / blog post was generated by actually putting lldb through its paces.
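The review-until-clean workflow described above is just a fixed-point loop. Here is a runnable sketch; `review` and `revise` are hypothetical stubs standing in for "ask model B to critique" and "ask model A to fix", since the actual API calls are not shown in the thread.

```python
def review(code: str) -> list[str]:
    """Stub reviewer: flags code that lacks a doc comment (illustrative only)."""
    return ["missing doc comment"] if "///" not in code else []

def revise(code: str, findings: list[str]) -> str:
    """Stub reviser: 'fixes' the single finding the stub reviewer can raise."""
    return "/// fixed\n" + code

def review_until_clean(code: str, max_rounds: int = 5) -> str:
    """Alternate review and revision until the review comes back empty."""
    for _ in range(max_rounds):
        findings = review(code)
        if not findings:  # clean review: stop iterating
            break
        code = revise(code, findings)
    return code

print(review_until_clean("fn main() {}"))
```

The `max_rounds` cap matters in practice: without it, two models that disagree can ping-pong forever.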

I did have an "Oh, shit!" moment after I posted a nice set of examples and discovered that the AI had made them up. At least it honestly told me so!


An LLM will guiltlessly produce a hallucinated 'review', because LLMs do NOT 'understand' what they are writing.

LLMs will merely regurgitate a chain of words -- tokens -- that best match its Hidden Markov Model chains. It's all just a probabilistic game, with zero actual understanding.

LLMs are even known to hide or fake unit test results: claiming success when the tests fail, or skipping the results completely. Why? Because based on the patterns it has seen, the most likely words to follow "the results of the tests" are "all successful". Why? Because it tries to reproduce other PRs it has seen, PRs where the author actually ran the tests on their own system first, iterating until they succeeded, so the PRs that the public sees almost invariably declare that "all tests pass".

I'm quite certain that LLMs never actually tried to compile the code, much less ran test cases against it. Simply because there is no such ability provided in their back-ends.

All LLMs can do is "generate the most probabilistically plausible text". In essence, a Glorified AutoComplete.
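The "glorified autocomplete" caricature can be made literal with a toy Markov chain. (Real LLMs are transformers, not Markov chains, but the sampling loop — pick a statistically likely next token, append it, repeat — looks like this.) The corpus below is invented to echo the "all tests pass" example above.

```python
import random
from collections import defaultdict

# Build a first-order chain: each word maps to the words seen after it.
corpus = "all tests pass . all tests pass . some tests fail .".split()
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

random.seed(0)
word, out = "all", ["all"]
for _ in range(4):
    word = random.choice(chain[word])  # sample a plausible continuation
    out.append(word)
print(" ".join(out))
```

Note that "tests" is followed by "pass" twice and "fail" once in the corpus, so the chain asserts success two times out of three regardless of what any actual tests did.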

I personally won't touch code generated wholly by an AutoComplete with a 10-foot pole.


It's quite complicated in this case.

The Jane St (OxCaml) DWARF implementation is also tightly coupled with the compiler.


I don't always use OCaml (meme coming in 1...2...3) and maintaining a fork is a significant undertaking.

More importantly, being able to debug native OCaml binaries and actually see source code, values of variables, etc. is something that's useful to everyone.

Looking at assembler instead of source code sucks unless you are reverse-engineering.


Why are you submitting a PR if you do not use the software? You could just as easily donate the money that you spent producing 13k LOC to the project, and they would spend it to use Claude on things that need to be fixed, or just pay themselves to fix things manually.

This way, hours, kWh, and dollars were wasted on something that will be of no use to anyone.


With all due respect, try to read things before opining on them.

The PR explains why I did the work.


I challenge you to actually demonstrate that the code was taken instead of generated or derived. Otherwise, you are just shooting your mouth off.

