Hacker News | pyrolistical's comments

This article glosses over the hardest bit and bikesheds too much over keys.

> Critically, these two things must happen atomically, typically by wrapping them in a database transaction. Either the message gets processed and its idempotency key gets persisted. Or, the transaction gets rolled back and no changes are applied at all.

How do you do that when the processing isn’t persisted to the same database? I.e., what if the side effect is outside the transaction?

You can’t atomically rollback the transaction and external side effects.

If you could use a distributed database transaction already, then you wouldn't need idempotency keys at all. The transaction itself is the guarantee.


The external side-effects also need to support idempotency keys, which you propagate. Then you use something like a message queue to drive the process to completion.

Exactly that.

I get what you are saying, but I don't think it's fair to call it bikeshedding; getting the keys right is also important, and one can easily screw up that part too.

I'm not sure if TFA implies this (it uses too much of his personal jargon for me to understand everything, and it's Friday), but consider this solution based on his transaction log section: use the same database that persists the idempotency key to persist the message, then consume the messages CDC/outbox-style. Meaning, the database simply acts as an intermediate machine that dedupes the flow of messages. Assuming you're allowed to make the producer wait.

The practical answer is you use a combination of queries and compensating actions to resemble idempotency with the external service. Some people additionally constrain things to be a linear sequence of actions/effects, and call this pattern Sagas. It's sort of a bastardized distributed transaction that lets you handle a lot of real world use cases without getting into the complexity of true distributed transactions.

If you need "transactions" with microservices, the traditional answer is sagas, e.g. multiple transaction boundaries that don't fully commit until their entire set of things is complete, passing a message or event to the next system, and having the ability to roll back each step in a positive manner, either by appending new correct state again or by "not ever adding" the original state.
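A minimal sketch of that shape (all names here are invented; a real saga engine also persists its progress durably so it can resume compensation after a crash):

```python
def run_saga(steps):
    """Run (action, compensation) pairs in order; on failure, compensate
    completed steps in reverse, 'rolling back' by appending corrective state."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
        return True
    except Exception:
        for compensate in reversed(done):
            compensate()
        return False

def fail(msg):
    raise RuntimeError(msg)

log = []
ok = run_saga([
    (lambda: log.append("reserve inventory"), lambda: log.append("release inventory")),
    (lambda: log.append("charge card"),       lambda: log.append("refund card")),
    (lambda: fail("shipping unavailable"),    lambda: log.append("cancel shipment")),
])
print(ok)   # False
print(log)  # ['reserve inventory', 'charge card', 'refund card', 'release inventory']
```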

The problem with sagas is that they only guarantee eventual consistency, which is not always acceptable.

There is also two-phase commit, which is not without downsides either.
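For contrast, a toy sketch of the two-phase-commit shape (illustrative only; the real downsides live in everything this omits: durable vote logs, timeouts, and a coordinator that can crash between the two phases, leaving participants blocked in "prepared"):

```python
class Participant:
    """A resource that can vote on, then apply or abort, a transaction."""
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "init"
    def prepare(self):  # phase 1: vote yes/no and (notionally) persist the vote
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit
    def commit(self):   # phase 2: apply
        self.state = "committed"
    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants):
    """Commit only if every participant votes yes; otherwise abort all."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.abort()
    return False

db, queue = Participant("db"), Participant("queue", can_commit=False)
print(two_phase_commit([db, queue]), db.state, queue.state)  # False aborted aborted
```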

All in all, I think the author makes the wrong point that exactly-once processing is somehow easier to solve than exactly-once delivery, while in fact it's exactly the same problem, just shaped differently. IDs here are secondary.


I'd agree with that - two phase commit has a bunch of nasty failure cases as well, so there's no free lunch no matter what you do when you go distributed. So just ... don't, unless you really really have to.

Tell the manager you are assuming their authority.

You will force the answer out of them either way


This is really how it’s done. Don’t go over their head; go under their boss’s.

I'm not sure you understand just how dedicated they were to this whole "turn it back on the employee" method they were apparently using. I just skimmed the summary of the book after reading this comment. It's a toxic method of avoiding accountability masquerading as "management".

You can't do whatever you describe if it needs sign-off from a manager who simply ghosts everything you try to send their way. You can assume responsibility, but I'm not committing fraud for a company that does not care.

Also, what you said means that you are now officially responsible for the mistakes, which I'm pretty sure is what this whole "method" is about.


If they are not signing things required for your job, then you need to go over their head.

Ditto. It is hard to find non-WiFi motherboards with TOSLINK.

All the cheap boards have neither. Most of the high-end boards have both.


I updated my computer this year, and didn't find anything without wifi either. So it seems it's a small tax you just have to pay now.

I've at least found that the wifi+Bluetooth chips seem to be significantly more robust than the standalone bt ones.

I don't foresee any Bluetooth need either for my desktop setup. But yeah I do see that many buyers would want that for headphones if nothing else, so it makes sense to include the chip.

Compressed JSON is good enough and requires less human communication initially.

Sure, it will blow up in your face when a field goes missing or a value changes type.

People who advocate paying the higher cost ahead of time to perfectly type the entire data structure AND propose a process to perform version updates to sync client/server are going to lose most of the time.

The zero cost of starting with JSON is too compelling even if it has a higher total cost due to production bugs later on.

When judging which alternative will succeed, lower perceived human cost beats lower machine cost every time.

This is why JSON is never going away, until it gets replaced with something with even lower human communication cost.
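A tiny illustration of the blow-up mentioned above (field names are made up; Python stands in for any JSON consumer):

```python
import json

# Day 1: the server sends a number, the client does arithmetic. Works fine.
v1 = json.loads('{"price": 10}')
total = v1["price"] * 3            # 30

# Much later: someone changes the field to a string. No schema catches it,
# and the result is silently wrong rather than an error.
v2 = json.loads('{"price": "10"}')
total2 = v2["price"] * 3           # "101010"

# Later still: the field is renamed, and now the value is just missing.
v3 = json.loads('{"unit_price": 10}')
price = v3.get("price")            # None
print(total, total2, price)
```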


> When judging which alternative will succeed, lower perceived human cost beats lower machine cost every time.

Yup, this is it. No architect considers using protos unless there is an explicit need for it. And the explicit need is, most of the time, gRPC.

Unless the alternative allows for zero cost startup and debugging by just doing `console.log()`, they won't replace JSON any time soon.

Edit: Just for context, I'm not the author. I found the article interesting and wanted to share.


Print debugging is fine and all, but I find that it pays massive dividends to learn how to use a debugger and actually inspect the values in scope rather than guessing which are worth printing. Print debugging is also useless when you need to debug a currently running system and can't change the code.

And since you need to translate it anyway, there's not much benefit in my mind to using something like msgpack, which is more compact and self-describing; you just need a decoder to convert to JSON when you display it.


> rather than guessing

I'm not guessing. I'm using my knowledge of the program and the error together to decide what to print. I never find the process laborious and I almost always get the right set of variables in the first debug run.

The only time I use a debugger is when working on someone else's code.


That's just an educated guess. You can also do it with a debugger.

The debugger is fine, but it's not the key to unlock some secret skill level that you make it out to be. https://lemire.me/blog/2016/06/21/i-do-not-use-a-debugger/

I didn't say it's some arcane skill, just that it's a useful one. I would also agree that _reading the code_ to find a bug is the most useful debugging tool. Debuggers are second. Print debugging third.

And that lines up with some of the appeals to authority there that are good, and that are bad (edited to be less toxic)


Even though I'm using the second person, I actually don't care at all to convince you particularly. You sound pretty set in your ways and that's perfectly fine. But there are other readers on HN who are already pretty efficient at log debugging or are developing the required analytical skills and I wanted to debunk the unsubstantiated and possibly misleading claims in your comments of some superiority in using a debugger for those people.

The logger vs. debugger debate is decades old, with no argument suggesting that the latter is a clear winner; quite the contrary. An earlier comment explained the log-debugging process: carefully thinking about the code and well-chosen spots to log the data structure under analysis. The link I posted confirms it as a valid methodology. Overall, code analysis is the general debugging skill you want to sharpen. If you have it and decide to work with a debugger, it will look like log debugging, which is why many skilled programmers may choose to revert to just logging after a while. Usage of a debugger then tends to be focused on situations where the code itself is escaping you (e.g. bad code, intricate code, foreign code, etc.).

If you're working on your own software and feel that you often need a debugger, maybe your analytical skills are atrophying and you should work on thinking more carefully about the code.


Debuggers are great when you can use them. Where I work (financial/insurance) we are not allowed to debug on production servers. I would guess that's true in a lot of high security environments.

So the skill of knowing how to "println" debug is still very useful.


Also: debugging

You (a human) can just open a JSON request or response and read what's in it.

With protobuf you need to build or use tooling that can see what's going on.


It is only "human readable" because our tooling is so bad and the lowest-common-denominator tooling we have can dump out a sequence of bytes as ASCII/UTF-8 text somewhat reliably.

One can imagine a world where the lowest-common-denominator format is a richer structured binary format, every system has tooling to work with it out of the box, and that would be considered human readable.


> People who advocate paying the higher cost ahead of time to perfectly type the entire data structure AND propose a process to do perform version updates to sync client/server are going to lose most of the time.

That's true. But people would also rather argue about security vulnerabilities than get it right from the get-go. Why spend an extra 15 minutes of effort during design when you can spend 3 months revisiting the ensuing problem later?


Alternatively: why spend an extra 15 mins on protobuf every other day, when you can put off the 3-month JSON-revisiting project forever?

I use ConnectRPC (proto). I definitely do not spend any extra time. In fact, the generated types for my backend and frontend save me time.

It won't go away in the same way COBOL won't. That does not mean we should be using it everywhere for greenfield projects.

I've gone the all-JSON route many times, and pretty soon it starts getting annoying enough that I lament not using protos. I'm actually against static types in languages, but the API is one place they really matter (the other is the DB). Google made some unforced mistakes on proto usability/popularity though.

why are you against static types in languages?

I once converted a fairly large JS codebase to TS and found about 200 mismatched names/properties all over the place. Tons of properties where we had nulls suddenly started getting values.


Sounds like this introduced behavior changes. How did you evaluate if the new behavior was desirable or not? I’ve definitely run into cases where the missing fields were load bearing in ways the types would not suggest, so I never take it for granted that type error in prod code = bug

The most terrifying systems to maintain are the ones that work accidentally. If what you describe is actually desired behavior, I hope you have good tests! For my part, I’ll take types that prevent load-bearing absences from arising in the first place, because that sounds like a nightmare.

Although, an esoteric language defined in terms of negative space might be interesting. A completely empty source file implements “hello world” because you didn’t write a main function. All integers are incremented for every statement that doesn’t include them. Your only variables are the ones you don’t declare. That kind of thing.


it was desirable because our reason for the conversion was subtle bugs all over the place where data was disappearing.

Makes sense. That sounds like a good reason to do it. Unfortunately, I've also seen people try to add TypeScript or various linters without adequate respect for the danger of changing code that seems to be working but looks like a bug, especially when verifying requires manual testing.

It costs time, distracts some devs, and adds complexity for negligible safety improvement. Especially if/when the types end up being used everywhere because managers like that metric. I get using types if you have no tests, but you really need tests either way. I've done the opposite migration before, TS to JS.

Oh, I forgot to qualify that I'm only talking about high-level code, not the things you'd use C or Rust for. Part of the reason those languages have static types is that they need to know sizes on the stack at compile time.


I love scala case classes and pattern matching. Too bad the compiler sucked (too slow) and it had some rather large footguns like implicits

For something this short that is pure math, why not just hand-write asm for the most popular platforms? That prevents the compiler from deoptimizing it in the future.

Have a fallback with this algorithm for all other platforms.


This pretty much is assembly written as C++... there's not much the compiler can ruin.

Because that isn’t portable?

Imagine being born and told your life has been determined by some other humans living a comfortable life with unlimited air and water.

You are told you are about to make the greatest achievement humankind has ever made, but all you want is a little bit more food and to take a shower.


Sometimes people are born for greatness.

Isn't that already all of us?

Wow, you are really enjoying life. Hope it gets better.

It’s 10x slower than vanilla, which makes this an ideal use case for transpilation.

I bet you could just take one afternoon to write a vite plugin


What is the use case of this library given that vanilla JS is 10x faster?

Maybe it's ease of development, and resulting readability?

I did enjoy the example code, compared to the native javascript (both shown in the article):

  var draw = SVG().addTo('#drawing')
    , rect = draw.rect(100, 100).fill('#f06')

why would the native JavaScript not be something like (probably errors here, so like, not necessarily this precisely)

  const div = document.getElementById('drawing');
  div.innerHTML = `<svg width="100%" height="100%"><rect width="100" height="100" fill="#f06"/></svg>`;

obviously if what is going in can include user input in some way then it's open to attack via innerHTML, but otherwise it seems like the structure of the example native JavaScript is made in such a way as to make the SVG.js version seem super cool in comparison.


> Obviously not as fast as vanilla js

I had a similar question- why is it obviously not as fast as vanilla js?


Because it’s written in vanilla JS.

Perhaps non-browser usage?

Exception is hidden control flow, where as error values are not.

That is the main reason why zig doesn’t have exceptions.


Correction: unchecked exceptions are hidden control flow. Checked exceptions are quite visible, and I think that more languages should use them as a result.


I'd categorize them more as "event handlers" than "hidden". You can't know where the execution will go at a lower level, but that's the entire point: you don't care. You put the handlers at the points where you care.


It’s just both using the C ABI, right?


Yeah, this isn't quite C++ interop on its own. It's C++ interop via C, which is an incredibly pertinent qualifier. Since we go through C, opaque pointers are needed for everything, we can't stack allocate C++ values, we need to write extern C wrappers for everything we want to do (like calling member fns), and we don't get any compile-time type/safety checking, due to the opaque pointers.

Direct C++ interop is doable, by embedding Clang into Zig and using its AST, but this is significantly more work and it needs to be done in the Zig compiler. As a Zig user, going through C is about as good as you can do, probably.


It's a bit more than your typical "interop via C". With a "sized opaque" type you actually can stack allocate C++ values in Zig (and vice versa stack allocate Zig values in C++), i.e.

    fn stackExample() void {
        var some_cpp_type: c.SomeCppType = undefined;
        c.some_cpp_type_ctor(&some_cpp_type);
        defer c.some_cpp_type_dtor(&some_cpp_type);
        // ...
    }


Seems like it. And the sizes are all hard-coded, which means you are probably wedded very tightly to a particular C++ compiler.

