Hacker News | AWebOfBrown's comments


It doesn't, but from my perspective the thinking behind zero trust is partly to stop treating the network as a layer of security. Which makes sense to me - the larger the network grows, the harder it becomes to know all its entry points and their transitive reach.


Another interpretation of this is that the lead developer adequately mitigated the risk of errors while also managing the risk of not shipping fast enough. It's very easy to criticise when you're not the one answering for both, especially the latter.


I really wanted to adopt tRPC but the deal breaker was it being opinionated on status codes without allowing configurability. Because I needed to meet an existing API spec, that meant ts-rest was a better option. I think there's an additional option with a native spec generator in frameworks like Hono, and maybe Elysia.


I think that's demonstrably false.

His point about the runtime complexity of an API being entirely distinct from how the interface to its code is exposed (whether GraphQL or REST or otherwise) is fairly obvious, I think.

The counter-argument is that unlimited query complexity makes it a far bigger problem, and the author's point is that if you're using it for private APIs with persisted queries, you shouldn't have that problem unknowingly.

Don't get me wrong - I think the takeaway is that GraphQL's niche is quite small, and he's defending exactly that niche. It's not often the case that you can develop an API in a private manner which doesn't undercut higher-order value in the future, as the rise of AWS hopefully made evident.


Everything I needed to know about Russell's performance war was answered when, whilst he was working at Google, folk started asking him why he was naming and shaming companies for poor performance when his exact critiques were swiftly applied to Google's apps (calendar, maps, gmail). I wish I could find the twitter thread from back then, but the gist of his response was that what Google was doing was incredibly complicated, far more than anything the targets of his ire were working on, and as such it was reasonable not to have fixed those issues.

He wasn't wrong in his assessment of complexity, but the fact he refused to acknowledge that the business priorities were the same between Google and the companies he called out absolutely baffled me. The gist from my perspective was that companies external to his own should bend over backwards for performance, while his should not, because his personal goals were tied to improving the performance of the web. Hopefully that's an over-simplification and I've missed something, but that's what I can recall.


“it is difficult to get a man to understand something, when his salary depends on his not understanding it.” - Upton Sinclair.


Ah yes, the classic "our problems are the hardest" perspective.

These are usually the narratives we tell ourselves to let ourselves off the hook.


Don't forget that he's also shaming everyone for complexity while his own work brings untold complexity to the web platform through dozens of JavaScript-only standards around Web Components.


> Folks seems to lack a kind of basic economic perspective.

I agree with most of what you said, but this seems a bit ironic.

Your suggestions almost exclusively involve large investments of time with little established proof that they are efficient. Do you genuinely believe reading RFC 2616 "cover to cover" is an efficient way of solving the specific problems they came across?

I would wager most developers wishing to be "really good" actually have a concrete desire like a greater paycheque or employer in mind. If that is true, I doubt reading someone's book list is necessarily their fastest pathway, and their economic perspective is exactly what stops them doing so.


A person who spends a considerable portion of their time really learning new things will initially be slower at their work than people who only do work.

But over time this person will be getting better and better, and at some point they will be able to do their job in a fraction of their time while still keeping momentum and learning more.

This has been my experience.

I have been spending 3/4 of my working hours learning new things for about the past 17 years, ever since I realised this. The actual work takes a very tiny part of my day.

Software development really is a knowledge sport. Writing code is actually a pretty small part of the task -- knowing what to write is the important part. If you are able to "write down the answer", so to speak, without spending much time getting there, you can easily be 10 or more times more productive than everybody else.


Um. In what I wrote I said a week or a month. That is not a large amount of time.

I think you missed the spirit of my writeup.


Agree that try/catch is verbose and not terribly ergonomic, but my solution has been to treat errors as values rather than exceptions, by default. It's much less painful to achieve this if you use a library with an implementation of a Result type, which I admit is a bit of a nuisance workaround, but worth it. I've recently been using: https://github.com/swan-io/boxed.

By far the greatest benefit is being able to sanely implement a type-safe API. To me, it is utter madness throwing custom extensions of the Error class arbitrarily deep in the call-stack, and then having a catch handler somewhere up the top hoping that each error case is matched and correctly translated to the intended http response (at least this seems to be a common alternative).


In the latter example, the question is really one of how tightly you wish to couple the application layer to that of the infrastructure (controller). Should the application logic be coupled to a http REST API (and thus map application errors to status codes etc), or does that belong in the controller?

I don't disagree that it's more practical, initially, as you've described it. However, I think it's important to point out the tradeoff rather than presenting it as purely more efficient. I've seen this approach result in poor separation of concerns and bloated use cases (`DoTheActualThing`) which become tedious to refactor, albeit in other languages.

One predictable side effect of the above, if you're working with junior engineers, is that they are likely going to write tests for the application logic with a dependency on the request / response as inputs, and asserting on status codes etc. I shudder to think how many lines of code I've read dedicated to mocking req/res that were never needed in the first place.
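To sketch the alternative (every name here is hypothetical), the use case takes plain inputs and returns a plain outcome, so its tests never need to mock req/res at all; the controller is the only place that knows about status codes:

```typescript
// Application layer: no HTTP types anywhere, so tests use plain values.
type RenameResult =
  | { kind: "renamed"; name: string }
  | { kind: "not_found" };

function renameUser(users: Map<string, string>, id: string, name: string): RenameResult {
  if (!users.has(id)) return { kind: "not_found" };
  users.set(id, name);
  return { kind: "renamed", name };
}

// Thin controller: the only layer that maps outcomes to status codes.
function renameUserController(users: Map<string, string>, params: { id: string; name: string }) {
  const outcome = renameUser(users, params.id, params.name);
  return outcome.kind === "renamed"
    ? { status: 200, body: outcome }
    : { status: 404, body: { message: "User not found" } };
}
```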


I don't think it's the worst thing in the world if you test your http.Handler implementation:

    w := httptest.NewRecorder()
    req := httptest.NewRequest("GET", "/foo", nil)
    ServeHTTP(w, req)
    if got, want := w.Code, http.StatusOK; got != want {
        t.Errorf("get /foo: status:\n  got: %v\n want: %v", got, want)
    }
    if got, want := w.Body.String(), "it worked"; got != want {
        t.Errorf("get /foo: body:\n  got: %v\n want: %v", got, want)
    }
It leaves very little to the imagination as to whether or not ServeHTTP works, which is nice.

Complexity comes from generating requests and parsing the responses, and that is what leads to the desire to factor things out -- test the functions with their native data types instead of http.Request and http.Response. I think most people choose to factor things out to make that possible, but in the simplest of simple cases, many people just use httptest. It gets the job done.


I don't think it's poor to test http handling either, as a coarse grained integration test.

The problem I've seen is over-dependence on writing unit tests with mocks instead of biting the bullet and properly testing all the boundaries. I have seen folk end up with 1000+ tests, of which most are useless because the mocks make far too many assumptions, but are necessary because of the layer coupling.

This was mostly in Node though, where mocking the request/response gets done inconsistently, per framework. Go might have better tooling in that regard, and maybe that sways the equation a bit. IMO there's still merit to decoupling if there's any feasibility of e.g. migrating to GraphQL or another protocol without having to undergo an entire re-write.


> I don't think it's poor to test http handling either, as a coarse grained integration test.

Sorry to spring a mostly-unrelated question on you about this, but why do you call this an integration test? I recently interviewed three candidates in a row that described their tests in this way, and I thought it was odd, and now I see many people in this thread doing it also.

I would call this a functional or behavioral test. For me a key aspect of an integration test is that there's something "real" on at least two "sides" - otherwise what is it testing integration with? Is this some side-effect of a generation growing up with Spring's integration testing framework being used for all black-box testing?

(I will not comment about how often I see people referring to all test doubles as "mocks", as I have largely given up trying to bring clarity here...)


The reality is that I've heard unit, integration and e2e used almost entirely interchangeably, except perhaps unit and e2e for each other. I don't think trying to nail down the terms to something concrete is necessarily a useful exercise. Attempts to do so, imo, make subjective sense in terms of the individual's stack/deployment scenario.

To me, it's a contextual term much like 'single responsibility'. In this case, the two "sides" of an integration test are present. A consumer issues a request and a provider responds accordingly. The tests would ascertain that with variations to the client request, the provider behaves in the expected manner.

At which point you might point out that this sounds like an e2e test, but actually using the client web app, for example, might involve far more than a simple http client/library - in no small part because the provider can easily run a simple consumer in memory and avoid the network entirely. E2e tests tend to be far more fragile, so from the perspective of achieving practical continuous deployment, it's a useful distinction.

integration tests in this instance: varying HTTP requests (infrastructure layer) provoke correct behaviour in application layer.

e2e: intended client issues http requests under the correct conditions, which provokes certain provider behaviour, which client then actually utilises correctly.

This, to me, is why the most important part of testing is understanding the boundaries of the tests. Not worrying about their names.
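As a rough sketch of the distinction (the handler and route are invented for illustration), the integration test drives the provider directly with a bare HTTP client over a loopback ephemeral port -- no browser and no real client app involved, which is what keeps it less fragile than an e2e test:

```typescript
import http from "node:http";
import type { AddressInfo } from "node:net";

// Hypothetical provider: a plain request handler at the infrastructure edge.
const handler: http.RequestListener = (req, res) => {
  if (req.method === "GET" && req.url === "/foo") {
    res.writeHead(200, { "content-type": "text/plain" });
    res.end("it worked");
  } else {
    res.writeHead(404);
    res.end();
  }
};

// Integration test driver: a minimal consumer issues a request and we
// observe the provider's behaviour. The "two sides" are the HTTP request
// (infrastructure layer) and the application logic behind the handler.
async function run(): Promise<{ status: number; body: string }> {
  const server = http.createServer(handler);
  await new Promise<void>((resolve) => server.listen(0, resolve));
  const { port } = server.address() as AddressInfo;
  const res = await fetch(`http://127.0.0.1:${port}/foo`);
  const body = await res.text();
  server.close();
  return { status: res.status, body };
}
```

An e2e test of the same behaviour would instead launch the actual client application and assert that it consumes the response correctly, which is a much bigger and flakier boundary.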


This is interesting to me as someone who loves DDD but finds the tactical side hard to implement in Node.

A few questions stand out:

AFAICT implementing tactical DDD involves a lot of boilerplate code. That seems to be consistent with what you've written in your article on writing the book, mentioning "...we even built a Node.js framework as byproduct." If tackling DDD in Node requires so much work on something that isn't the business's core domain, and there's a lot of surface area for it to go wrong, why do it in Node?

As someone who writes predominantly TypeScript, static types feel like a much lower hanging fruit for alleviating some of the pain in tackling rich domains. As you've written the book in JavaScript, I'm curious whether you used plain JavaScript on the professional projects?

Also curious to hear from someone who has done tactical DDD in Java or C#: do you find a good supporting framework essential?


Could you explain which aspects specifically require a lot of boilerplate code? Judging from my experience, I think the tactical DDD patterns are straightforward to implement. I didn't mention this in my post, but the Node.js framework was only concerned with CQRS & Event Sourcing.

I fully agree with you that (static) types are indispensable for tackling rich/complex domains adequately. Personally, I would often recommend the use of TypeScript over plain JavaScript.

As for the book, when I started working on it, I was using plain JavaScript. Over the course of the past four years, there were times when I considered rewriting the DDD parts in TypeScript. My current plan is to add an appendix or integrate additional content about (static) typing.

While types are an important aspect for the Domain layer, I think the rest of the book works fine without TypeScript. Its absence might even help to keep the code examples more concise.

> I'm curious whether you used plain JavaScript on the professional projects?

Yes, in both relevant projects we were using plain JavaScript (or CoffeeScript). We had many runtime type checks and a high test coverage to overcome the lack of static types. TypeScript would have definitely been the better choice.


> Could you explain which aspects specifically require a lot of boilerplate code?

Probably the most frustrating part for me is tackling domain events and the implementation of the aggregate root. How have you tackled broadcasting domain events when an aggregate changes in a way that might be of interest to other bounded contexts?


Without Event Sourcing, I would say there is almost no boilerplate code for aggregates themselves or their root Entities. After all, an Entity can be implemented as a plain old class or object. For event-sourced aggregates, it depends on the implementation style. With OOP, there might be the typical AggregateRoot base class. With a more functional style, there can be even less boilerplate code.

About the Domain Event publishing, I'm afraid I don't understand the question. What challenges did you face? Is it about event-sourced aggregates?


If you have an aggregate root, it is responsible for knowing when changes to the aggregate have occurred, and thus making those visible.

What is the mechanism for it to do so? Does the aggregate root retain the events and decide when to dispatch them (likely after successful persistence to a datastore)?

Once ready for dispatch, what do you use to handle the events? Some kind of DomainEvents class ala https://udidahan.com/2009/06/14/domain-events-salvation/ ?

This is the boilerplate I'm referring to.


The Aggregate Root must track all Domain Events that occur upon executing an action and make them accessible somehow. Only after the successful persistence of the associated state change (and/or the events) can they be published via an Event Bus. The Aggregate itself only expresses that Domain Events occurred, but does not deal with publishing.
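In code, that tracking part is small (illustrative sketch, all names invented):

```typescript
// The aggregate records events as its actions execute and exposes them
// for collection; it never publishes them itself.
type DomainEvent = { type: string; payload: unknown };

class Order {
  private pendingEvents: DomainEvent[] = [];
  private shipped = false;

  ship(): void {
    if (this.shipped) throw new Error("already shipped");
    this.shipped = true;
    this.pendingEvents.push({ type: "OrderShipped", payload: {} });
  }

  // Called by the persistence layer after a successful save; clears the
  // buffer so the same events are never handed over twice.
  pullDomainEvents(): DomainEvent[] {
    const events = this.pendingEvents;
    this.pendingEvents = [];
    return events;
  }
}
```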

The article from Udi Dahan shows Domain Event handlers that "will be run on the same thread within the same transaction". This implementation is not suitable for handlers that affect other transactions. Udi explains that "you should avoid performing any blocking activities, like using SMTP or web services. Instead, prefer using one-way messaging to communicate to something else which does those blocking activities." What he is referring to as "one-way messaging" is the actual event publishing in my opinion and must guarantee event delivery.

In my book, the example implementations store all occurred Domain Events together with the Aggregate state. This is because it resembles the later use of Event Sourcing. There is a separate component that watches for changes in Aggregate data and publishes new Domain Events. After publishing, the events are marked accordingly.

Regardless of where newly occurred Domain Events are retained, it should be persistent. Many times, the events are stored in a separate table inside the same store as the affected Aggregate. Later, they are retrieved, published and either marked or deleted. This is called a Transactional Outbox: https://microservices.io/patterns/data/transactional-outbox.... The approach ensures that both the Aggregate change and the request to publish an event happen within the same transaction. The actual publishing happens in a separate one. This way, you get guaranteed event delivery or more specifically "at least once" delivery.
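A rough in-memory sketch of the outbox idea (a real implementation would of course use actual database transactions rather than this simulated one):

```typescript
// Minimal in-memory sketch of a Transactional Outbox.
type OutboxRow = { id: number; event: string; published: boolean };

class Store {
  aggregates = new Map<string, unknown>();
  outbox: OutboxRow[] = [];
  private nextId = 1;

  // "Transaction": the state change and the outbox rows are written
  // together, so an event is only ever pending if the change committed.
  saveWithEvents(id: string, state: unknown, events: string[]): void {
    this.aggregates.set(id, state);
    for (const event of events) {
      this.outbox.push({ id: this.nextId++, event, published: false });
    }
  }
}

// Separate relay process: reads pending rows, publishes, then marks them.
// If publish throws, the row stays pending and is retried on the next
// pass, which is what yields "at least once" delivery.
function relay(store: Store, publish: (event: string) => void): number {
  let published = 0;
  for (const row of store.outbox) {
    if (!row.published) {
      publish(row.event);
      row.published = true;
      published++;
    }
  }
  return published;
}
```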

