Hacker News

I'm always shocked at how reluctant sites are to actually ship less code, and I think some of this comes down to a needed shift in thinking about what an application is, what modules are, and how to use imports.

One thing I've heard recently is "My app is big, so it has a lot of code, that's not going to change so make the parser faster or let me precompile".

The problem with this is thinking of an app as a monolith. An app is really a collection of features, of different sizes, with different dependencies, activated at different times. Usually features are activated via URLs or user input. Don't load them until they're needed, and then you no longer worry about the size of your app, only the size of each feature.

This thinking might stem directly from misuse of imports. It seems like many devs think an import means something along the lines of "I'll need to use this code at some point and need a reference to it". But what an import really means is "I need this other module for the importing module to even _initialize_". You shouldn't statically import a module unless you need it _now_. Otherwise, dynamically import a module that defines a feature, when that feature is needed. Each feature/screen should only statically import what it needs to initialize the critical parts of the feature, and everything else should be dynamic.

With ES modules and dynamic import() this is quite easy. Reduce the number of these:

    import * as foo from '../foo.js';
and use these as much as possible:

    const foo = await import('../foo.js');
Then use a bundler that's dynamic import aware and doesn't bundle them unnecessarily. Boom, less JS on startup.
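To make the feature-activation idea concrete, here's a minimal sketch of a lazy feature registry keyed by route. All names are illustrative, and a Node builtin (`node:path`) stands in for a real feature module like `./features/settings.js`:

```javascript
// Map each route to a loader that dynamically imports its feature module.
const featureLoaders = {
  // In a real app this would be () => import('./features/settings.js').
  '/settings': () => import('node:path'),
};

// Cache loaded modules so a feature is only fetched and parsed once.
const loadedFeatures = new Map();

async function activateFeature(route) {
  if (!loadedFeatures.has(route)) {
    const loader = featureLoaders[route];
    if (!loader) throw new Error(`no feature registered for ${route}`);
    loadedFeatures.set(route, await loader());
  }
  return loadedFeatures.get(route);
}

// Usage (inside async code): the module is only loaded on first visit.
// const settings = await activateFeature('/settings');
```

Nothing outside the registry needs a static import of the feature, so a dynamic-import-aware bundler can split it into its own chunk.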


You still need good solutions for bundling assets because network connections aren't free.

The approach you advocate can have adverse side effects on web performance. It would help with initial load time due to the reduced initial JS payload, but if you end up loading dozens of additional JS modules asynchronously, you're adding a lot of extra HTTP requests. Over HTTP/1.1 that's a big problem, and even over HTTP/2 each additional asset sent over a multiplexed connection has overhead (~1 ms or more).


Where's this ~1 ms of overhead per additional asset on an HTTP/2 connection coming from? Do you have a reference to a benchmark or something that demonstrates it?


IDK what calvin had in mind, but the client pull model you suggest can require a lot of round trips. Request a, parse a, execute a, request b, parse b, execute b, request c, parse c, execute c.

Of course, you could always do server push, but hey...that's pretty close to what a single bundled file is :)
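One way to soften the waterfall without server push: when a view's dependencies are known up front, start all the dynamic imports in parallel rather than request-parse-execute serially. A sketch, with Node builtins standing in for hypothetical app modules like `./a.js`:

```javascript
// Kick off requests for a, b, and c at once; the network round trips
// overlap instead of queuing behind each other.
async function loadView() {
  const [a, b, c] = await Promise.all([
    import('node:path'), // stand-in for './a.js'
    import('node:url'),  // stand-in for './b.js'
    import('node:util'), // stand-in for './c.js'
  ]);
  return { a, b, c };
}
```

This only helps when the dependency list is known before the first module executes; a true a-discovers-b-discovers-c chain still pays the round trips.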


I don't have a specific link/reference I can provide you, but it came out during Q&A at one of the HTTP/2 presentations at Velocity Conference 2016.


I did say use a bundler :)



