Sankozi's comments | Hacker News

Emitting correct XHTML was not that hard. The biggest problem was that browsers supported plugins that could corrupt the whole page: a plugin injecting even one piece of malformed markup made the strict XML parser reject the entire document. If you created an XHTML web page, you had to handle bug reports caused by poorly written plugins.
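
For context, "correct XHTML" mostly meant a well-formed XML document served with the right MIME type. A minimal sketch:

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
  <html xmlns="http://www.w3.org/1999/xhtml">
    <head><title>Example</title></head>
    <body><p>Every element closed, every attribute quoted.</p></body>
  </html>

Served as application/xhtml+xml, the page goes through the browser's strict XML parser, which is exactly why injected garbage from a misbehaving plugin could take down the whole page rather than just one element.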

I have the opposite experience. Consistency is commonly enforced in bigger corporations, while its value is not that high (often negative). Lots of strategies/patterns get promoted and blindly followed without even a brief reflection on whether they might be a bad fit for the problem at hand: TDD, onion/hexagonal architecture, SPAs, React, etc.


"In large codebases, consistency is more important than “good design”" - this is completely opposite from my experience. There is some value in consistency within single module but consistency in a large codebase is a big mistake (unless in extremely rare case that code base consists entirely of very similar modules).

Modules with different requirements should not be forced into a single consistent codebase. Testing strategy, application architecture, even naming should differ across modules.


People using it wrong. It definitely should not be this popular. 95% of the time when I see it, I consider it tech debt.

You should use it only in the rare cases where you need to verify very complex code with strict interaction requirements (e.g., the call order is specified, or certain calls must not be made during execution of the method). A sketch of that legitimate case follows.
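
A minimal sketch, using plain Node's test runner and hand-rolled recording fakes (the archive function and its fs collaborator are hypothetical):

  import { test } from "node:test";
  import assert from "node:assert/strict";

  // Hypothetical unit under test: must open before writing, must never delete.
  function archive(fs: { open(): void; write(): void; delete(): void }) {
    fs.open();
    fs.write();
  }

  test("archive opens before writing and never deletes", () => {
    const calls: string[] = [];
    archive({
      open: () => { calls.push("open"); },
      write: () => { calls.push("write"); },
      delete: () => { calls.push("delete"); },
    });
    // Verifies the strict call order and the absence of a forbidden call.
    assert.deepEqual(calls, ["open", "write"]);
  });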

Usually it is used for pointless unit tests of simple intermediary layers, checking that a call is delegated correctly to the deeper layer. Those tests usually have negative value: they verify very little but make any modification much harder.
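
The anti-pattern looks roughly like this (hypothetical UserService/repository names) - the test merely restates the one-line implementation, so any refactor breaks it without ever catching a real bug:

  import { test } from "node:test";
  import assert from "node:assert/strict";

  // A thin intermediary layer that only forwards the call.
  class UserService {
    constructor(private repo: { findById(id: string): string }) {}
    getUser(id: string) { return this.repo.findById(id); }
  }

  test("getUser delegates to the repository", () => {
    let receivedId = "";
    const service = new UserService({
      findById: (id) => { receivedId = id; return "user"; },
    });
    assert.equal(service.getUser("42"), "user");
    assert.equal(receivedId, "42"); // mirrors the implementation line for line
  });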


In the same post you are arguing both for and against the "slippery slope".

Either it is possible to easily change the law for the worse ("slippery slope" is a valid objection), or changing the law is "much harder than doing nothing" ("slippery slope" is a fallacy).


It is possible to set version ranges, but you rarely see it in the real world. Everyone uses pinned dependencies.

Version ranges are a really bad idea, as we can see with NPM.
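
For reference, the difference in an npm-style manifest (illustrative package names):

  {
    "dependencies": {
      "lodash": "^4.17.21",
      "left-pad": "1.3.0"
    }
  }

The caret range accepts any 4.x version at or above 4.17.21, so two installs at different times can resolve to different trees unless a lockfile pins the result; the exact version always resolves to the same package.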


This is not a good idea: it leads to longer build times and/or invalid builds (you build against different dependencies than those declared in the config).

Having a dependency cache and a build tool that knows where to look for it is a much superior solution.
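
A rough sketch of what "knows where to look" means, loosely modeled on how Maven/Gradle-style tools resolve from a shared cache (the layout and function names are hypothetical):

  import os from "node:os";
  import path from "node:path";

  // Map a declared dependency straight to its location in a shared,
  // versioned cache - there is no project-local copy to keep in sync.
  function resolveDep(name: string, version: string): string {
    return path.join(os.homedir(), ".cache", "deps", name, version);
  }

  // The build consumes exactly what the manifest declares:
  const buildInputs = [resolveDep("left-pad", "1.3.0")];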


(p)npm manages both fine with the dependency directory structure.


This is literally not possible.

If you have a local dependency repo and a dependency manifest, then during the build you can either:

1. Check that the local repo is in sync - correct build, but it takes more time

2. Skip the check - risky build, but fast

If the dependencies live only in the cache directory, you can have both - correct and fast builds.
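
The sync check from option 1 boils down to comparing the manifest/lockfile against a record of what was last materialized; a sketch with hypothetical file names:

  import { createHash } from "node:crypto";
  import { existsSync, readFileSync } from "node:fs";

  // Option 1: hash the lockfile and compare it with the stamp written by
  // the last install; any mismatch forces a re-sync before the build.
  function depsInSync(): boolean {
    const lockHash = createHash("sha256")
      .update(readFileSync("deps.lock"))
      .digest("hex");
    return existsSync(".deps-stamp") &&
      readFileSync(".deps-stamp", "utf8") === lockHash;
  }

With a cache-only layout there is nothing to stamp or compare: the paths the build uses are a pure function of the manifest itself.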


I don't follow. pnpm keeps a global content-addressable store of versioned packages (see `pnpm store path`), and node_modules contains symlinks into the virtual store at node_modules/.pnpm. Dependencies are defined in package.json; transitive dependencies are versioned and SHA512-hashed in pnpm-lock.yaml.

E.g.

  $ ls -l ./node_modules/better-sqlite3
  ... node_modules/better-sqlite3 -> .pnpm/better-sqlite3@12.4.1/node_modules/better-sqlite3


You still need those symlinks checked. For example, you switch to a branch with an updated package.json; now you either need to check the symlinks or you risk an incorrect build.

Introducing a directory that must stay in sync with the dependency manifest will always lead to such problems. It is good that Python developers do not want to repeat that mistake.


Just run `pnpm install` after switching the branch. Or add `pnpm install` to the build step - many build tools will do that automatically. If the deps are in sync with the manifest, it typically takes less than a second.
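
E.g., with a placeholder branch name:

  $ git switch feature-branch
  $ pnpm install    # fast no-op when node_modules already matches pnpm-lock.yaml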

This is a problem I've never encountered in practice. And it's not as if you can skip updating dependencies in Python when they differ per branch.


The thing is, you don't need to do this in a "normal" system where dependencies are stored in a local cache and the build tool uses them directly during its tasks.

If JS had a proper build tool, creating such a directory inside the project would not be needed. Instead, the JS world has only a dependency downloader with a crude script runner bolted on.


Flash performance is still better than the current web stack's, and probably always will be: you could write non-trivial games that ran on a machine with 128 MB of memory, while today a single browser tab with a simple page can take more than that.


Nobody is planning for a magical instant 1-to-1 switch to EVs. It will happen gradually.

Most of the world is playing catch-up with Norway (EVs are about 97% of new-car sales there) - if their grid handles the transition, then it is possible. If it does not, then others will prepare better.


Where is it happening?

Where is buying a car really expensive? The only places that come to mind are dense cities that require a parking space for a car. Nothing to do with EVs.


>Where buying a car is really expensive?

Pretty much worldwide. The cost of new cars has risen several times faster than inflation for at least the past few years.


You don't have to buy a new car. Used ones are dirt cheap (and soon that will include EVs too).


Not sure about the situation where you live, but "dirt cheap" second-hand cars aren't a thing any more.

