
I'm sorry, but this reply doesn't make any sense to me. If applications have to cache something, they should use the file system, which would not affect their resident memory at all. Maybe I'm just old school, but where I come from operating systems handle the memory hierarchy, not applications.


They cache it in memory to avoid latency. Writing to and reading from disk adds latency, and even just crossing the kernel boundary adds some.
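
To make that concrete, here's a toy version of the idea (a sketch in C; the names and the fixed-size table are mine, not from any real browser): a hit is served entirely from user space, while a miss pays for the kernel round trips.

    /* Toy in-process cache: a hit costs a hash and a strcmp,
       all in user space; only a miss crosses into the kernel. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define SLOTS 64
    static struct { char key[256]; char *data; size_t len; } slots[SLOTS];

    static unsigned slot_of(const char *key) {
        unsigned h = 5381;
        while (*key) h = h * 33 + (unsigned char)*key++;
        return h % SLOTS;
    }

    /* Returns cached bytes, loading from disk only on a miss. */
    const char *cache_get(const char *path, size_t *len) {
        unsigned i = slot_of(path);
        if (slots[i].data && strcmp(slots[i].key, path) == 0) {
            *len = slots[i].len;              /* hit: no syscall */
            return slots[i].data;
        }
        int fd = open(path, O_RDONLY);        /* miss: kernel round trips */
        if (fd < 0) return NULL;
        off_t sz = lseek(fd, 0, SEEK_END);
        char *buf = malloc((size_t)sz);
        ssize_t n = pread(fd, buf, (size_t)sz, 0);
        close(fd);
        if (n != sz) { free(buf); return NULL; }
        free(slots[i].data);                  /* evict on collision */
        snprintf(slots[i].key, sizeof slots[i].key, "%s", path);
        slots[i].data = buf;
        slots[i].len = (size_t)sz;
        *len = slots[i].len;
        return buf;
    }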

Modern web browsers have architectures similar to OSes at this point - because they have requirements similar to OSes. I think it's natural that they will take on some of the same responsibilities.


If it's cached to the filesystem, it'll be handled by the kernel's page cache along with the rest of RAM. Maybe browser makers have a good reason for thinking they can do better than the OS; I don't know. But having two systems trying to do the same job with the same resources sounds like a recipe for instability and inefficiency to me.
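
There's even a middle path the kernel offers (a POSIX sketch, not a claim about what any browser actually does): mmap() the file once, and the backing pages live in the kernel's page cache, shared and reclaimable under memory pressure, while every access after the first fault is an ordinary memory load.

    /* Kernel-managed caching without a syscall per access:
       map the file once; the page cache owns the memory. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/etc/hosts", O_RDONLY);   /* any file will do */
        if (fd < 0) return 1;
        struct stat st;
        if (fstat(fd, &st) < 0 || st.st_size == 0) return 1;
        const char *p = mmap(NULL, (size_t)st.st_size, PROT_READ,
                             MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) return 1;
        /* Reads are plain loads; the kernel can drop these
           pages whenever it needs the RAM back. */
        fwrite(p, 1, (size_t)st.st_size, stdout);
        munmap((void *)p, (size_t)st.st_size);
        close(fd);
        return 0;
    }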


If you have to cross the kernel boundary every time you want to access something in your cache, your "cache" is now much, much slower. Note that this applies even if the OS keeps the file in memory, and doesn't require going to disk.
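
The gap is easy to measure with a crude benchmark (a rough sketch, POSIX; the file path and loop count are arbitrary choices of mine, and real numbers vary by machine). Both loops copy a page that is already in RAM; only one crosses into the kernel each time.

    /* Rough timing sketch, not a rigorous benchmark: pread() of a
       page-cached file versus memcpy() from a user-space buffer. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    static double now(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        enum { N = 200000, SZ = 4096 };
        static char src[SZ], dst[SZ];
        unsigned sink = 0;
        int fd = open("/etc/hosts", O_RDONLY);   /* any small file */
        if (fd < 0) return 1;
        if (pread(fd, src, SZ, 0) < 0) return 1; /* warm the page cache */

        double t0 = now();
        for (int i = 0; i < N; i++) {
            ssize_t n = pread(fd, dst, SZ, 0);   /* kernel crossing */
            sink += (unsigned)n;
        }
        double t1 = now();
        for (int i = 0; i < N; i++) {
            memcpy(dst, src, SZ);                /* pure user space */
            sink += (unsigned char)dst[i & (SZ - 1)];
        }
        double t2 = now();

        printf("pread:  %.0f ns/op\n", (t1 - t0) / N * 1e9);
        printf("memcpy: %.0f ns/op\n", (t2 - t1) / N * 1e9);
        printf("(checksum %u)\n", sink);  /* keep loops from being elided */
        close(fd);
        return 0;
    }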


Sure, but if accepting that cache slowdown makes the rest of the system more responsive and more useful for background tasks, it may be worthwhile. It's a trade-off, and the browser makers have every incentive to be as selfish as possible.


Where I come from, what you're describing is called pre-fetching, not caching. That's why your earlier comment confused me.


No, I'm talking about caching: keeping data around that you have previously used, assuming you will use it again. However, in order to support caching, you must pre-allocate memory.
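
The distinction in miniature (a toy sketch, all names mine): a prefetcher loads before the first request, on a guess; a cache holds on to what the first request already loaded.

    /* Toy illustration (all names mine). The cache's memory is
       reserved up front, whether or not it ever gets reused. */
    #include <stdio.h>
    #include <string.h>

    static char cache_key[64], cache_val[64];   /* pre-allocated */

    /* Stands in for an expensive disk or network fetch. */
    static void load(const char *key, char *out) {
        printf("loading %s (slow path)\n", key);
        snprintf(out, 64, "<contents of %s>", key);
    }

    /* Caching: remember what has already been requested. */
    static const char *get(const char *key) {
        if (strcmp(cache_key, key) != 0) {      /* miss */
            load(key, cache_val);
            snprintf(cache_key, sizeof cache_key, "%s", key);
        }
        return cache_val;                       /* hit: no load */
    }

    int main(void) {
        get("logo.png");   /* first use: pays the slow path */
        get("logo.png");   /* reuse: served from memory */
        /* Prefetching would instead call load() before any request
           arrives, gambling that the data will be wanted. */
        return 0;
    }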



