How to build a low-tech website? (2018) (lowtechmagazine.com)
379 points by okasaki on Nov 2, 2021 | 216 comments


This bit really caught my attention:

> In contrast, this website runs on an off-the-grid solar power system with its own energy storage, and will go off-line during longer periods of cloudy weather. Less than 100% reliability is essential for the sustainability of an off-the-grid solar system, because above a certain threshold the fossil fuel energy used for producing and replacing the batteries is higher than the fossil fuel energy saved by the solar panels.

This, in particular, is an amazing take:

> Less than 100% reliability is essential for the sustainability of an off-the-grid solar system

Saying that sacrificing uptime is a good thing for achieving a particular goal is a take so wildly out of line with what most of the industry believes that it's refreshing, because it really makes you think. What would the world look like, if instead of being obsessed with SRE we simply assumed or accepted that there will be downtime, and treated it like something perfectly normal? Maybe a nice message at ingress/load balancer level, telling everyone that the site will be back in a bit.

I have no illusions that any company would want something like that, or that it's even feasible for many industries, e.g. healthcare, though the Latvian "e-health" system is an accidental experiment of this at the cost of around 15 million euros of taxpayer money: https://www-lsm-lv.translate.goog/raksts/zinas/latvija/par-e... .

However, if companies have working hours, why couldn't websites? What a funny thought.

Edit: on an unrelated note, it's wild how different setups can be.

For example:

> The web server uses between 1 and 2.5 watts of power...

While at the same time, i have two servers with 200GEs (essentially consumer hardware) running in my homelab, with a TDP of 35W each, though with the UPS and its inefficiencies, as well as the 4G router, the total energy consumption that i have to deal with is around 100W. Granted, i also use them for backups and as CI nodes, but there's still a difference of 20 - 50 times more power usage between one of my servers and one of theirs. I guess one can definitely talk about the differences between x86 and ARM there as well.


One of my pet-peeves is how common it is for applications these days to assume you always have a reliable Internet connection. The shift to webapps makes the whole system really fragile. All it takes for an app to become unusable for everyone is a failure in just a couple nodes in the system.

I don't use GitHub for work, but it's always amusing to see the cries of people who do, when it goes down from time to time. The ability of many companies around the world to continue operating depends on just one company continuously making the right calls.

In the future, when Ubisoft's servers go down, I won't be able to play Assassin's Creed games anymore, even though the servers don't really provide any value to me as a player.

On Spotify I keep downloading the same songs over and over again. Poisoning the environment more than if I got a CD. These days I try to buy songs to download once and then play in my favourite offline music player.

And of course there's SSH, the "secure shell" application, which doesn't really optimize for shells. Responsiveness of typing things at the shell prompt relies on the quality of the Internet connection and the CPU load on the server machine - a round trip during which the server doesn't really have anything interesting to say.

I'm working on the side on creating a company and those opinions lead me to choose the harder path of desktop applications which don't need Internet connection for anything that doesn't require it by definition. They're more resilient and comfortable to use. I'll probably fail, but I really can't make any other choice at the moment, I need to at least try and see for myself.


When those Ubisoft servers go down for good, the pirated copies of those games will be the ones preserved forever instead of the release ones.


See Driver: San Francisco

One of my favorite games, if not overall favorite. It's impossible to find a new copy now. Any legal copy you find is likely only a CD-key from some reseller site, or a physical disc for PS3/Xbox 360.

It's not available from Ubisoft's store, nor Steam's store (where I purchased my copy).


There are more responsive alternatives to ssh, like mosh https://mosh.org/ (discussed recently https://news.ycombinator.com/item?id=28150287)

And uhm.. eternal terminal? https://eternalterminal.dev/ (discussed in 2019 https://news.ycombinator.com/item?id=21640200)

And, perhaps others? There's this article https://console.dev/articles/ssh-alternatives-for-mobile-low... (edit: I just submitted it to HN https://news.ycombinator.com/item?id=29081008)


> I'm working on the side on creating a company and those opinions lead me to choose the harder path of desktop applications which don't need Internet connection for anything that doesn't require it by definition. They're more resilient and comfortable to use. I'll probably fail, but I really can't make any other choice at the moment, I need to at least try and see for myself.

Very refreshing, I really appreciate that choice. As humans in tech I'm sure it's frustrating that everything requires accounts and constantly online services, since we can imagine the software working perfectly fine without it. Games are especially nefarious with account requirements, but even different work software will fail without an account. Using a modern 3D art authoring pipeline can easily have you logged in to two or three services while you work.


> On Spotify I keep downloading the same songs over and over again. Poisoning the environment more than if I got a CD. These days I try to buy songs to download once and then play in my favourite offline music player.

I buy all of my music through Bandcamp, download the FLAC files to my home server and play those. I prefer to pay an artist directly for their work, or a label who might offer a physical copy or a shirt/patch/etc. I also like the concept of buying an album, because oftentimes the album is a complete work, not a loose collection.

I tried Spotify for a few days and it was a poor experience: the suggestions were terrible, the selection weak, plus they barely pay small artists. No thanks.


Totally agree with your points, but for SSH in particular, mosh (https://mosh.org/) specifically addresses the issue of SSH responsiveness. The client optimistically renders what should happen on the server, and the server confirms when it can.

> I'm working on the side on creating a company and those opinions lead me to choose the harder path of desktop applications which don't need Internet connection for anything that doesn't require it by definition. They're more resilient and comfortable to use. I'll probably fail, but I really can't make any other choice at the moment, I need to at least try and see for myself.

Is that in line with the offline-first movement? I can only cheer you on in this journey, because constant connectivity eats our focus and our planet. I'd love to see what you come up with :)


Although a bit of a tangent: I've been fed up with the past couple of generations of game consoles needing to be online and having no physical media. I've not had a gaming console since the Xbox 360 era because of this. I do not like needing to be always online to play a game, and I do not like how it can be patched and updated on a whim. I want to buy a game, play it how I want, and have the physical copy. I want it to be a finished product when I buy it. Devices and games that require being online (webapps, modern console games, even smartphone apps and games) are something I hope will turn out to be only a fad. I won't support it.

As pertains to lowtechmagazine - Love these guys, love the idea, love the minimalist culture about it. Fully support it!


FWIW, the Playstation 4 and now 5 (disc edition only of course) make very sure that in the vast majority of cases, what comes on the disc is a properly working game able to be played to completion without the console ever seeing the Internet.

This is not really the case on Xbox.


> I don't use GitHub for work, but it's always amusing to see the cries of people who do, when it goes down from time to time. The ability of many companies around the world to continue operating depends on just one company continuously making the right calls.

Your general point is taken, but in this particular case, git is the (or "a") perfect answer to the host not being available. Other remotes could be used to route around the failure.
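
For what it's worth, a minimal sketch of routing around a GitHub outage with a second remote (the hostname, remote and branch names below are placeholders):

    # add a fallback remote on another host (self-hosted, GitLab, a bare repo over SSH, ...)
    git remote add fallback ssh://git@git.example.org/team/project.git

    # keep it in sync so it is ready when the primary host goes down
    git push fallback main

    # in the meantime, colleagues can collaborate against the fallback
    git pull fallback main

This only covers the repository itself, of course - issues and CI hosted on GitHub are still gone during the outage.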


Git - yes, GitHub (issue tracker + CI) - no. That's one of my favourite parts of Git actually - how much you can do offline.

In the companies I've worked for so far, issue tracker and CI have been self-hosted along with the rest of company infra. One fewer point of failure.


The internet was designed to survive any single point of failure. Over the past 20 years, we've seen the internet become massively reliant on a handful of big players. Now when there's an outage, we see thousands of websites go down at the same time.


I tried using the Libby app, my local library system uses it for digital "loans".

You can download your checkout, but the app requires an internet connection. No signal, no audiobook.


"Less than 100% reliability is essential for the sustainability of an off-the-grid solar system"

100% reliability is a terrible, dangerous lie.

People make mistakes. This is well and good when we are talking about entertainment and Instagram.

However, we are now adding fragile Internet-connected code to critical infrastructure: a cashless society means that if your bank is down, you can't get food.

All cycling stands in London need to talk to their server, and when it's down, you lose the ability to get transport.

Same for public transport and Uber - if they fail during a winter night, someone somewhere will freeze to death.

The door intercom in my house needs the internet to place an IP call to your phone. They were digging up my street and cut the cable, and I was stuck in the cold for an hour.

This is going to spread - imagine a failure of the system managing running water, or, god forbid, the sewer?


> Less than 100% reliability is essential

This is actually a take most SREs would / should believe. Every added nine of reliability increases the price exponentially. Finding the correct level of reliability is something most companies should focus on more, because sometimes a single physical machine that could go down once a year for a few hours is perfectly capable of providing all the resources a medium-sized business could need. Proper backups, monitoring and recovery runbooks can even decrease the downtime of such a simple system to minutes, while easily saving you maybe thousands per month.


I was surprised by the difficulty in getting a company to accept a target of “three nines five” (0.9995) at a time when they were growing rapidly and launching new physical and digital products on a rapid and continuous basis. I prevailed, but what I expected would be a five minute conversation took a couple 45 minute discussions (reducing the work uptime of people in those discussions to 0.9993 for the year... :) )

Slowing your young company down in order to turn 0.9995 to 0.9998 is almost always a terrible trade. Even turning 0.995 to 0.999 is hard to justify in most places. (That improvement saves about 35 hours of downtime per year.)
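
For reference, the downtime budgets behind those figures - a quick back-of-the-envelope check (the availability targets are just illustrative):

    # allowed downtime per year for a given availability target
    for a in 0.99 0.995 0.999 0.9995 0.9999; do
      awk -v a="$a" 'BEGIN { printf "%s -> %.1f hours/year\n", a, (1 - a) * 24 * 365 }'
    done
    # 0.99   -> 87.6 hours/year
    # 0.995  -> 43.8 hours/year
    # 0.999  ->  8.8 hours/year  (so 0.995 -> 0.999 saves ~35 hours)
    # 0.9995 ->  4.4 hours/year  (~263 minutes)
    # 0.9999 ->  0.9 hours/year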


Is there a rigorous framework to arrive at those targets? How do you know what you built has 0.9995 uptime, and not just 0.99?


By far the easiest way is to measure it after the fact, but I know that’s not what you’re asking... :)

We did do some "analysis", meaning that we made some underlying guesses and multiplied them together, but the real value is in getting people to think that 1.000 is not the actual goal-line, then tracking and doing RCA on all the outages, bucketing them into categories so you know whether to invest more in diverse networking, software testing, HA for DB servers, failover sites, zero downtime releases, etc.

Many times, you can avoid entire massive projects (“we need to be hosted in 2 geographically diverse data centers for availability” “uh, no we don’t; we have a budget of 262 minutes of downtime per year and that project will save us less than 60 minutes per year on average, using the best case assumption that our own changes to implement it cause no downtime”)


If you're a large corporation one way to get a good idea is by having lots and lots of fire drills around various disaster scenarios and time how long actual service restoration and re-routing takes. For other companies it's just guess work.


Around 2012-2013 I was working on an online education platform. We had a whole web application that would serve video content, collect student answers and analyze the student's progress in real-time(ish) in order to find out the next action for the student - e.g., if the student started to get questions wrong that they were getting right before, we'd take it as a sign of fatigue and would recommend them to take a break. Or if the student was showing that they had mastered a topic, we would jump ahead in the lesson to something else that needed more work.

So we needed a web server, a database, a queue system to run these heuristics and we needed to host/distribute ~100GB worth of content, most of it video.

We were bootstrapping, so I was trying to (1) save as much as possible on operational costs and (2) punt on all the "scaling issues" that would require more of my devops time that would be better spent developing and adding more features. I deployed the whole system on a single server from Hetzner: Django app, Postgresql, Redis for caching and session management, RabbitMQ for celery. All in one machine with 32GB of RAM and a RAID system with enough capacity to hold the data. I think it was costing us less than 50€/month. That is all we needed to (easily) serve ~800 students and the staff who would author new content.

In the end we delivered everything we promised to our first customer, but we were not able to grow our revenue as much as we expected, so by the end of 2013 we just put the whole company on the backburner, got a small maintenance contract with the main customer and went on to find other jobs.

From end-2013 until 2018, I needed only to make sure that our domains and SSL certificates were up-to-date every six months, upgrade django packages in case of security issues and deal with ONE incident (in 2016 IIRC) where a disk failure put the array in degraded mode, which I solved by getting a new server at Hetzner (better specs and cheaper, after all those years), warning the customer that the service would be taken offline for a couple of hours later in the day, rsyncing the content, restoring the database and redeploying the application with the fabric script.
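
For anyone curious what such a move involves, a rough sketch (hostnames, paths and database names here are placeholders, and the final redeploy was done with the existing fabric script):

    # on the old server: dump the database in compressed custom format
    pg_dump -Fc appdb > /tmp/appdb.dump

    # from the new server: copy the uploaded content and the dump across
    rsync -az old-server:/srv/app/media/ /srv/app/media/
    rsync -az old-server:/tmp/appdb.dump /tmp/

    # restore the database, then redeploy the application on top
    createdb appdb
    pg_restore -d appdb /tmp/appdb.dump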

This is one of the projects I am most proud of, given what was accomplished under all the constraints, and it made me realize the difference between a Software Developer and an Engineer. Yet it translates to a very poor entry on a CV. We are too used to asking in interviews what people have done and what technologies they have used, but we rarely ask about the moments when it was best to avoid doing something.


Especially if you consider bare-metal servers. I'm currently paying 45€ for a Ryzen server with 64 GB ECC RAM and 1 TB NVMe storage (RAID 1).

The speed is incredible compared to EC2 or root-server performance from other vendors, even if they have dedicated resources.


The cache misses alone mean the cloud should be cheaper than bare metal. In general, for any cloud service you can buy the equivalent hardware outright for about 3 months of the cloud price.

Why anyone would run their pointer chasing code in a heavy cache eviction environment is beyond me. The code is slow to start with, and then you make sure that none of your data is in the cache. Why you'd pay 10x for slower hardware makes no sense.

What people should be doing is running on bare metal and turning off all the garbage meltdown protections that kill performance. If you're not a cloud provider and you're allowing people to execute arbitrary code on your hardware, you've got much bigger problems than meltdown.


> In general, for any cloud service you can buy the equivalent hardware outright for about 3 months of the cloud price.

If you compare on-demand prices for cloud, sure. Reserved and spot instances change the balance significantly. If you're running a handful of servers, sure, it's a no-brainer. But when you start dealing with any sort of human cost (operations, IT), the savings you get are dwarfed by the human costs, because that's what you're paying for with AWS and Azure. And when you're at mega scale, you're negotiating separate deals anyway.

That's also not considering the value of the combined offerings. On AWS, for example, I can spin up a Kubernetes cluster with rolling updates pushed by GitHub Actions in less time than it took me to write this comment, and it will be usable and modifiable by anyone who has experience with AWS or k8s. The cost savings of running my own infrastructure and managing all the moving pieces are dwarfed by the fact that the service provided is widely used and well known.


> I'm currently paying 45€ for a Ryzen server with 64 GB ECC RAM and 1 TB NVMe storage (RAID 1).

That does sound like a really good deal!

Until now i've only been using VPSes (apart from homelab servers as CI nodes etc.) because they're cheaper for the smaller sizes, but for comparison's sake, the cheapest VPS provider's (that i know of and trust) offering with 64 GB of RAM and 640 GB of storage would cost ~260 euros a month: https://www.time4vps.com/?affid=5294

Well, i guess there are also other VPS providers out there that can nearly match the price, like Contabo, though they do have mixed reviews: https://contabo.com/en/ (personally i just found their UI to be extremely dated and there are setup fees, but otherwise they were decent). Even then they'd cost anywhere from 30 - 90 euros a month.


yeah, low resource VPSs are great value if you don't mind the performance too much.

I was using a Netcup root server with 2 dedicated cores/8gb ram before i switched to my current hetzner baremetal server. It only cost ~7€ per month, so much better value if you don't mind that everything just takes a little longer.

i dont think i'll ever go back though. even using the shell on the baremetal server is so much more responsive vs the vps.

but for what its worth: you can get a VPS with similar resources (16 cores, 64gb ram, 2tb ssd) for 40€ with netcup.


And anything that is static and needs to be up can just be cached at the edge somewhere, which is peanuts really, and means that if your bare metal goes down, you can still keep something up


May I ask where you rent it?


I pay about the same for a server from Hetzner - from the server auctions (https://www.hetzner.com/sb)

- AMD Ryzen 5 3600 6-Core Processor (12 threads)

- 64GB

- 2 x 2TB HDDs


That's cheap.

We really underestimate the costs of running in the cloud.


It's mostly marketing by AWS employees and professionals who built their careers around AWS pricing and complexity.

A great idea, to be honest: a market willing to overpay for servers will probably be able to pay more to you as well.


I guess a lot of the cloud costs are due to not having to really manage anything yourself - you are essentially paying for a team of people to keep the 'server' up and running and make sure that things 'Just Work' (largely, anyway)


No amount of money makes a system 100% reliable.

On small platforms we are still stuck into the 1990's approach of having one reliable system.

We need distributed[1] systems and protocols even in small applications. Easy to use and self-healing.

[1] No, I'm not talking about blockchains


My former employer used to target 99% uptime for non-essential systems. It made a ton of sense: the cost of downtime was often incredibly low, while the cost and complexity of making it four nines was really high.


There's a huge jump in cost and operational style, to go from two nines to three, because it means you have to have 24/7 support coverage or an on call rotation (and good alerting, or else it's for naught) for nights. Two nines just means you need someone to check their messages sometimes, during the day, on weekends. One nine, and you can forget about the weekends, too—and that's actually sorta OK for certain applications.

Three nines also means you can't afford to intentionally take a system down to work on it, or you'll burn all your "oopsie" downtime. That means a ton more work in infrastructure and deployment processes, than two nines.


If you’re not a global enterprise, you just don’t respond off hours.


The Google SRE book, which I think is a reasonable reflection of SRE culture generally, actually mentions this in the very first chapter.


The B&H Photo website observes the Sabbath and won't sell you anything during it.

There are things which are actually life-critical that you want to be up, and then there's everything else. Many people have really strange opinions about downtime and treat it like some kind of immorality.

I don’t really buy the power arguments about extra capacity but that’s a big thing.


In the Netherlands there are (or were) Christian political parties that automatically shut down their websites on Sundays.


For real? Do they request medical services to be up on Sunday, or does the Lord protect them? :-)


I can't speak to the Christians, but even the most orthodox Jews I know support emergency services on the Sabbath -- and they'd absolutely drive someone to the emergency room if it came to that. Doing your best to observe a day of rest doesn't mean you can't choose to do work if it serves a greater good. (Whereas I'm pretty sure people won't die if they have to wait an extra day to order their camera equipment.)


The scenario you mentioned sounds tricky...

https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1553-2712....


That's a great article and spot on.

And it's definitely complicated. Just because work necessary to preserve human life is allowed doesn't mean anything done in a hospital is therefore permitted -- a sizeable fraction of the work in a hospital is about bookkeeping, billing, insurance, scheduling followups, paperwork, etc which can wait a day or two without anyone dying.


In NYC, most hospitals have a special elevator that, on the Sabbath, continually stops on every floor. This is to help orthodox Jews comply with their commitment to not turning electric devices on or off--they just board the elevator at the floor they want and get off when it arrives at their destination, without ever pushing any buttons.


I love this website and the experiment. But your implying that more service downtime is somehow progress of civilization is ridiculous. Civilizational progress needs more reliability, not less.


That's a very technologist perspective. But what about a more philosophical take? Life is not about technology or work. Life is about living. A bit of downtime and slowdown can be very refreshing.


Sure, but by taking your website down at certain times (or allowing the system to take itself down), you force your ideas about slowing down onto others who may or may not have time for that right now. It is actively hindering other people from trying to live the way they want (by reading about solar powered websites, in this case).

It does not really matter for the Low Technology website of course, but if you get into a serious car accident and the surgeon is like "sorry but our solar powered operating theater is down right now, that's just life sometimes" that would be a very unwelcome thing to hear.


There exists a middle ground between everyone gets five nines of reliability (expensive) and everyone only gets three nines of reliability (people die in hospitals).

If we focus the same resources on providing reliability where it is needed most, we could actually improve reliability of outcomes. That is, for the same cost of providing a few hours of battery backup for the entire grid we could provide weeks of battery backup for hospitals and data centers.


Your example is why everyone in this thread has been bending over backwards to emphasize that certain services (like medical ones) should aim for full reliability.

Suppose twitter/reddit/HN shut down for 10 hours a day.


Which 10 hours? Timezone differences alone would shut out large portions of the world. A service which is only up when I'm asleep or at work is not very useful to me.


And 90% of Twitter's problems would be solved if they weren't trying to be maximally "useful" to every single person on earth all the time.

10 hours was just an arbitrary number, but there are plenty of more creative solutions. Shut down every March, or Weekday mornings, or every other hour.


The philosophical take is useless here. This thinking only works in a primitive society, where life is wrapped around a relatively small group of people. Today we have supply chains that have 20+ components. If each had 5% downtime, the entire chain would only work about 35% of the time.


> A bit of downtime and slowdown can be very refreshing.

That's up to the customer to decide. Insofar as we live in a world where customers decide the fate of businesses in a do or die way, I feel the conclusion is obvious. Either the customer shares your philosophy that your downtime is part of an enjoyable rhythm of life, or you die.


Not everything on the internet is for-profit. The low-tech magazine website is a prime example of that.


That's a good point, but note that providing reliability adds production cost as well. Services and businesses with guaranteed high reliability cost significantly more than those with low reliability, and I'd expect that cost to be reflected in their prices.

That's why guaranteed overnight delivery generally costs more than waiting a week or two for your package.

I work at a company that provides business services, and some customers are fine dealing with irregular availability of resources if it means the service cost an order of magnitude less.

It's not common in power yet, but in the future a hospital might pay $1/kWh for high-uptime power whereas someone else might pay $0.01/kWh for excess capacity provided as available to charge their car.


Most websites aren't businesses. How do you vote with your wallet to decide you accept that a website slows down? Even when a website is a business, you pay for the whole service; there is no competition on uptime alone.


"Technologist perspective" - BS, you can only afford to entertain your philosophical takes because of strong reliability of the services around you, from your grocery store being full of goods to electricity at your place, to HN being online.


To me, that sounds like perhaps choosing a lifestyle which has significant reliance on those services, while at the same time not accounting for any possible downtime and expecting everything to work - that's akin to running your application on a single VPS with no backups.

If there are any issues with the grocery store or the technologies in it, then perhaps it's possible to pay in cash. If there are larger problems with the grocery store's supply chain, i can just go outside to my greenhouse and get some fresh veggies. If i don't have a greenhouse, i have a bunch of beans and other canned goods in my cellar, as well as crackers and so on. If i don't have my cellar, i have a bunch of frozen meat in the fridge. If i don't have said meat, i have a rifle and a forest with animals for that. And if i don't have any of that, i can always just go to my neighbours and ask whether they have anything to share (or offer the same sort of help, should they be dealing with the same hypothetical situation, instead of me; things would only break down when everyone would be starving).

If there are electricity interruptions, i have a few UPSes in place. If the UPSes run out of power, then i have a diesel generator in the cellar (with the appropriate ventilation system in place). If i don't have any diesel, my house has a stove that burns wood and has central heating with radiators, as well as candles or a petroleum lamp, or even a flashlight with a generator for lights. And if HN isn't online, i'll just read a book, since there is a shelf with some lovely literature to be enjoyed, or will simply write something in my diary, as people do.

I'm saying this not to discredit the fact that many people choose to live in the larger cities, since there are definitely different advantages to that choice, however at the same time you have to consider the overall trends in society, when you take things as electricity or a stable food supply for granted, which may not always be the case.

That said, there are relatively few things that are truly time sensitive, such as having a supply of important life sustaining medication. I wouldn't expect my grocery store to have a 99.99% uptime, nor would i expect that even out of my electricity supply, hence the UPSes and generator, and HN going down wouldn't be the end of the world either.

And when we are talking about the greater supply chains that we don't see in our daily lives, honestly there always should be a business continuity plan in place - if your computer system goes down because of Y2K, a solar flare or the data center burning down, then you should be able to keep on trucking with calculators and some paper. Anything less is not all that responsible, if we're really talking about important domains.


> Civilizational progress needs more reliability

Not necessarily. Similar to software, each nine costs (in money, pollution, effort, etc) at least an order of magnitude more than the last, and at some point it's more efficient to focus on resilience elsewhere in the system.

Wind and solar are great but they're far too irregular to rely on so in places without hydro storage you're often left with either chemical batteries, which would be prohibitively expensive to support the entire grid for a few days, or fossil fuel peaker plants, which will ultimately result in coastal flooding.

One alternative has been to use demand response to pay people to cut back and dynamic (wholesale) prices to encourage people to conserve when supplies are low. Perhaps at some point electricity providers could even institute different prices for different levels of service. That is, one would pay $1/kWh for any capacity that requires less than an hour of downtime per year like servers and medical equipment, but $0.01/kWh for excess capacity used for vehicle charging that is irregularly available day-to-day. The "smart grid" and meters necessary to support this kind of thing don't exist, but may in the future, and adapting to this kind of irregularity would allow us to have a much lower-cost electricity grid.


What I'm implying is that it's interesting to consider an alternative point of view: one where certain non-critical (at least to the greater society) systems would be treated as such, instead of paging people at 6 AM because a CRUD app is down.

Reliability often comes at the expense of making some people's lives much less pleasant (especially in situations of underfunding or not having the corporation back any of your SRE initiatives), or, alternatively, with exponentially increasing costs, since all of the sudden everything needs to be high availability and have redundancy in place.

Consider the recent Roblox outage. No one died because of it, yet countless engineers were working around the clock to ensure the profitable operation of the platform.

When the Latvian Electronic Tax Declaration system goes down on the first day of the year when you can declare your taxes (basically every year), the ministry representatives shrug their shoulders and essentially say: "Nah, we can't make it cope with the load. Just wait a day." And you know, maybe they can't, but at the same time even so everyone still manages to submit their taxes eventually.

Thus, perhaps it's worth it to consider when and where we need the scalability and high availability, e.g. government centralized auth services under Latvija.lv which also went down because of the load and as a consequence people had to wait longer to sign up for COVID vaccines and couldn't use other systems until it was resolved: https://blog.kronis.dev/articles/manavakcina-lv-and-apturico...

Then, we should focus on those more important cases. Now, I'm not trying to suggest that the incompetence or dismissive attitudes are (always) okay, instead, I'm merely suggesting that we should consider our priorities and wonder what a more laid back environment would look like.

I still recall once sending a hotfix for a prod issue at 3 AM. I wonder what things would look like if instead of worrying whether a certain business process can take place at a certain date, the attitude would have been more like: "We'll just tell everyone to wait a day or two. It's not like our product is time sensitive."


You're talking about situations of what is essentially a de jure monopoly on a critical need. People sign up for COVID vaccines and file their taxes to avoid being screwed.

In places with competition the customer is the one who decides whether you are screwed. Is downtime from social media or news rather trivial? The business deadliness of downtime is different from the societal deadliness of downtime, but what significance this takes on is up to the customer to say, and the customer develops expectations based on the competition.


I am certain that people have missed filing their taxes because of the downtime. They scheduled time for it, got frustrated that it didn't work, and then forgot about it. Then realized too late that they didn't file their tax return. It almost happened to me because a bank had a scheduled downtime when I was doing my taxes and I couldn't finish it at the time. I only remembered a few hours before the real deadline...

Also, I got a vaccine almost a month later because our vaccine sign up page also had technical difficulties. I sat around in queue for 90 minutes to do an online sign up and all the times were already gone. The next time I tried and saw the queue I just closed the browser.

Also, these are things that you can go to prison for if you mismanage it. People are much more likely to put up with these screw ups, because they have to.


This. Also because if your energy source is solar, who cares if you're using a bit more power to provide extra reliability?

Heck, even just charging a battery with the extra solar power would provide better reliability.

I totally agree on saving power for users, given that's an energy cost you pay N times - albeit I'd prefer some better defaults (in terms of colours and fonts).

I guess we need a better, lightweight browser.


> Less than 100% reliability is essential for the sustainability of an off-the-grid solar system, because above a certain threshold the fossil fuel energy used for producing and replacing the batteries is higher than the fossil fuel energy saved by the solar panels.

I wonder if whoever wrote this has actually done the calculations. These days it's not really difficult to provide fairly reliable solar power by overgenerating while still beating fossil fuels - perhaps not beating them as much as with zero overgeneration, but beating them anyway. So I'd take the quoted text with a considerably large grain of salt.


> However, if companies have working hours, why couldn't websites? What a funny thought.

Our banks have those, and it's a really shitty experience.

Yeah, sure, the e-bank works, you pay, but if the person receiving the money is at another bank that my bank has no extra contract with, I have to wait until the next workday for the funds to be transferred. If I want to buy something with a bank transfer on a Friday night, the funds won't get transferred until Monday morning.


You are in the US? In Norway it typically happens within the same day but my bank (SBanken.no) also offers instant transfers to other banks even at the weekend.

If both sender and recipient use a mobile phone service like Vipps then the recipient also receives the money immediately (within seconds) regardless of whether they use the same bank. I think similar services exist in other countries too.


Spaniard here. Yes, we have something similar to that.


> Saying that sacrificing uptime is a good thing for achieving a particular goal is a take so wildly out of line with what most of the industry believes that it's refreshing, because it really makes you think. What would the world look like, if instead of being obsessed with SRE we simply assumed or accepted that there will be downtime, and treated it like something perfectly normal? Maybe a nice message at ingress/load balancer level, telling everyone that the site will be back in a bit.

The difference is that here, the downtime saves energy. "Normal" downtime is unintentional. Servers are still running, just in an error state. They may be constantly attempting to restart themselves at some level (looking at you, k8s). Users are still trying to hit them and their requests may be partially handled. Traffic may be pushed over to newly spun up instances.

We can accept that things happen and running large internet services is hard, and SREs and developers everywhere will rejoice, but it won't save energy.


> However, if companies have working hours, why couldn't websites? What a funny thought.

Because of the day/night cycle, these working hours would usually overlap with people's own work hours. This would put people who can't afford to do their administrative business on the internet during work hours at a disadvantage.


> Saying that sacrificing uptime is a good thing for achieving a particular goal is a take so wildly out of line with what most of the industry believes that it's refreshing, because it really makes you think. What would the world look like, if instead of being obsessed with SRE we simply assumed or accepted that there will be downtime, and treated it like something perfectly normal?

My bank's website has been down for maintenance for 24h this week-end. And French government website for declaring VAT will be down for a few hours tomorrow.

Many people still consider downtime as normal. It's not trendy to talk about it, but downtime isn't such a big deal. (Going down because you cannot handle your peak load is very bad though)


This depends.

If it's a planned downtime that people can plan around (e.g. pay your bill before the weekend), that's bearable.

But if the government wants you to do something (e.g. declare VAT), the page isn't available, they still expect you to do it, and you could get fined if you don't... that's a really shitty thing to happen.


It's a social thing. When you know the service might be down when you need it at the last moment, you just plan and do things quite some time before the deadline so you don't get caught.

In the twentieth century, trains in Spain were completely unreliable: they could leave one hour early, or 3 hours late, yet people would get fired if they couldn't get to work on time. But it still worked: people just didn't run into the train station 5 minutes before departure.


"However, if companies have working hours, why couldn't websites?"

There is a UK Gov website that only works during certain times (I think it's something to do with DVLA) - although I assume that is a technical thing, rather than it being powered by solar!


Companies House used to have this. You could only register a new company 9am-5pm, Monday to Friday (and public holidays were also excluded).


> However, if companies have working hours, why couldn't websites?

B&H Photo famously goes offline every Shabbat and I think it is great. It goes to show that you can have a sustainable and profitable business without sacrificing your principles.


I just don't think it is a real issue.

It is much easier to run it on renewable sources, as your DC doesn't move, and the social benefit per unit of CO2 is huge.

Imagine giving up all the mass CO2 producers - the transport of people and goods alone - but still being connected to everyone, sharing knowledge and learning, etc.

I would like to see addictive things like Reddit having office hours, but being able to shop at night for something I need (like a new pair of pants after the old ones ripped) saves tons of CO2.


OTOH with global replication, it should be possible to keep closer to 100% uptime running on solar. We would need continued cloudy weather globally to bring every solar powered server down at the same time.

Global replication could also give us more edge computing, which puts less load on global infrastructure (assuming good weather to use the nearest server), which in turn should be able to reduce resources consumption a bit more.



That's bad - it depends on web technologies.

Check out NNCP:

https://nncp.mirrors.quux.org


I don't necessarily disagree with using web tech, but I do think store-and-forward is a lot more resilient than an always-pull model like the web. I mean, that's what a CDN is essentially: content is stored and forwarded to other CDN instances so that a user can consume it.


Yes, exactly like that. That website may look a bit too hobbyist, but that setup could definitely be used for other websites as well.


Yup, assumptions of less than 100% connectivity and designing for that would really change the world.

While I know everyone likes to hate on Lotus Notes here, one of its key architecture features was that it assumed a seldom-connected and replicated data model - the clients and servers would contact each other at intervals to get updates, and otherwise everything was local.

Decades later, I still miss this functionality a lot - just force an update/replicate before you go somewhere (either office-home or a trip across the world), and everything is the same. No worries about connectivity quality; if it was choppy, just give it more time to handle the retries/ecc, and get the good updates. (Of course, many system managers/admins/devs didn't really consider that much of a feature and treated it like any other network app, so they didn't take advantage of it, but those who understood its power...)

I really wish others would think in such a seldom-connected model, as the network is becoming increasingly brittle, and working in that model was such a joy.


>I really wish others would think in such a seldom-connected model, as the network is becoming increasingly brittle, and working in that model was such a joy.

https://nncp.mirrors.quux.org

Also, the old NNTP and email lists are still alive.

I use SLRN in batch mode on Usenet and tildex (Unix-based micronetworks), and it rocks. In batch mode, I pull down all the comments/articles and push up the answers and articles I've written. No need to be online 24/7.

For email, i use Getmail against an IMAP server.

Thus, I can fetch all the data offline at once with a 'do_the_internet.sh' script in a breeze.
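
For the curious, such a script doesn't need to be much more than this sketch (the programs are the ones mentioned above; exact flags depend on your local configuration):

    #!/bin/sh
    # do_the_internet.sh - one batch run, then back offline

    # pull new articles for offline reading (and, depending on setup,
    # post the replies queued while offline)
    slrnpull

    # fetch mail from the IMAP server into local mailboxes
    getmail

    # add RSS feeds, git remotes, etc. here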

I would love to use HN over NNTP.


> I would love to use HN over NNTP.

How would you represent upvotes and ranking over NNTP, or would you just drop that for comments altogether?

(I'm asking because I've been thinking of implementing something exactly like this, with NNCP also maybe, but the ranking is what I'm not sure on.)

> tildex (Unix based micronetworks)

Ooh what are these tildexes?


NNTP has scoring.

>Ooh what are these tildexes?

https://tildeverse.org/


FWIW the article is from 2018, here's the followup from 2020: https://solar.lowtechmagazine.com/2020/01/how-sustainable-is...

Notably wrt uptime:

> Uptime

> The solar powered website goes off-line when the weather is bad – but how often does that happen? For a period of about one year (351 days, from 12 December 2018 to 28 November 2019), we achieved an uptime of 95.26%. This means that we were off-line due to bad weather for 399 hours.

> If we ignore the last two months, our uptime was 98.2%, with a downtime of only 152 hours. Uptime plummeted to 80% during the last two months, when a software upgrade increased the energy use of the server. This knocked the website off-line for at least a few hours every night.

And while they found inefficiencies in their energy conversion system,

> One kilowatt-hour of solar generated electricity can thus serve almost 50,000 unique visitors, and one watt-hour of electricity can serve roughly 50 unique visitors.


> However, if companies have working hours, why couldn't websites? What a funny thought.

You may be surprised to hear that some websites _do_! I was trying to search for the page but can't find it at the moment, but I've experienced this with a Dept. of Homeland Security page that would only be available during certain hours. I believe it was due to some batch-processing related task but can't recall exactly now.


In Japan many websites still go offline every night "for maintenance" for most of the night.


Any idea as to why? I mean, it's a great time for maintenance, but every night seems like a bit much.


So the load balancers would still need 100% reliability? If they go down, your browser can't connect to anything, which means the "nice message" can't even be served.

I think we'd have to have a pretty different internet in this hypothetical world.


That's how most of the internet operated in the 2000s, before SRE & DevOps really gained traction (the practices were already there somehow, but not pushed to that extent for web sites/apps).

And we were not much worse off than today (I'd say even better off, but that may be only loosely related to 100% reliability - although FOMO also has a taste of it).


> if companies have working hours, why couldn't websites

B&H Photo is offline on Saturdays to keep the Sabbath.


Indeed I’m now curious what would happen if e-commerce shops had business hours?


Most of my dumb eBay purchases are made when I should be sleeping. Hmm...


Whoever has the best uptime wins.


> Less than 100% reliability is essential for the sustainability

Less than 100% reliability is going to be mandatory as part of the degrowth we'll need if we want to seriously tackle climate change. Less reliability, all the way towards offline-first, is the most sensible way to go


Actually, you can achieve 100% reliability with any kind or extent of downtime.

As long as downtime is expected and bounded, and as long as the system is actually up during the planned uptime periods, you're at 100% reliability (but not 100% uptime).


I think you're being pedantic over the meaning of words; true, a website that announces being up 1 minute per year and is up 1 minute per year is 100% reliable. But in common discussions, reliable really means "up and running".

Now, maybe you're right and we should be using another word, and aim towards less uptime?


Well, maybe that's pedantic, but reliability has a precise meaning, and it means hard money through service level agreements.

So yes, it's worth it to be specific over the difference between being available and being reliable (== meeting expectations, including being unavailable at times).

If I claim a 99.5% uptime SLA guarantee for my webservice and it meets that, it's reliable. However, 0.5% of the time, it may be unavailable _and that's not an issue_.


As they describe being open to ideas and feedback: maybe using static compression could help. The concept is to pre-compress files (have foo.html.gz available next to foo.html) so that the web server does not have to compress on-the-fly. nginx supports it[0] and gzip_static does not appear in their nginx config[1].

It might not make a difference if nginx has an in-RAM cache of the compressed files. Otherwise, it has the potential to be pretty significant, assuming most requests are made using "Accept-Encoding: gzip" and assuming the web server does not prefer other compression algorithms (eg. Brotli).

[0]: https://nginx.org/en/docs/http/ngx_http_gzip_static_module.h...

[1]: https://homebrewserver.club/low-tech-website-howto.html#full...
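
A sketch of what the pre-compression step could look like, assuming the site is a directory of static files (the path is a placeholder; gzip -k keeps the original file):

    # pre-compress text assets next to the originals, keeping the
    # uncompressed versions for clients that don't send Accept-Encoding: gzip
    find /var/www/site -type f \( -name '*.html' -o -name '*.css' \
        -o -name '*.js' -o -name '*.svg' \) -exec gzip -9 -k {} +

    # then, in the nginx config:
    #   gzip_static on;
    # so nginx serves foo.html.gz directly instead of compressing on the fly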


> maybe using static compression could help.

Maybe, but that's not a trivial question to answer: the linked article is 236K, gzipping yields 88K (84 at -9, brotli yields 68), so the questions become (a quick measurement sketch follows the list):

* how much energy does the extra 148K (/ 152 / 168) cost, aka once you've factored in the initial handshake what is the efficiency per byte of the transfer

* how much does decompression cost on the client side

* what is the mix of clients accepting each compression algorithm (and what is the overlap)
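
A rough way to reproduce those numbers for any page (the URL is a placeholder; assumes curl, gzip and brotli are installed):

    URL="https://example.com/some-article/"
    curl -s "$URL" | wc -c                    # uncompressed transfer size
    curl -s "$URL" | gzip -6 | wc -c          # typical on-the-fly gzip level
    curl -s "$URL" | gzip -9 | wc -c          # maximum gzip level
    curl -s "$URL" | brotli -q 11 -c | wc -c  # brotli at maximum quality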


Apache also supports this, but the configuration is a bit tricky. For all my private websites, I have a setup where the Apache config filters requests based on the accepted encoding the client sends. For each file, I have three versions on disk: the raw file, a gzipped version, and a brotli-compressed version. Apache picks and sends the correct one. Not only is the file delivered faster, you can also use a higher compression level (gzip with compression level 9 is too slow to be used on each request).


You can also use Zopfli to get a bit higher compression at the expense of longer compression times: https://github.com/google/zopfli


The amount of energy wasted due to poor caching strategies alone could power small countries. Made far worse by the current trend to maximize code readability and organization by using as many dependencies as humanly possible for even the simplest of tasks.


> Made far worse by the current trend to maximize code readability and organization by using as many dependencies as humanly possible for even the simplest of tasks.

The current trend is to maximize readability in development and then to use a build pipeline to remove as much of that readability as possible and optimize for size in production.

There is only a very loose relationship between what code looks like when you're writing it and what actually pops out of a compiler or bundler to go to production. You could write something close to plain English these days (literate programming with something like Observable), with a ton of dependencies, and still get a reasonably well-optimized website after a bundler does some tree-shaking, dead code removal, transpiling and minification.

It's complicated, sure, but your complaint here is the exact reason why it's complicated. We want easy-to-write and easy-to-understand, human-optimized code for dev, and to use that to deliver an Internet- and browser-optimized bundle to the user. The step in the middle of those is complex.


You paint a very utopian picture, but that's just not how it works in the real world. Throw a dart at a map of both startups and large companies, and you'll find full dependencies included in otherwise small scripts. Sometimes those dependencies will even get included in multiple files, because Google convinced a generation of developers that "above the fold" was the most important thing in the history of the web -- even though Google themselves abandoned the concept almost immediately.

The focus these days is on tooling instead of code, and that has resulted in fragile infrastructures serving massive amounts of code in order to provide relatively little functionality for the end user -- all because we'd rather use some extra cheap electricity than actually learn how to write highly-performant code. I'm really not sure why you view that as a positive.


I don’t get how including dependencies multiple times is related to “above the fold”, could you clarify?


The method that most frameworks and developers use to accomplish any form of "above the fold" reactivity, is to chunk portions of a page, where each of those chunks gets its own renderer, of sorts, and whatever other dependencies that particular section of code might need. It's incredibly easy to either accidentally include large dependencies within each chunk, or repeat a dependency across multiple chunks, or to accept it as a problem to throw money at -- because it's far cheaper to expand infrastructure than to increase hiring, and way more palatable to the financial team when the buck just gets passed on to the consumer. For some reason, it has never really bothered project managers that just "throwing money at it" is entirely antithetical to the point of "above the fold."


It's not about the readability of code on web pages, but the fact that web pages are unnecessarily bloated in the first place.

Even static page generators seem a bit too complicated at times.


I see this argument a lot on HN, but whenever I look at web pages (which is pretty much all day, every day) I don't see very much that could be removed without radically changing the user experience. I understand that some people don't believe reactive single-page apps - device-responsive, with accessibility features, and localized for different languages - are a good idea, but if you do think those things are what you want to deliver, then websites aren't "bloated". They're just big. They do lots of things. The argument is less about the code and more about people railing against what the web is these days. Fair enough if that's the case, but at least make the argument that you don't like modern websites rather than try to hide it behind a claim it's about code optimization.

The only exception to that is adverts. There's a ton of code delivered in websites that isn't necessary, and it's mostly for displaying ads and tracking people so the ads can be optimized for better conversion.


I see a lot that can be removed without substantially altering the user experience. Just compare Hacker News to Reddit, or Nitter to Twitter.


Here's how to make a SPA that isn't shit.

https://instant.page/ https://github.com/turbolinks/turbolinks or turbo/hotwire/stimulus.

They are bloated as fuck. Go to an interview for any react job and they'll ask you about what to do when react fucks up the DOM with all its bullshit. Try making an HTML page that is just HTML and see how many millions of elements you have to have on the page until it starts acting like react. React/Angular/etc are just bloat.

It's like seeing a "modern" iOS app with Dependency injection and asking yourself how often this team swaps out the database on its iOS to make DI necessary. Like what do they have in the back, a Wheel O' Database and spin it every week to find out what DB they should use until next week?


I'm not anywhere on this stack normally, but I had to set up and fix a demo product: an SPA with a login and maybe 4 simple elements. Downloads something like 10 megabytes of code... And that isn't even the backend... I question what is even going on with this one. So much JavaScript for something that doesn't even do anything special.

Makes me glad that my career focus is security not web development...


> Downloads something like 10 megabytes of code.

I don't think many web developers would argue downloading 10MB of JS code is appropriate. The problem is that people who argue that websites are bloated often include images, video, fonts, and more in their numbers, and assets like that take up space. Most web apps are a lot smaller than 10MB. Even a giant one like Github Codespace is 'only' 8MB, and that's a full IDE and git client in your browser.

If you're seeing something that's got 10MB of code in it then the developers have failed (or it's a very complex app).


They won’t argue with it but they’ll certainly ship it.


The UI would have to change, but the experience doesn't need to - news sites should just be tables, not art galleries.


"Art gallery" is overly complimentary - art galleries are generally curated. Whatever the modern web is, it's the opposite of taste.

It kind of makes me want to get a giant ink jet plotter and set it up at an art gallery printing tweets / tiktoks / etc and instead of a catch bin for the prints it just goes directly into a shredder.


"should be" That's a subjective opinion. What if people who read the news want to read it like an art gallery?


That stuff isn't readable. It's mostly garbage. If you don't need to make 20,000 devs work together in a chain gang, why even bother?

Here are the platforms you need if you're not 20,000 strong: https://motherfuckingwebsite.com/

http://programming-motherfucker.com/

https://github.com/brandonhilkert/fucking_shell_scripts

http://vanilla-js.com/


> That stuff isn't readable. It's mostly garbage.

Arguments like this do you no favors. There are hundreds of thousands of developers using React today. They aren't all wrong.

There are legitimate reasons to use it, it works very well, and code written with it is not 'garbage'. It certainly has its flaws too of course. However, while any website that's using React appropriately could be written in vanilla JS instead, after a while the code would start to look like a poor quality React implementation. This is obvious, because what React is doing - abstracting the process of inter-component communication, and giving developers tools to solve common problems within components - is actually very common in app development. Leveraging React rather than reinventing that particular wheel is a good idea.


> Arguments like this do you no favors. […] There are legitimate reasons to use it, it works very well, and code written with it is not 'garbage'.

Did you take a look at the links OP posted? I interpret their comment very much tongue-in-cheek.


You're missing one link: https://bettermotherfuckingwebsite.com

:)


The section on images made me realize how redundant most blog post images are. Usually they are random generic images from Unsplash, Pexels, etc. and only exist to add some color to the page. The specific image is mostly unimportant.

Would it be possible to implement a browser-level image library? Instead of a blogger using a generic Unsplash image, they just select one from the "Firefox Image Library", which is already on the user's computer. This library can be curated and optimized to keep the browser file download manageable.

Considering that images are often 70-80% of the page size, the savings would be significant.


Most of the text on corporate pages is same-ish blah blah; we might as well add that to the Firefox Blurb Library :)


Have fun explaining to everyone why their browser now takes up 20GB.


20GB would be 50,000+ properly compressed images.

I don't think we need that many...a few thousand should be sufficient. That's only a gigabyte or less.


Having properly cached links to a central repo of generic images would achieve almost the same without the download of many unused images on install


And in fact, there already exist websites that can serve generic images matched to keywords! Some of them even use neural networks to generate an image from a text description!


I was able to squeeze some more optimisation out of it:

For example: 600px-A20-OLinuXino-LIME2.png is 34,689 bytes as served by the site.

Running it through PNGOUT, OxiPNG, AdvPNG and PNGCrush (all set to lossless) reduces the filesize by 6.8% down to 32,322 bytes with no visible diff to the image.

I guess you'd need to weigh up the energy cost of compressing the image versus the cost of serving them all ~6% larger every time.
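
If someone wants to automate a (weaker) version of that pass over a whole image folder, here's a minimal sketch using Pillow's lossless optimize flag; it won't match the dedicated tools above, and the paths are placeholders:

~~~
from PIL import Image
from pathlib import Path

# Losslessly re-encode every PNG in a folder with Pillow's optimize flag.
# This only re-runs zlib/filter selection, so the savings are smaller than
# what PNGOUT/OxiPNG/AdvPNG/PNGCrush get, but it needs no external tools.
for path in Path("static/images").glob("*.png"):
    before = path.stat().st_size
    img = Image.open(path)
    img.load()                      # read all pixel data before overwriting
    img.save(path, optimize=True)   # lossless re-encode in place
    after = path.stat().st_size
    print(f"{path.name}: {before} -> {after} bytes")
~~~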


If you just want colour, why not add some clever, but simple SVG graphics?


Definitely an option, but I don't know if most bloggers know the difference between vectors and rasters.

I also think you would need to balance aesthetics and efficiency. No one will use the browser image library if all of the images are just simple graphics. You'd still want some shots of nature, cityscapes, etc.


If the image has indexed colours (which SVG's often have as well, unless you use gradients), using indexed .png's gives you huge savings.

I have an image on my blog (1920x1080) which, as .jpg, uses 272kB, but as indexed .png only 76kB, which is 27% of the size. I'm using ~16 different colors, I believe. And the .jpg in this comparison is already compressed pretty hard (quality level 60 or 75, iirc).

To generate this .png I use Krita, but PS should be able to do so as well. GIMP can do it too, but if you start with 24-bit graphics and need to reduce the colors as a first step, I got bad results with GIMP (may have been me, though). Krita did a better out-of-the-box job for me here.
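
For anyone who doesn't want to fire up Krita or GIMP for this, a rough equivalent is a couple of lines of Pillow (hypothetical filenames; the 16-colour count matches the image described above):

~~~
from PIL import Image
import os

# Quantize a flat-colour image down to a 16-entry adaptive palette and
# save it as an indexed PNG instead of a JPEG.
# (For flat artwork you may also want dither=Image.Dither.NONE on newer Pillow.)
img = Image.open("banner.jpg").convert("RGB")
indexed = img.quantize(colors=16)        # returns a 'P' (palette) mode image
indexed.save("banner_indexed.png", optimize=True)

print(os.path.getsize("banner.jpg"), "->",
      os.path.getsize("banner_indexed.png"), "bytes")
~~~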


I remember back when many Medium articles I read were flooded with animated gifs for emphasis. What a mess.


> These black-and-white images are then coloured according to the pertaining content category via the browser’s native image manipulation capacities.

But then thousands of browsers are going to consume more electricity to render the images. So if we're thinking globally it would be better to do the job once on the server.


Agree. They could just add color palettes to their PNG files. The files could essentially stay the same. See: [1]

Their use of PNG files for photographs does not make sense anyway, though. If they used 1-component JPEG files (greyscale or possibly including a custom color profile to achieve the tinted effect), they could achieve similar file sizes with greatly enhanced image quality.

[1]: https://www.w3.org/TR/2003/REC-PNG-20031110/#4Concepts.Index...
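
And the palette trick is easy to script as well. Here's a minimal Pillow sketch (the tint factors and filenames are made up) where attaching a palette to a greyscale image turns it into an indexed PNG, leaving the pixel data untouched and changing only the colour table:

~~~
from PIL import Image

def tint_palette(r_scale, g_scale, b_scale):
    """Build a 256-entry palette mapping each grey level to a tinted colour."""
    pal = []
    for g in range(256):
        pal.extend([
            min(255, int(g * r_scale)),
            min(255, int(g * g_scale)),
            min(255, int(g * b_scale)),
        ])
    return pal

img = Image.open("dithered_photo.png").convert("L")   # greyscale source
img.putpalette(tint_palette(1.0, 0.8, 0.5))           # 'L' + palette -> 'P' mode
img.save("dithered_photo_tinted.png", optimize=True)
~~~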


> One of the fundamental choices we made was to build a static website.

The corollary of this one is: do not offer the possibility of creating a user account when it adds no value. Analytics that no one cares about, for instance, do not provide value.


I have two "diskless" servers that operate as ephemeral file hosting, like a pastebin. One runs Up1, but in tmpfs. When it boots it copies everything to tmpfs and tells all logging to go to /tmp. I run that one on a t1.micro or equivalent - basically a raspberry pi. The storage is touched on boot and that's it. I never bothered to make the rootfs remount read-only, but that would be fun, too.

The other one is a message queue sort of thing, and I don't want records of the messages (there's way too many), so it's all done in ram too.

Both can be rebooted arbitrarily with a half minute service interruption - and I often reboot them because they run out of 'storage', and I go through and compile "LTS" kernels and patches for the software 2-4 times a year. The nodejs Up1 is so locked down I don't mess with node anymore, though.

If I had faster internet I would host everything on low power computers on-site. It's my favorite way to run systems.


Value to the user or value to the website owner? Analytics do provide value to the website owner.

I assume you mean value for the user, and I would agree that’s a nice goal, but you make out it’s somehow self-evident and aligned with all relevant incentives.


I love the design of this webpage: such a polished look, yet not banal or corporate. It has a strong personality. And its responsiveness is amazing. That's what good design is about. Also, no JS bullshit. What a breath of fresh air.


Hmm, personally i think that there are a few nitpicks that could improve readability, though maybe that's in conflict with the current aesthetic that they're going for.

For starters, it would be nice not to stretch out the images too much, and instead allow them to take up just a portion of the screen, otherwise they feel overwhelming in contrast to the text.

Secondly, maybe the battery charge indicator is a bit distracting (the color difference isn't subtle with the background), though at the same time that's a really cool gimmick!

Here's an example of what i'm thinking of: https://imgur.com/a/zFYhz9p


I detest the colours with the scroll. Awfully distracting.


Interesting: new technology tends to be "faster" or "more powerful", with energy usage as an afterthought. It's refreshing to see a site dedicated to the opposite end of that spectrum.


If you break « technology » into hardware and software, you have a different story.

Hardware has never been so efficient, giving you a lot of computing power for very little energy usage.

What is questionable is what we do with this power. Basically we use it to run our badly written, garbage-collected programs in layers and layers of VMs.


It feels like there have been two trajectories. Hardware has gotten more powerful and more efficient, while software has become less efficient. Not that software doesn't do more in some cases, like higher resolutions and such, but in many others it feels like we are doing the same or less while having much more computing power available.

Just compare what software did 10 or 20 years ago with what it does today, and it seems that we haven't really progressed that much, even though we have massively more memory and computing power available...


I'm totally in this boat. I can't understand how basic things like chatting (Slack, Teams, Discord...) need so much power to do basically the same thing (plus some extras) that MSN or IRC did 20 years ago.

I'm not saying we don't have better software, but please: there wouldn't even be enough RAM to run Slack on the computer I used to chat with hundreds of people.


I don't think that's necessarily true given the rise of mobile computing. We definitely had to optimize for performance, RAM, and energy usage with mobile phones. The M1 for laptops is a direct benefit of those optimizations, where Apple has essentially shown that such power optimizations for battery devices can be useful even for desktop class computing.


That is completely and absolutely false. Newer mobile devices are not getting weaker; they are becoming more powerful and often have processors more powerful than older desktop or laptop processors. These optimizations aren't being made, because newer mobile devices are much more powerful. The new iPhone has 6GB of RAM, proof that the direction is that nobody will care about optimization and will instead create and use more bloated toolkits and CSS, and add more analytics.


I for one can't wait to run the in-browser regression app I've been working on in JS on the M1 Mac I just ordered... a stack that would probably run faster on my old 2015 box if I'd written it in C. Basically proving the OP's point that optimization tends to decrease when you can write easier code and just throw more cycles at it.


My point isn't that optimization tends to decrease as we get more powerful. My point is that we don't simply scale our use cases towards more resource-hungry applications.

Mobile phones prove my point. As processors became more powerful we didn't simply make them bigger and more power hungry; instead we sought out use cases where we could make the devices smaller and more portable, thereby using the performance enhancements to miniaturize computers rather than simply scaling them up.


Yes, what you're saying is absolutely true for hardware. I guess I'm referring to the software that's written to take advantage of each platform. It always tends toward bloat, so no matter how much faster the platform is, pretty soon there are apps in the wild that push its limits. And often, these apps could be written in more efficient ways, but the existence of the extra power justifies coding further and further from low level languages. That's all I meant.


Fundamentally, it comes down to the "human REPL". To a surprising and perhaps unsettling degree, no one really understands what's going on in the computer, we just have to infer what's going on by making changes and observing. So the human being is sitting at his computer making changes and refreshing to see what's changed, and he can only see lag and performance issues that are apparent on his machine. Everything else (not UI) is invisible to him. If the computer did computation instantly, there would be no real way for him to know (much less incentive to know) the performance or purely "mechanical" difference between, say, subtracting 1 by counting down 1 and subtracting 1 but counting up from 0 until the "next" number is equal to the "current" number. Weird concept, huh?

P.S. I've just replied to your very excellent post from 6 days ago.


>> (much less incentive to know)

Just riffing here, but I think a lot of times we do optimize just because it seems like doing good work, polishing things up. If it would feel cleaner, faster, and if we have time, we go back and improve it. Sometimes we even test that concept by running some piece of code millions of times even if, in practice, there is virtually zero speed difference. And I think we probably do that because we want to really know our tools better and more closely than just having the divorced sense that the computer is doing something we don't completely understand, under the hood - shaving off the milliseconds of difference between .map and .foreach and for(let...) or reducing what you need to some arcane series of byte array operations is what does give us the sense of control, that we're not just functionaries in a big REPL. I mean, I do take the time to optimize when I'm approaching a tricky problem as a matter of principle, not as code golf. But the underlying principle holds that, as you said, when lag for a given program approaches zero, so does the incentive to improve it. If I were still writing programs in BASIC on a TRS-80, you can be sure I'd optimize a whole lot more. So when you stick hundreds of such programs together into a framework, and each has its own authors, the collection or platform will tend toward maximizing the available hardware until the very last program, the new one, the one you're trying to write, has to find a way to optimize. And then it will optimize just enough.

I guess you could phrase this as: Software tends toward filling or exceeding hardware capacity over time.


Yes but they're doing it from a political angle. We should be building simpler, snappier, and more resource efficient software because it's the right thing to do and it makes software better.


I haven't read anything political on there. Why is wanting to reduce energy consumption "political"? It's just plain old frugalism mixed with environmental concerns.


Because if you actually wanted to reduce energy consumption, then you'd have actual targets and goals. E.g. I started out at n mW and now I'm at m mW of usage. This article talks about the solar setup they're using now, but not where they started from, and what the energy usage profile of their server is over time. In other words, it's either lazy or greenwashing.


The only JS I need on my blog is Mathjax. Is there a static site generator that can render latex equations into svg's automatically?

On the one hand, I want to get rid of JS. On the other hand, I prefer to keep the plain text equation in my markdowns, rather than generating and linking to an image.

I feel like I'm one long weekend away from building my own SSG to do this.


GladTex [0] converts latex math in html files to SVG images with links. If you prefer to write your source webpage in markdown, you can also use it with pandoc [1] like this (from the manual):

~~~
pandoc -s --gladtex input.md -o myfile.htex
gladtex -d image_dir myfile.htex
# produces myfile.html and images in image_dir
~~~

[0]: https://humenda.github.io/GladTeX/

[1]: https://pandoc.org/MANUAL.html#math-rendering-in-html


> The only JS I need on my blog is Mathjax. Is there a static site generator that can render latex equations into svg's automatically?

KaTeX. https://katex.org/docs/api.html#server-side-rendering-or-ren...

Though not to SVG, see https://github.com/KaTeX/KaTeX/issues/375 tracking the progress.
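
If actual SVG output from a Python build step is the goal in the meantime, matplotlib's mathtext renderer can do a rough job. A minimal sketch, with the caveat (my assumption) that it only handles a limited LaTeX subset, so amsmath-heavy equations won't work:

~~~
from matplotlib import mathtext

# Render a mathtext-subset LaTeX expression straight to an SVG file.
# The expression must be wrapped in $...$ delimiters; the output format
# is inferred from the file extension.
mathtext.math_to_image(r"$e^{i\pi} + 1 = 0$", "euler.svg")
~~~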


Is there a reason why you do not want to use MathML for embedding equations in HTML? If not, it might be a good alternative to SVGs because MathML equations can be read aloud by screen readers. (If the screen reader is good.)

MathJax, which uses MathML internally, claims to improve accessibility also for users of "bad" screenreaders. [1]

Apparently, you can use pandoc to convert markdown files with LaTeX equations to HTML with embedded MathML using "pandoc math.text -s --mathml -o mathMathML.html". [2]

[1] https://docs.mathjax.org/en/latest/basic/accessibility.html#...

[2] https://tex.stackexchange.com/a/352636


Learn groff, pic and eqn under Linux/Unix and use groff to export to HTML5.


The author had me right up until “However, visuals are an important part of Low-tech Magazine’s appeal, and the website would not be the same without them.” And the next image was like, hmmm, that image is smaller and dithered, but was it really needed? If you really cared about the environment you would have just gone without that image and saved 100% of the file size. And then: should I have children? Each one will consume more resources, so might as well have zero. And then: wait, suicide would get rid of me entirely? Where do you draw the line to still live but be good?


>Where do you draw the line to still live but be good?

You draw it at making the image files a bit smaller, if the article's anything to go on. Not every concept needs to be extrapolated out to the point of a suicide pact. Let good enough be, and try for more tomorrow.


Looks like there was an update in 2020 on the experience of running this server: https://solar.lowtechmagazine.com/2020/01/how-sustainable-is....


The claim that the last 5% can only be provided by either higher embodied fossil fuel costs in the batteries OR by burning fossil fuels wouldn’t be true, BTW, if you used fuels which are non-fossil to provide that last 5% of energy. It’s problematic to have a sort of alchemical understanding of energy that it HAS to at some point come from fossil fuels OR you have to significantly reduce capability somehow (in this case, website reliability).

In fact, the US hardly used any fossil fuels (it was virtually all wood or water) until well after the Industrial Revolution (late 1700s to 1820). Even for ironmaking, the US primarily used charcoal (not coal) until around the 1840s, after the first Industrial Revolution, and coal didn’t exceed wood for fuel in the US until the mid 1880s. American industry (and much of Britain’s) was powered by water power, channeled to different buildings, etc. American trains and steamboats ran on biomass. Most US iron used to build these things was made using biomass. (Britain was different because they simply didn’t have as much land and had cut down their trees already.) Sustainably harvested biomass (or electrically synthesized fuels like hydrogen) is a valid source for that last 5%.

Ultimately, there’s a large energy cost to non-reliability, BTW. Any kind of real automation (which mass production—and thus abundance and solar panel production—relies on) relies on its components being highly reliable. Non-automated handcrafted everything means large embodied labor, which means energy usage from human metabolism.


Charcoal may not be fossil, but it still releases CO2 into the atmosphere, right?


Sort of. Charcoal is made from wood; the released carbon was generally extracted from the atmosphere by the tree.

To parent's point, in pre-industrial and early-industrial times, most of this would have come from pre-existing trees cut for the purpose. This will prematurely release that carbon. If it had come from farmed trees, like it might today, then it could be net zero with respect to CO2 over the whole cycle. The "could" is important there, as the process of turning wood into charcoal doesn't need to be carbon neutral (though it probably could be).


It’s worth pointing out that I live in an area where pre-industrial times had all the trees cut down, and it’s full of dense trees now, so all the CO2 released by that charcoal has been recaptured. Even non-sustainably-harvested biomass can regenerate about 6 orders of magnitude faster than fossil fuels do…

Fast growing trees could regenerate in 10-20 years. Crops can regenerate in a few months to a year. (Ignoring regeneration time can screw up return on energy calculations, BTW…)

I think relying primarily on biomass for energy is a bad idea, but for that last few percent, it’s not bad.


I can’t help but say that even WordPress can be served as a "static site" if caching is done right. And maybe the fact that I use an ad-blocker is saving a few watts per week.


I use the LiveCanvas builder, and it does just that, output static files on WP. https://livecanvas.com/


Lose the dithering and use a sane image format.


I find the use of dithering fairly ingenious, even though the result looks like a printed newspaper.

Essentially the goal for him was to achieve the best possible energy efficiency (CPU, bandwidth) and he came up with this idea. I'm pretty sure WebP would be much more CPU intensive.


The problem with dithering is that it is added noise (read: entropy) that needs to be preserved in the image file, which can make the compressed image file bigger. You can actually achieve quite good results by just cranking down the quality on a jpeg. You'd have to go really low quality before it looked as bad as these dithered images. Also, jpeg supports greyscale, although you might find that keeping the colour in there really doesn't add much to the file size at all anyway, especially if you set an appropriate chroma subsampling ratio.
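
For reference, "cranking down the quality" is a couple of lines with Pillow; the quality and subsampling values below are just starting points to experiment with, and the filenames are hypothetical:

~~~
from PIL import Image

img = Image.open("photo.png").convert("RGB")

# Aggressive but usually still acceptable: low quality plus 4:2:0 chroma
# subsampling (subsampling=2), with an optimized entropy-coding pass.
img.save("photo_q35.jpg", quality=35, subsampling=2, optimize=True)

# Greyscale drops the colour channels entirely, as mentioned above.
img.convert("L").save("photo_grey_q35.jpg", quality=35, optimize=True)
~~~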


JPEG looks subjectively better at the same bitrate and is probably hardware-decoded on many platforms.

Unfortunately I lost the link for the comparison.


I disagree: keep it.


What's not sane about dithering? It has been used for a long time in print


The dithered images on the site are approximations of continuous tone images. But dithered images of this sort are inefficient approximations, so the page is actually wasting bytes compared to just using plain old JPEGs with appropriate size/quality settings.


While "needed" to produce "nice" (acceptable) looking 4-level images, the dithering likely breaks the filtering and compression steps of the PNG format, making the image larger...


Would using GIF help? Then a 4-level image would use only a single byte for each pixel, and the entire file can be compressed if necessary.

For example, here is the comparison of the first image on their page (file sizes in bytes), converted to gif with 4-level:

25700 sps_close.gif

43415 sps_close.png

[EDIT] Interestingly enough, converting their image to a 7-level (1 for alpha) gif image gives you double the colors while reducing the image size by 17,715 bytes (~17.2kB)
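
For anyone who wants to reproduce this kind of comparison, here's roughly what I'd script with Pillow (the source filename matches the image above; the rest is a sketch):

~~~
from PIL import Image
import os

src = Image.open("sps_close.png").convert("RGB")

# Reduce to a 4-entry palette, then write both container formats
# so the file sizes can be compared directly.
four = src.quantize(colors=4)
four.save("sps_close_4.gif")
four.save("sps_close_4.png", optimize=True)

for name in ("sps_close_4.gif", "sps_close_4.png"):
    print(name, os.path.getsize(name), "bytes")
~~~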


In the general case you aren't going to beat a lossy compression format specialized for photos (JPEG) with lossless compression (PNG/GIF) of a photo you've added noise to.


What does the general case have to do with it? At the target visual quality I doubt that jpg offers any size saving over a gif.


I'm completely with you on that. WebP should be the smallest, best supported solution right now.


WebP is a terrible practice and it has no real value vs jpg: https://pagepipe.com/dont-use-webp-image-format/ - just because it's endorsed by Google doesn't mean it's good (I would even argue the contrary is true)


> Cloudinary says we can reduce that image’s page weight from a 4.6k PNG to a 1.5k webP image. That saves 3.1k. In big letters, they tell us that is a 32-percent savings.

I'm not a mathematician but...


If we're comparing lossy compression mode, WebP is close to equal to MozJPEG for higher res images. Check the 1500 px comparison: https://siipo.la/blog/is-webp-really-better-than-jpeg


This article seems outdated; WebP is supported in Safari now. Also, the article itself confirms that WebP saves image size, the author just thinks for some reason that it doesn't matter. Well, this page is not so fast to open for me, definitely not 0.5 seconds, more like 5 seconds, so I wouldn't be the one listening to his advice.


> Well, this page is not so fast to open for me, definitely not 0.5 seconds, more like 5 seconds, so I wouldn't be the one listening to his advice.

it's hosted on a Raspberry Pi and is on the front page of HN...


I'm pretty sure the parent poster is talking about the page that says you shouldn't use WebP, not about Low-tech Magazine. The former claims to load in 0.5 seconds, the latter doesn't.


doh! I hate it when i end up viewing threads out of context :/


> This article seems outdated.

Amazingly enough the page seems to be updated very recently ("November 2021", which was not present back in March [1]).

[1] https://web.archive.org/web/20210305192950/https://pagepipe....


Their argument is that file size doesn't matter because images download in parallel... that's just idiotic. 10 images of 100MB downloading in parallel are still going to take longer than 10 images of 1kB.


This page says that webp produces smaller images…


Just use AVIF with a JPEG fallback...


AVIF is still in process of adoption [1], while WebP is virtually everywhere and you can even ignore fallbacks if you don't support MSIE [2].

Also while AVIF supports lossless images, its lossless compression is known to be weak and can be even inferior to PNG, while the WebP lossless mode is a dedicated algorithm and almost guaranteed to be better than PNG. This is a reason why JPEG XL can be better than AVIF in the long term, where AVIF is just another intra-frame format extracted from video codecs while JPEG XL is explicitly built to support both use cases.

[1] https://caniuse.com/avif

[2] https://caniuse.com/webp
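
That lossless claim is easy to sanity-check locally if your Pillow build has WebP support; a minimal sketch with hypothetical filenames:

~~~
from PIL import Image
import os

img = Image.open("test_image.png").convert("RGB")

img.save("test.png", optimize=True)
# lossless=True selects WebP's dedicated lossless mode; method=6 is the
# slowest, most thorough encoder setting.
img.save("test.webp", lossless=True, quality=100, method=6)

for name in ("test.png", "test.webp"):
    print(name, os.path.getsize(name), "bytes")
~~~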


Could probably host 10,000+ static websites on a Mac M1: low power usage and a built-in UPS...


What does low tech mean in this context? It's still a computer, with complications on top: solar panels, a battery.

All the hardware they use requires some energy to be built, and the network consumes a lot of energy too.

How does this compare to a VPS at a cloud provider, or to using grid electricity (no solar panel)?

I'd expect individual VMs to use less energy compared to a dedicated computer.

Is it really greener to stop using grid electricity and instead buy a small solar panel? Surely that'll be less green than the grid 20 years in the future.

The article contains many sources but doesn't compare other solutions. 2.5 watts is impressive, though.


Low tech in this case means off grid.

Imo low tech does not have to mean low availability; a low-tech website to me is a static website hosted on object storage with some cloud-based CDN in front.


Yeah, but how green are some of these CDNs?


Well then it's a green website. The low-tech part in the title is misleading because it can mean so many things to different people. In this case it's actually green, off grid but complex and relatively high tech for the setup process.


Pretty green. An advantage of running very large infra is that it makes sense to spend engineering time optimizing energy use, even in a fully profit-driven sense.


> All resources loaded, including typefaces and logos, are an additional request to the server, requiring storage space and energy use. Therefore, our new website does not load a custom typeface and removes the font-family declaration, meaning that visitors will see the default typeface of their browser.

This is just nonsensical. Inline styles require no extra request, and you can still use the fonts that 99.9999% of users are guaranteed to have.


While reading this article I have started a total of 29 CI/CD workflows, some of which are expected to run for 30 minutes, each probably consuming at least 200 watts (assuming some kind of Xeon machine) for the duration. That's an estimated 1700Wh (19 runs * 10 minutes + 10 runs * 30 minutes), or enough to run this website for between 28 and 71 days. Not sure what to think about that.


> Not sure what to think about that.

That right now all aspects of our lives are wasteful and that society won't or can't change fast enough.


One past thread and a few fragments:

How to Build a Low-Tech Website? - https://news.ycombinator.com/item?id=18293409 - Oct 2018 (1 comment)

How to Build a Low-Tech Website - https://news.ycombinator.com/item?id=18075143 - Sept 2018 (183 comments)

How to build a (solar-powered) low-tech website - https://news.ycombinator.com/item?id=18073850 - Sept 2018 (1 comment)

How to Build a Low-Tech Website - https://news.ycombinator.com/item?id=18057331 - Sept 2018 (1 comment)


I wonder what percent of total energy use of viewing the page around the world is from the web server and what from the routers and network infrastructure and rendering on the browser?

This is an interesting experiment, but I struggle to see how a 'low-tech' under utilised website would consume less energy in its lifetime (including manufacture, delivery etc) per page view than a high performance, globally cached (CloudFront/Flare etc), highly utilised shared web service in a data centre.


Downtime can be avoided by using a UPS which can be charged when solar energy is available.

Another alternative to the text-only logo would have been an SVG logo, which can be done in less than 1.5 KB.


They do have a battery, of course they have a battery. But they want to keep it sustainable, which a large battery is arguably not: https://solar.lowtechmagazine.com/2020/01/how-sustainable-is...


I just wonder how much energy does it take to serve content over TLS/SSL. Maybe that's another factor to consider when building a "low tech" website.


This feels a bit like counting the cups of coffee you drink when calculating your carbon footprint, or a bit like code golf aiming for 1KB of executable size.

It makes more sense to take the 80/20 approach: 20% of the work to get 80% of the results sustainability-wise, and then move on to the next thing.


You could also take advantage of economy of scale and host your static web site on a CDN - together with thousands of other web sites. And pick a CDN that runs entirely on "green" energy.


As cool as running the website on solar power is, the thing I really like about this website is its simple design, lack of javascript, fast page load, snappiness etc.


Function follows constraints.


The images are still huge for the quality. There's a lot of gains to be had with proper compression here.


Would be fascinating to see a similar tips & tricks breakdown for minimum-viable 3D content.


Seems like a perfect use case for IPFS too. Imagine solar-powered IPFS nodes, each serving some website and also pinning content for other nodes. But it's probably hard to get an IPFS node down to 2 watts...


> We were told that the Internet would “dematerialise” society and decrease energy use. Contrary to this projection, it has become a large and rapidly growing consumer of energy itself.

The writer lost my trust with that logical error.


In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes Jevons' effect) occurs when technological progress or government policy increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the rate of consumption of that resource rises due to increasing demand.[1] The Jevons paradox is perhaps the most widely known paradox in environmental economics.[2] However, governments and environmentalists generally assume that efficiency gains will lower resource consumption, ignoring the possibility of the paradox arising.[3]

https://en.wikipedia.org/wiki/Jevons_paradox



