Hacker News | hnlmorg's comments

Yeah, I do see what you mean.

I wonder if the unspoken “paradigm shift” is that the distribution was vibe coded.

There are a lot of contradictions on the landing page that would easily be explained by either kids writing it or someone vibe coding the site.

Such as their claim that updates are a “single iso”, and also their claim about a single App Store, and they then go on to discuss flatpak and homebrew package management.

Or their claim to have redesigned the desktop from the ground up, while boasting they run KDE/Plasma.

And there are also the claims that it brings something totally new, while then going on to describe core Linux features.

Also, the scripts running “non-intrusively”: that’s just what you’d expect any seasoned admin to do. This isn’t a headline feature unless you’re new to the game.

Good luck to the guys. I hope they enjoy the exercise. But this is definitely a hobby project cosplaying as a serious distro.


I'm not sure where some of these "contradictions" come from, as, e.g., I can't find anything about them having "redesigned the desktop" on the page with those keywords. But for the rest, I don't see how they are contradictory - at least if you've spent a few seconds trying to understand them.

> Such as their claim that updates are a “single iso”

Updates literally are a "single image" (didn't see "iso" mentioned). Where is the contradiction?

> and also their claim about a single App Store, and they then go on to discuss flatpak and homebrew package management.

There literally is a single app store. Homebrew is not used to install apps, only for CLI tools. Flatpak is the single app store which users use to install apps (through Bazaar). Where is the contradiction?

> And there are also the claims that it brings something totally new, while then going on to describe core Linux features.

Can you explain what exactly you're referring to?

> Also, the scripts running “non-intrusively”: that’s just what you’d expect any seasoned admin to do. This isn’t a headline feature unless you’re new to the game.

This distribution isn't targeted at "seasoned admins", so why wouldn't they mention something relevant to their target group? No contradiction here.


> on the page with those keywords

Yeah, I was typing from memory on my phone, so the citations aren’t going to be verbatim.

> Updates literally are a "single image" (didn't see "iso" mentioned). Where is the contradiction?

Because that’s not how homebrew works. And you can’t have a single image if you’re expecting people to install apps via their multiple different endorsed delivery mechanisms.

> There literally is a single app store. Homebrew is not used to install apps, only for CLI tools. Flatpak is the single app store which users use to install apps (through Bazaar). Where is the contradiction?

Because an App Store is ostensibly just a package manager. I get they’re making a distinction between desktop apps and CLI (homebrew does GUI apps too by the way), but when their emphasis is on “easy” and “one way to do things”, having two different ways to install apps contradicts their mission statement.

If they actually cared about this mission statement AND had half the competence they claim, they’d build a unified UI that supports all use cases rather than expect people to learn those different tools and why it matters that they’re different.

> Can you explain what exactly you're referring to?

“Aurora is a paradigm shift for Linux. To rethink the Linux Desktop experience from the ground up, we built Aurora on new technology and principles.”

Bazaar, Plasma, Homebrew, etc. None of this is unique to this distribution.

They also boast about being able to rollback updates. That isn’t new to Linux either. Though I’m willing to give them the benefit of the doubt that they’ve created a smoother default experience here.

> This distribution isn't targeted at "seasoned admins", so why wouldn't they mention something relevant to their target group? No contradiction here.

I didn’t say they are targeting seasoned admins. I said seasoned admins would take for granted that’s how you’d write that code, so they wouldn’t even consider it something to announce.

The only reason you’d announce it would be because you hadn’t worked in this space before and feel a sense of achievement doing the bloody obvious. (And to be clear, I have zero issue with people having projects like these to learn new skills)

Also, I clearly didn’t say “literally everything was a contradiction.”

I am interested in who you think this is targeting, because they do specifically say this is for developers (amongst other people). And the reason they give (VSCode) is a pretty noob argument: if you can’t figure out how to install an IDE then you’re clearly not tech savvy enough to be a developer.


The updates being a single image has nothing to do with Homebrew. The OS is a single image that gets updated, and it's 100% the same image that every user will get daily or weekly (depending on what branch/stream you are on).

Homebrew and Flatpaks don't pollute the base image.


I get that. But my point is if you’ve got 100+ bits of software installed via homebrew and flatpak, then it’s a bit of a stretch to say updates are a single image.

I’m sure there is a reason for their design, but the messaging is all over the place. They boast about things that you should expect to happen (like testing packages before releasing - even bleeding edge distros do this) and throw superlatives around with little substance to back them up, while citing pretty run-of-the-mill choices like KDE and VSCode. It leaves an overall impression that the people behind it can’t be taken too seriously.

If that’s unfair then I’m sorry. But it’s their job to convince me that I should trust them with something as important as an OS. It’s not my job to give them the benefit of the doubt.

If that distro is even just half as good as it claims, then they need to seriously redesign the entire landing page to be more focused on what those gains are. And I say this as someone who's run several open source projects myself and has immense difficulty designing landing pages for them. I know it's a hard thing to get right. In fact I think it's actually harder than creating a new distro.


> Because that’s not how homebrew works. And you can’t have a single image if you’re expecting people to install apps via their multiple different endorsed delivery mechanisms.

As the other poster said, Homebrew has nothing to do with this. Please read up on how the technology works before declaring this a contradiction.

> Because an App Store is ostensibly just a package manager. I get they’re making a distinction between desktop apps and CLI (homebrew does GUI apps too by the way), but when their emphasis is on “easy” and “one way to do things”, having two different ways to install apps contradicts their mission statement.

You don't install the same things using Homebrew and Flatpak. You install apps through Flatpak, and non-apps through Homebrew etc. There aren't two ways to install apps.

Are you referring to "casks" when talking about GUI apps through Homebrew? Is that even supported on Linux?

> If they actually cared about this mission statement AND had half the competence they claim, they’d build a unified UI that supports all use cases rather than expect people to learn those different tools and why it matters that they’re different.

No, you're just arbitrarily asking for them to make changes based on your misunderstandings of the use cases of each tool.

> The only reason you’d announce it would be because you hadn’t worked in this space before and feel a sense of achievement doing the bloody obvious. (And to be clear, I have zero issue with people having projects like these to learn new skills)

No, that's not the only reason, but you're looking at the project with an extremely narrow lens while not spending any time actually looking into the technology and project, so I can understand that it's the only reason you see.

> I am interested in who you think this is targeting, because they do specifically say this is for developers (amongst other people). And the reason they give (VSCode) is a pretty noob argument: if you can’t figure out how to install an IDE then you’re clearly not tech savvy enough to be a developer.

If you'd spend 5 seconds reading up on the technology, you could easily steelman a better argument.


> You don't install the same things using Homebrew and Flatpak. You install apps through Flatpak, and non-apps through Homebrew etc. There aren't two ways to install apps.

Except from a user perspective there is. You first have to consider what type of app you want, and then search for it using the correct package manager.

As I said, if they had a single UI that managed both flatpak and homebrew, then it would be different. Users shouldn’t need to know which technology was used to download and install a particular package - that's a technical distinction that should be abstracted away by the "App Store".

Now I completely understand why they've taken the approach they have. But they've made a technical decision to fragment the UX while advertising the app store for its simplicity.

> No, you're just arbitrarily asking for them to make changes based on your misunderstandings of the use cases of each tool.

I'm not asking them to make any changes and I definitely do not misunderstand these tools (fun fact: I maintain a few open source projects -- so I'm probably more familiar than most with how brew et al actually work).

I'm simply pointing out how their advertising doesn't gel with the reality of the UX they're providing. It is feedback, not a request nor demand.

But for what it's worth, if they did decide they wanted to look into the possibility of a "single pane of glass" for all app management, then KDE already has a tool that might work here and which already supports pulling from different sources via extensions: Discover (https://apps.kde.org/discover). So it might be worth them taking a look at the viability of using that (again, just feedback, not a request).

> No, that's not the only reason

That’s not a rebuttal. It’s just a contradiction.

> you're looking at the project with an extremely narrow lens

I’m really not. I’m comparing it against my 30 years of professional experience with Linux (and UNIX as a whole) administration and highlighting areas where their docs are coming across as amateurish.

I’m open to being shown that there is more going on than appears, but your replies amount to “you’re wrong” without actually providing any detail as to why.

I run Linux workstations and because I don't get paid for keeping my workstation up to date, I do look for something that's as low-effort to maintain as possible. So it's quite possible I'm the target audience for Aurora. But the project does such a poor job of explaining why I should use this instead of any of the hundreds of other distros.

This isn't me being narrow-minded because, as I said elsewhere, it's their job to convince me that I can trust them with my hardware and my sensitive data. And their site, in its current state, doesn't do a good job of that. It feels like it's being managed by people who don't have a whole lot of experience in this field.

But as I also said elsewhere, I know better than most just how hard it is to get a landing page right for a project as complex as an OS. So I'm being critical from a place of empathy rather than dismissiveness.

> If you'd spend 5 seconds reading up on the technology, you could easily steelman a better argument.

I was asking you a question. There’s no need to be confrontational with me.


In my experience, it’s almost always the right wing parties who harm the working class while supporting their own.

They just do a fabulous job of convincing the working and lower classes that they’re “one of the people” while shifting the blame onto other people (immigrants, disabled, anyone who wants a living wage from their 40+ hour job, etc).


Left and right are just two sides of the same kleptocratic coin, catering to different audiences but ultimately doing the same thing.

Because wealth inequality and housing unaffordability have increased regardless of whether the left or the right wing was in power.

There's no good and bad one here, they're both just cosplaying.


I don’t think either the UK or the US has had a properly “left” party in power. They are just cosplaying, as you say. But that doesn’t mean that left wing parties don’t exist.

What exactly is a "properly left party" that we don't have?

Is that like the meme on how communism isn't bad, because we never had "real communism"?


No. More like central or Western European parties. Or the Greens in the UK. Most left-wing politicians in America would seem right wing in, for example, the Netherlands.

I think bringing communism into the discussion around left wing parties is as daft as saying all republicans or Tories are Nazis.

The problem with the UK and US is we’re so used to right wing policies that anything moderately left is considered “extreme”. There’s no nuance left because people are closed off to it. (And to be fair, many left wing folk don’t help when they call their right wing peers “racists”. There definitely needs to be more tolerance on both sides.)


> No. More like central or Western European parties.

That couldn't be more vague. That's like saying I want a car like the ones in that parking lot over there.

You have no idea that some parties in Poland, Hungary or Romania would make Donald Trump look left wing.

When I asked you what type of left parties you claim are lacking, I expected to hear the exact policies you want but are lacking, not pointing at random parties that not everyone knows.

And we've had enough left wing and green policies in Europe, since they're the ones who championed the "refugees welcome" open borders problem, dependence on Russian gas, and denuclearisation.


> That couldn't be more vague.

That’s why I then gave examples ;)

> You have no idea that some parties in Poland, Hungary or Romania would make Donald Trump look left wing.

You’re now talking about nationalist parties.

But yes, I do agree that there are right wing leaning countries in Central Europe as well.

To be honest, it feels like the whole world is heading that way right now.


This. 100 times this.

Forgive my ignorance here but if you want to write HTML then what do you gain from a static site generator?

Couldn’t you just ‘cat’ your templates together with a shell script?


Starting with clean valid semantic HTML makes it a whole heck of a lot easier to preview in a web browser or editor with a preview feature, and gives you quite a few editing options. Granted, there are now live markdown previews in some editors, so this is less of a concern than it was. However, you can easily toss some CSS in there to make things a little nicer, while the typical markdown preview is going to look like Netscape 2.

As for the templates... those are also HTML. You're just replacing the relevant part of the template's DOM with what you pulled from the source document. Same goes for any boxes on the page you need to stuff with generated content. Your index pages and blog lists are generated from the metadata and other items pulled from the relevant parts of the source documents using the favored HTML processing library of the week.

edit: I think I did a terrible job answering your question in my initial reply.

Ultimately, a static site generator works the way SGML was envisioned to function... you started with a simpler authoring document and passed it through a processing pipeline that generated a richer SGML document that was eventually output to some sort of presentation form. My take is that instead of using YAML and markdown for the source documents you just use semantic HTML, and that templates just use everything that WHATWG has given us with modern HTML instead of that plus a template language.
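
To make that pull-and-replace step concrete, here's a minimal sketch assuming BeautifulSoup as the "favored HTML processing library of the week"; the file names and slot elements (template.html, post.html, the main tag) are invented for illustration, not any particular generator's layout:

    # Minimal sketch: fill an HTML template from a semantic HTML source.
    from pathlib import Path
    from bs4 import BeautifulSoup

    template = BeautifulSoup(Path("template.html").read_text(), "html.parser")
    source = BeautifulSoup(Path("post.html").read_text(), "html.parser")

    # Pull the relevant parts out of the semantic source document...
    title = source.find("h1")
    article = source.find("article")

    # ...and swap them into the matching slots of the template's DOM.
    template.find("title").string = title.get_text()
    template.find("main").replace_with(article)

    Path("out").mkdir(exist_ok=True)
    Path("out/post.html").write_text(str(template))

An index page is then just the same loop run over every source document, collecting the pulled titles and metadata into one listing template.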


Why would you want to mix Windows and Linux processes into one workflow on the same host?

I’m sure you’ve encountered a very niche problem that requires it, but I cannot think of any scenario where that kind of behaviour would be desirable vs splitting those workflows up.

Are you not able to have separate Windows and Linux hosts (e.g. VMs or containers) that are orchestrated to run in parallel as part of the same pipeline, but don’t rely on the same processes running on the same host?

Or at the very worst, use a TCP/IP based RPC to share state between the different hosts?


We're using OBS for building stuff (and heavily abuse it for the Linux side already) - a few things (like reverse dependency builds) make it more useful than most of the other stuff out there on complex projects. So when the requirement for some Windows builds came along (which are tiny compared to all the other stuff we're doing), we just ended up using WSL to have Windows workers in OBS. It also has some other advantages with how our cmake builds work (short version: developers can do their own bit in Visual Studio, and then a few more checks run on CI, where we can reuse the usual stuff without caring about Windows).

What’s OBS? I’ve only seen it in reference to the video streaming (et al) tool. Guessing that’s not what you’re talking about here though?

The openSUSE Build Service. It's pretty good at figuring out if something needs rebuilding, so in some cases violating it and making it do stuff it's not supposed to is a sensible choice.

That sounds more like Desktop Support than a SysAdmin role. My condolences if that's the job you landed when interviewing for a SysAdmin role.

I get that the GP’s suggestion is unconventional, but I don’t see why it would cause caching issues.

If you’re sending over TLS (and there’s little reason why you shouldn’t these days) then you can limit these caching issues to the user agent and the infra you host.

Caching is also generally managed via HTTP headers, and you also have control over them.

Processing might be a bigger issue, but again, it’s only the hosting infrastructure that you need to be concerned about, and you have ownership over that.

I’d imagine using this hack would make debugging harder. Likewise for using any off-the-shelf frameworks that expect things to conform to a Swagger/OpenAPI definition.

Supplementing query strings with HTTP headers might be a more reliable interim hack. But there’s definitely not a perfect solution here.
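
To make that concrete, here's a rough sketch of both options using Python's requests library. The URL, payload, and header name are invented for illustration, and (per RFC 9110) any intermediary is free to drop or reject the GET-with-body variant:

    # Sketch only: a GET with a body is out of spec, but some servers demand it.
    import json
    import requests

    payload = {"query": "status:open", "limit": 50}

    # Variant 1: the out-of-spec GET-with-body some APIs require.
    r1 = requests.get("https://api.example.com/search", json=payload)

    # Variant 2: the interim hack above; carry the payload in a custom
    # header instead (the header name here is hypothetical).
    r2 = requests.get(
        "https://api.example.com/search",
        headers={"X-Search-Payload": json.dumps(payload)},
    )

Either way, the cache key problem stays on infra you control, since neither a request body nor a custom header normally participates in a proxy's default cache key.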


To be clear, it's less of a "suggestion" and more of a report of something I've come across in the wild.

And as much as it may disregard the RFC, that's not a convincing argument for the customer who is looking to interact with a specific server that requires it.


Caches in web middleware like Apache or nginx by default ignore the GET request body, which may lead to bugs and security vulnerabilities.

But as I said, you control that infra.

I don’t think it’s unreasonable to expect your sysadmins, devops, platform engineers, or whatever title you choose to give them, to set up these services correctly, given it’s their job to do so and there’s a plethora of security risks involved.

If you can’t trust them to do that little, then you’re fucked regardless of whether you decide to send payloads as GET bodies.

And there isn’t any good reason not to contract pen testers to check over everything afterwards.


> I don’t think it’s unreasonable to expect your sysadmins, devops, platform engineers, or whatever title you choose to give them, to set up these services correctly, given it’s their job to do so and there’s a plethora of security risks involved.

Exactly, and the correct way to set up GET requests is to ignore their bodies for caching purposes because they aren't expected to exist: "content received in a GET request has no generally defined semantics, cannot alter the meaning or target of the request, and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack" (RFC 9110)

> And there isn’t any good reason not to contract pen testers to check over everything afterwards.

I am pretty sure our SecOps and Infra Ops and code standards committee will check it and declare that GET bodies is a hard no.


> Exactly, and the correct way to set up GET requests is to ignore their bodies for caching purposes because they aren't expected to exist

No. The correct way to set up this infra is the way that works for a particular problem while still being secure.

If you’re so inflexible as an engineer that you cannot set up caching correctly for a specific edge case because it breaks your preferred assumptions, then you’re not a very good engineer.

> and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack"

Once again, you have control over the implementations you use in your infra.

Also, it’s not a request smuggling attack if the request is supposed to contain a payload in the body.

> I am pretty sure our SecOps and Infra Ops and code standards committee will check it and declare that GET bodies is a hard no.

I wouldn’t be so sure. I’ve worked with a plethora of different infosec folk, from those who will mandate that PostgreSQL use non-standard ports because of strict compliance with NIST, even for low-risk reports, to others who have been fine with some pretty massive deviations from traditionally recommended best practices.

The good infosec guys, and good platform engineers too, don’t look at things in black and white like you are. They build up a risk assessment and judge each deviation on its own merit. Thus GET body payloads might make sense in some specific scenarios.

This doesn’t mean that everyone should do it, nor that it’s a good idea outside of those niche circumstances. But it does mean that you shouldn’t hold on to these rigid rules like gospel truths. Sometimes the most pragmatic solution is the unconventional one.

That all said, I can’t think of any specific circumstance where you’d want to do this kind of hack. But that doesn’t mean that reasonable circumstances would never exist.


I work as an SRE and would fight tooth and nail against this. Not because I can’t do it, but because it’s a terrible idea.

For one, you’re wrong about TLS meaning only your infra and the client matter. Some big corps install a root CA on everyone’s laptop and MITM all HTTP/S traffic. The one I saw was Bluecoat, no idea if it follows your expected out-of-spec behavior or not.

For two, this is likely to become incredibly limiting at some point and require a bunch of work to re-validate or re-implement. If you move to AWS, do ELBs support this? If security wants you to use Envoy for a service mesh, is it going to support this? I don’t pick all the tools we use, so there’s a good chance corporate mandates something incompatible with this.

You would need very good answers to why this is the only solution and is a mandatory feature. Why can’t we cache server side, or implement our own caching in the front end for POST requests? I can’t think of any situations where I would rather maintain what is practically a very similar fork of HTTP than implement my own caching.


> Not because I can’t do it, but because it’s a terrible idea.

To be clear, I'm not advocating it as a solution either. I'm just saying all the arguments being made for why this wouldn't work are solvable. Just like you've said there that it's doable.

> Some big corps install a root CA on everyone’s laptop and MITM all HTTP/S traffic.

I did actually consider this problem too but I've not seen this practice in a number of years now. Though that might be more luck on my part than a change in industry trends.

> this is likely to become incredibly limiting at some point and require a bunch of work to re-validate or re-implement.

I would imagine if you were forced into a position where you'd need to do this, you'd be able to address those underlying limitations when you come to the stage that you're re-implementing parts of the wider application.

> If you move to AWS, do ELBs support this?

Yes they do. I've actually had to solve similar problems quite a few times in AWS over the years when working on broadcast systems, and later, medical systems: UDP protocols, non-standard HTTP traffic, client certificates, etc. Usually, the answer is an NLB rather than ALB.

> You would need very good answers to why this is the only solution and is a mandatory feature.

Indeed


There is no secure notion of "correctly" that goes directly against both specs and de facto standards. I am struggling to even imagine how one could make an L7 balancer that takes into account the possibility that someone goes against the HTTP spec and still gets their request served securely and timely. I personally don't even know which L7 balancer my company uses and how it would cache GET requests with bodies, because I don't have to waste time on such things.

Running PostgreSQL on any non-privileged port is a feature, not a deviation from a standard. If you wanted to run PostgreSQL on a port under 1024, now that would be a security vulnerability, as it requires root access.

There is no reason to "build up a risk assessment and judge each deviation on its own merit" unless it is an unavoidable technical limitation. Just don't make your life harder for no reason.


> There is no secure notion of "correctly" that goes directly against both specs and de facto standards.

That's clearly not true. You're now just exaggerating to make a point.

> I am struggling to even imagine how one could make an L7 balancer that takes into account the possibility that someone goes against the HTTP spec and still gets their request served securely and timely

You don't need application-layer support to load balance HTTP traffic securely and timely.

> Running PostgreSQL on any non-privileged port is a feature, not a deviation from a standard. If you wanted to run PostgreSQL on a port under 1024, now that would be a security vulnerability, as it requires root access.

I didn't say PostgreSQL listening port was a standard. I was using that as an example to show the range of different risk appetites I've worked to.

Though I would argue that PostgreSQL has a de facto standard port number. And thus, by your own reasoning, running it on a different port would be "insecure" - which is clearly BS (as in this rebuttal). Hence why I called your "de facto" comment an exaggeration.

> There is no reason to "build up a risk assessment and judge each deviation on its own merit" unless it is an unavoidable technical limitation.

...but we are talking about this as a theoretical, unavoidable technical limitation. At no point was anyone suggesting this should be the normal way to send GET requests.

Hence why I said you're looking at this far too black and white.

My point was just that it is technically possible when you said it wasn't. But "technically possible" != "sensible idea under normal conditions"


I learned Scala due to load testing with Gatling.

I’ve always hated Java but Scala was super fun.


How far back are we talking? 90s? 80s? 70s?

For the 70s, I would agree with you. But the moment home users, and particularly kids, gained access to the internet, you started to see a subculture of trolling.

Source: I was one of those 80s kids. It’s not something I’m proud of, but writing bots to troll message boards and scrapers for porn and warez played just as significant a role in my journey into my IT profession as writing games on 8-bit micros.


Depends on the size of the community and where.

Early 2000s, a public channel on a LAN with ~3k people in a post-Soviet country: say something stupid to the wrong person and you'll find yourself with a broken nose, because the guy/gal is a friend of the admin.


Obviously macro communities exist that differ in etiquette.

I was just responding to the generalisation made by the GP.


Well, yes. We definitely fucked with the systems, and to a lesser extent the people. But 80s internet didn’t have shit like swatting. Or what Mr. Beast makes people go through for entertainment.

And everyone was in on it. We were all trolling, and being trolled, and perfectly well aware of what trolling was. But now people deliberately target and exploit the vulnerable on the internet.

I feel like the only thing you needed before was a fairly thick skin, but now you need a lawyer and a smorgasbord of security.


Mr Beast isn’t the internet. He’s a TV show host. And there have been exploitative TV shows for decades. This isn’t a format Mr Beast invented.

As for security, that was always an issue. Malware, denial of service attacks, etc aren’t a recent phenomenon. And hacking was so prevalent that even Hollywood caught wind, hence the slew of hacker movies in the 80s and 90s (Wargames, TRON, Hackers, Antitrust, Swordfish, Lawnmower Man, and so on and so forth).

The problem isn’t that internet etiquette has gotten worse. The problem is that there is so much more online these days that the attack surface has grown by several orders of magnitude. Like how there are more road accidents now than there were in the 70s, despite driving tests progressively getting tougher (in most countries). People aren’t worse drivers; there are just more roads, and they're busier with more vehicles.


Remote work isn’t for everyone. Their point of view is just as valid as your point of view.

And this is my biggest complaint about arguments over remote working. People turn it into something that’s evidence-based, when actually it’s a deeply subjective topic, and thus different personality types thrive in different working environments.


People have painted themselves into a corner re: remote work and get wacky in discussions about it. Lots of emo, not much fact.

My point is you cannot have any fact-based discussion because statistics are generalised, whereas people’s ability to thrive in office-based or remote work depends deeply on both the company culture and the individual.

Thus it is always going to be an emotional argument rather than a fact-based discussion.

It’s like asking someone what their favourite food is. They might be able to explain why they like the dish, but that doesn’t translate to that meal being everyone’s favourite.


It’s a little more than that, because people generally won’t go to war over pizza vs tacos.

They probably would if they spent 40+ hours a week eating it and their bills, home, and family’s wellbeing all depended on how well they ate pizza vs tacos.

But comparisons aside, some people will just argue over anything. Such as the meta debate we’re having right now over the stuff people will argue about. ;)


Touché!

> Lots of emo, not much fact.

People not wanting others to indirectly force them to relocate, work additional unpaid commute hours and pay for it is not "emo". It's a critical fight.


And people not wanting others to indirectly force them to subsidize their employers by devoting unpaid portions of their limited living space, utility bills and personal equipment to their work is also a "critical fight". Remote work inherently blurs the line between "company property" and "personal property" in ways that can impose heavy burdens on employees.

Confidentiality and privacy requirements might require employees to allow spyware-laden devices on their personal home networks. It might require them to create secure, isolated parts of their house that lock out other family members. It might require them to allow surveillance devices into their homes.

Even if your employer buys you a secure safe or locking cabinet to keep confidential materials in, you still have to devote some of your limited floor space to having that item in your space, and you become liable for ensuring that item is secured in ways that you don't have to worry about when confidential materials are stored at a central office. Everything in life is a trade-off and remote work is no different in that respect.

That sounds like a "them" problem. The cost of gas and the time I don't have because of commuting are material. I used to lose 2-3 hours of every. Single. Day. To commuting. All because the places where I could find jobs were either too expensive for me to afford to live in or, get this, because I didn't want to uproot my family every single time I got a new job.

The cost of my utilities? Listen, I don't know how much electricity costs where you are, but the cost of running an extra computer is pennies a day. The cost of internet is fixed for me. We might talk about the increased cost of heating and cooling, but I was never one of those people who turned their system off when gone, because that doesn't make literally any sense with my utility's time-based pricing. It's literally cheaper to let it run as it is than to do that.

As for space and confidential items, I'm not sure what to say. I don't have thieves coming in and out of my house, and I have a password good enough to defend against the casual nosy child or relative. I have an office now because I have a house, but I have worked remotely in smaller spaces and it was never any problem. At least not compared to a 1-hour commute, although I have commuted up to 2 hours on bad traffic days, which were not particularly rare occurrences. And this is just how it is in all the cities I have lived in. Perhaps not all cities, but certainly the two metropolitan areas I have lived within and in the suburbs of. Living within the city didn't even guarantee me a reasonable commute.

If the trade-off is a company getting a corner or a room I wasn't using anyway, plus a few dollars of electricity subsidy, and I get back several hundred dollars of my time (measured by my pay rate) by not commuting, plus a couple of dollars not spent on gas, I am happy to make that trade. I'm also capable of putting my computer away safely, like literally anything else I own.

I also don't worry about the isolation that people mention (although not you here) because I have a vibrant social life. As someone who was never the typical demographic of the field, I have never depended on socializing with my coworkers in the office for social fulfillment. I still somehow maintain the correct level of social camaraderie via digital means. Remote work doesn't mean not interacting with your coworkers at all.


Why are the problems of people who don't own large houses with spare rooms they can afford to dedicate to their company for free, or living arrangements otherwise conducive to working remotely a "them" problem, but your failure to live within walking distance of your jobs not a "you" problem? Perhaps the truth here is that your experience isn't a universal experience, and just because remote working works out exceptionally well for you doesn't mean that applies universally. Maybe people who want to work in offices legitimately find that to be a better way to work.
