From this site, we have no indication that any actual person is putting any kind of good-faith effort towards trying any actual thing.
All we know is that a bunch of marketers are trying to find out: Given a certain amount of marketing effort, how many people can we sign up to a mailing list, and how many people can we get to pay a $99 reservation fee, if the product proposition is: You'll get a slick-looking "Developer-Terminal" (specifics to be determined) for $1,999. -- Based on that, they will decide whether they will lift a finger to figure out what the product should actually be, let alone put any resources towards developing it.
That's where the negativity comes from. They are eroding people's good will. Good will that is sorely needed when actual companies make actual products and need actual consumers to pay attention to actual product launches.
I think the lesson learned from › Python v. R ‹ is that people prefer doing data science in a general-purpose language that is also okay-ish for data science over a language that's purpose-built for data science but suffers from diseconomies. Specifically: Imagine a new database or something like that has just come out. Now, the audience that wants to wire it into applications and the audience that wants to tap it to extract data for analytics put their weight together to create the demand for the Python library. The economics of that work out better than if you had to create two different libraries in two different languages to satisfy those two groups of demand.
» I think that anyone who is sufficiently technically well-versed is going to avoid that hellscape like the plague. So then, who is the actual audience for this stuff? My guess would be: the old folks' home around the corner, which, sooner or later, will be forced to upgrade those TVs to smart TVs. And once those old folks put in their credit card numbers or log in with their Amazon accounts, there goes a lot of people's inheritance.
My own elderly father is wise to the scam, but not confident in his ability to navigate the dark patterns. So now, he is afraid to input his credit card information into anything digital, essentially excluding him from cultural participation in the digital age. « [1]
With that frame (the target audience for smart TVs is old people), "needing glasses" is not all that far-fetched.
I was going to disagree by saying that the menus are extremely confusing to the elderly. However, if the goal is to extract money from them by generating confusion about what is an on-demand vs. a streaming piece of media, you could not design a better software system. Reminds me of the theory that micro-transaction revenue in video games has driven menu UI in the direction of confusing and disorienting the player.
I was very much the kind of student who didn't perform well under exam-taking pressure. For marked work that I did outside of school, it was straight-As for me. For written exams performed under time pressure and oral examinations (administered without advance notice), it was very much hit-and-miss.
If my son should grow up to run into the same kinds of cognitive limitations, I really don't know what I will tell him and do about it. I just wish there was a university in a Faraday cage somewhere where I could send him, so that he can have the same opportunities I had.
Fun fact on the side: Cambridge (UK) getting a railway station was a hugely controversial event at the time. The corrupting influence of London being only a short journey away was a major put-off.
As a parent of a kid like this: start early with low stakes. You can increase your child's tolerance for pressure. I sometimes hear my kid say, after a deep breath, "There's nothing to it but to do it." Then work on focusing on what they did better, rather than on how they did in absolute terms.
I see collapsing under pressure as either a kind of anxiety or a fixation on perfect outcomes. Teaching a tolerance for some kinds of failure is the fix for both.
I've had various issues installing Void, so I succumbed to Manjaro, which works surprisingly well. I have noticed in general that many non-systemd distributions work less well over time, for some reason. Slackware is the best example - one release per decade means it is factually dead, but even the more modern variants simply no longer work as they once did. At the same time, there is less and less up-to-date documentation to be found. It seems the non-systemd distributions have declined not only in absolute numbers but also in manpower and time investment. MX Linux also lags behind in updating numerous programs: https://distrowatch.com/table.php?distribution=mx - it has gotten a bit better after Libretto, but it still lags behind compared to e.g. Fedora.
Where Void really stands out, in my opinion, is "hackability": it provides an inviting on-ramp for making certain kinds of customizations whose complexity would be intimidating in other distros. -- I don't use it as a daily-driver desktop.
For example, I have a chroot'able tarball providing all dependencies for all software I write that runs on a server. I build that tarball myself from source in an airgapped environment. (I had been doing something like this, minus the airgapping, with Gentoo from about 2012 until 2024.) I looked for a replacement for Gentoo in 2024 and landed on Void. Most of the time when I do a build, I just pull the latest commit from the repo, and it "just works", even though Void doesn't even advertise itself as a source-based distro. Sometimes it breaks because of the distro itself (just like Gentoo used to). But with Void it has always been much easier to diagnose and fix issues, and the project is also quite welcoming about merging such fixes. With Gentoo, it had always been quite painful to deal with the layers of complexity pertaining to Gentoo itself, rather than to whichever package had decided to break. Void, on the other hand, has managed to avoid such extra complexity.
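(For the curious, here's a minimal sketch of what that kind of build can look like, wrapped in Python purely for illustration. The package list, paths, and output name are hypothetical, and the real setup is airgapped and involves far more packages; the xbps-src and xbps-install invocations are the standard ones documented in the void-packages README.)

```python
#!/usr/bin/env python3
"""Sketch: build a chroot'able dependency tarball from a void-packages clone.

Everything below is illustrative; adjust paths and package names to taste.
"""
import subprocess
import tarfile
from pathlib import Path

VOID_PACKAGES = Path.home() / "void-packages"  # local clone of the repo (hypothetical path)
ROOTFS = Path("/tmp/deps-rootfs")              # directory that will become the chroot
PACKAGES = ["zlib", "openssl", "sqlite"]       # hypothetical dependency list

def run(cmd, cwd=None):
    """Run a command, echoing it first, and fail loudly on errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

# 1. Prepare the masterdir that xbps-src uses for clean builds.
run(["./xbps-src", "binary-bootstrap"], cwd=VOID_PACKAGES)

# 2. Build each dependency from source; results land in hostdir/binpkgs.
for pkg in PACKAGES:
    run(["./xbps-src", "pkg", pkg], cwd=VOID_PACKAGES)

# 3. Install the locally built packages into a fresh root directory,
#    using only the local repository (no network access needed).
ROOTFS.mkdir(parents=True, exist_ok=True)
run([
    "xbps-install", "-y",
    "--repository", str(VOID_PACKAGES / "hostdir/binpkgs"),
    "-r", str(ROOTFS),
    *PACKAGES,
])

# 4. Pack the result into a tarball that can be unpacked and chroot'ed into.
with tarfile.open("deps-rootfs.tar.xz", "w:xz") as tar:
    tar.add(str(ROOTFS), arcname=".")
print("Wrote deps-rootfs.tar.xz")
```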
Lately, I've started to play around with Void's tool for creating the live installer ISO. It's quite malleable and can easily be repurposed for building minimalist live/stateless environments for pretty much any purpose. I'm using that to create VM guests to isolate some contexts for security purposes, like a "poor man's Qubes OS" kind of thing.
There's an old presentation from the late '90s or early 2000s where Linus talks a bit about the origins of Linux, how much he hated working with OpenVMS at university, and how much of a breath of fresh air Unix was. The reason he liked Unix so much was that "you could understand the whole system. Maybe you don't know exactly how the startup system works right now, but if you need to know then you just go in there and figure it out".
This fairly accurately describes why I generally prefer systems like Void (including for my daily desktop). Alpine has a similar experience (although I hate OpenRC), as do the BSD systems (mostly; there's some ridiculous historical complexity here).
Artix is great IMO. You can choose your init system. It's not for your grandmother, perhaps. It's a rolling distro like Manjaro, and I think it generally benefits from the Arch ecosystem. I've only really noticed it getting better over the 3-4 years I've been using it. The change to PipeWire for sound was unpleasant, and the one other major problem I had was Chromium breaking Signal for a time. Everything else has been happiness.
As a Slackware user, I stick with the stable release because it is extremely stable, and my main system (a ThinkPad W541) is as fast as most modern systems, with 16G of memory and 8 CPUs. If I ever got a brand-new system (doubtful), I would use current until the next release.
Also, I like the fact that I do not have to install patches every other day. Plus, PV keeps the applications I use most up to date in the stable release.
That is the good thing about Slackware: you have current for the adventurous and the stable release for people like me. And you administer both in exactly the same way.
FWIW, I use regular window managers as opposed to desktop environments, and my main programs are Emacs (latest version), vim (close to the latest), Firefox (latest), mutt, tin, irssi, and some games that come with KDE.
Non-useful distros become unmaintained; that's how things are, regardless of service supervision. If you want a maintained non-systemd distro, try Alpine Linux.
Void is pretty well maintained as of late, and hasn't given me any headaches for the last several years. That's a lot for a bleeding-edge rolling-release distro. Yes, it's my daily driver.
(Alpine is great, but I did not try it as a daily driver, it's sort of not intended for that, it seems.)
> (Alpine is great, but I did not try it as a daily driver, it's sort of not intended for that, it seems.)
I can't say what it's "intended" for, but I run stock Alpine on a desktop and a laptop, and postmarketOS (an Alpine derivative) on another laptop, and I assure you it works great as a daily driver.
There's no need to welcome me to war. I'm not in one, despite the fact that the powers that be are hellbent on getting me (and everyone else) into one.
Oh, yes. That's exactly how it works. No one is going to ask you, "Do you want to get into a war?" Ukraine didn't want to get into a war. Turns out it wasn't their call to make!
The least you can do is be prepared. If a hostile country believes "oh, they can't handle a war, it's going to be so easy", the risk of that country trying shit goes up. And if you really can't handle one, the war will be more devastating than if you can.
The truth is that countries can't want anything, be hostile, or have any other personal trait, simply because countries are not animate entities but rather a huge number of very different people with different goals. Most of them would just love to mind their own business and don't want to kill or die in the name of some scum calling itself a government.
Speaking of war as something inevitable, something unconditionally built into human nature, and telling people who want peace to prepare, as in Orwell's "war is peace", simply reinforces the narratives of said scum and spins this morbid wheel up toward total destruction.
Profit as a measure of success seems, at this point, like a meme from yesteryear.
The left is perfectly happy to burn money to save the environment, feed the poor, educate the stupid, whatever. The right has joined the money-burning party, pushing for deglobalization, propping up dying companies and industries, re-establishing industries that haven't been viable in a long time. Who still cares about profits? Surely investors do? No, not really, certainly not in tech. Investors will happily throw money at tech companies who never make a dime, as long as they are vaguely saying the right kinds of things, being attached to the right kinds of hypes, and fitting in with the right kinds of herd dynamics.
Generating or destroying money is not going to help or hurt anyone's career these days. Being on the currently career-promoting side of the "woke" issue? Now there is something to take seriously.
I've semi-recently gone down the TV platforms rabbit hole again, and my overall impression is that they're all horrible.
I ended up grabbing a 6-year-old mini PC I had lying around in the basement and a >10-year-old TV that my father-in-law was going to throw away, as well as a Logitech y-10 air mouse [1] that I was lucky enough to have bought way back in the day.
I put desktop Linux on the PC with KDE Plasma (avoiding Kodi, which, somehow, consistently attracts me but then annoys and frustrates me whenever I actually use it) and Brave.
I cranked up the scaling factor in KDE and made a tiny tweak so KDE doesn't ask for the superuser password or for passwords on wallet access.
The browser is the only app I ever use on that thing, although it also has a DVD drive and VLC, and I copied my film collection onto its local disk.
I logged into all the media platforms I pay for (and the free ones I frequently use), made an HTML file that links to all of them in a huge font size, and set it as the home page in the browser.
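(For illustration, a throwaway script along these lines can generate such a launcher page; the service names and URLs below are placeholders, not the ones actually used.)

```python
#!/usr/bin/env python3
"""Generate a big-font launcher page to use as the browser's home page.

The services listed here are hypothetical examples; swap in your own.
"""
from pathlib import Path

LINKS = {
    "Netflix": "https://www.netflix.com/",
    "YouTube": "https://www.youtube.com/",
    "ARTE": "https://www.arte.tv/",
}

# One oversized link per line, dark background, no clutter.
rows = "\n".join(
    f'  <p><a href="{url}">{name}</a></p>' for name, url in LINKS.items()
)

page = (
    "<!DOCTYPE html>\n"
    "<html><head><meta charset='utf-8'><title>TV</title>\n"
    "<style>\n"
    "  body { background: #111; font-size: 4em; font-family: sans-serif; }\n"
    "  a { color: #eee; text-decoration: none; }\n"
    "</style></head>\n"
    "<body>\n"
    f"{rows}\n"
    "</body></html>\n"
)

Path("tv-home.html").write_text(page)
print("Wrote tv-home.html -- point the browser's home page at it.")
```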
It cost me $0 (considering all the recycling), and it's a better experience than anything money can buy.
I actually like TVs as a hardware concept, and am a happy paying customer of several VOD platforms, so I would seem to be the perfect customer for all these sticks and mini boxes and smart TV thingamajigs. But the UX is just so horrible. Everything about them screams, “We hate our customers”.
Last time I tried, I found that the VOD platforms I care about have their best implementations in their desktop/web versions. Android apps were not always available, and to the extent that they were, half of them were on Amazon/Fire TV and half on Android TV/Google Play. I remember that in one case (Masterclass), they used the Android app to upsell me on their "Premium" subscription (or maybe it was the download feature of the Android app).
So I would have had to pay more, switch between multiple HDMI sources to reach the platform with the app I wanted to use, and would still have had to use my desktop PC for some of the content I was paying for.
And then, I could never get the apps I actually cared about to occupy most of the screen real estate (or at least be placed suitably prominently). Most of the real estate was dedicated to dark patterns trying to get me to pay for stuff I didn't want to pay for, even though I was already a happy paying customer for more than enough stuff, and there wasn't a “give it a rest, already” setting anywhere to be found.
I think that anyone who is sufficiently technically well-versed is going to avoid that hellscape like the plague. So then, who is the actual audience for this stuff? My guess would be: the old folks' home around the corner, which, sooner or later, will be forced to upgrade those TVs to smart TVs. And once those old folks put in their credit card numbers or log in with their Amazon accounts, there goes a lot of people's inheritance.
My own elderly father is wise to the scam, but not confident in his ability to navigate the dark patterns. So now, he is afraid to input his credit card information into anything digital, essentially excluding him from cultural participation in the digital age.
It's just such a sad and sorry state of affairs. How did we get here?
> I actually like TVs as a hardware concept, and am a happy paying customer of several VOD platforms, so I would seem to be the perfect customer for all these sticks and mini boxes and smart TV thingamajigs. But the UX is just so horrible. Everything about them screams, “We hate our customers”.
These things just spam analytics and ad requests 24/7 too. The only one that's tolerable (and quite good) is Apple TV.
That seems pretty opinionated, and by building a monoculture you likely won't draw persons with high openness to experience to your workplace, and you're also leaving on the table the potential that comes from diversity (a loaded term these days, but substantively still a valid point).
Depending on the kind of work you do and your customers, this may not matter to you, but in a lot of industries, you need the diversity to be able to properly represent and empathise with your customer base, who might be from a very different social cohort than your developers. And Linux desktops, which your customers almost certainly won't be using, may also make that difficult.
People who spend a ton of time ricing their Linux desktops may be bad at setting priorities. If you expect them to continue their ricing, but not do it "on the clock", you're implicitly age-discriminating and discriminating against people with families and/or hobbies and/or "a life".
Also keep in mind that your company is likely only one of a dozen or so workplaces that these people apply to in a given month, sometimes for many months before they land a job, and they probably haven't set up their computer specifically to impress you, but rather to fit the lowest common denominator among the requirements they face from all their application processes and educational activities, and some of that will require Windows.
Some points in the parent comment are absolutely valid IMO. Would you hire a carpenter who walks back and forth to his toolbox to pick up a single nail at a time and then uses the hammer with both hands near the head? And who, when cutting a 4x4 beam, uses 1-inch strokes with the handsaw (again with two hands)?
Using SSH to get the project files is not a good example of a hard-to-learn skill they need for the job; they should just have provided a zip on a web page or sent it directly to the candidate.
So to me it seems most of the test was just "have you done these trivial things before" rather than a test of whether they can program web apps.
It would be like forcing the carpenter to go buy nails and docking them points for not knowing where the closest shop that sells nails is and for taking time to look it up. Of course it is better if someone can look it up quicker, but it's not a core part of the job. Then, for the drive there, you gave them a manual car, so the ones who were used to an automatic fumbled around in the car; also a bad look! So now you see that the carpenters who drove manual were much better, just as your biases told you! That is not really the skill you should care about; it takes no time to just tell them where the shop is.
I don't think your analogy is apt. Having your development environment set up and knowing how to use your tools reasonably efficiently seems like it should be a low bar to pass. We're not talking about giving someone unfamiliar tools or an unfamiliar vehicle and expecting them to perform higher level tasks. We're just watching people use their existing tools to see if they're actually familiar and comfortable with them.
OP's post was about "having custom themes" and "bias against people using Windows".
These aren't tests of whether a user can do their job or knows their tools. This is a cultural purity test to see whether they have the same quirks as OP. And it is a terrible way to judge whether someone will perform.
I had my own reply, but using your analogy if I may: if I asked an apprentice carpenter to measure up and build some sort of structure in front of me with the tools provided, and they stumbled and made some awkward choices but the result was otherwise sound and they had other good qualities, yeah I'd consider hiring them. I think the scenario you describe though would be more equivalent to someone who flat out doesn't know how to even use a computer, which is a different case (I wouldn't hire that person).
I was thinking about basic tool use. I expect a junior engineer candidate to wield a computer way better than my mom. They are applying for a junior engineer position, not an apprenticeship starting with close to zero training.
I have no problem with candidates who stumble and make awkward choices during the interview, as juniors by definition lack experience and the interview process is a stressful situation.
I think we need to stop seeing "can program" as being equivalent to "is an expert computer user", because those are genuinely two different, if related and overlapping, skillsets.
There is no particular reason that an expert C++ programmer also needs to know every keyboard shortcut or be an expert at the Linux command line, if those things are not actually relevant to the job you're trying to hire them for. Just because it's been common among millennial and older programmers (like most of us) to combine the two doesn't mean we should discriminate against programmers who don't fit that mold, as long as they're actually good programmers.
You're presenting a false dichotomy, though. No one in this subthread is expecting an expert computer user. We're just expecting them to have their development environment already set up, and to be familiar and comfortable with their tools.
If they come in with a Linux laptop but aren't comfortable with the command line, that's weird. If they come in with a Mac or Windows laptop and do solid work only with GUI tools, that's fine. If the job requires being able to use specific tools (command-line or GUI or whatever), then the interview should evaluate that as well.
Well, first of all, most likely dozens of them, at least, are good programmers.
In fact, most likely dozens of them will be perfectly good hires for the position.
The idea that you must hire only the single best possible candidate can lead to some pretty dehumanizing treatment of applicants, when the truth is that a) there almost certainly is no "single best possible candidate", there are many people who would do a roughly equally good job there, and b) your processes are almost certainly not optimized to actually find the true single best candidate for the job, but rather the person who is best at applying and interviewing for jobs among the candidates.
All that said, for "how do you actually design a better process"...I sure as hell don't know. I'm a programmer, not an HR person or hiring manager; that's outside my skillset. But that doesn't mean I can't accurately identify glaring flaws in the current system based on my understanding of human nature.
> I'm a programmer, not an HR person or hiring manager; that's outside my skillset. But that doesn't mean I can't accurately identify glaring flaws in the current system based on my understanding of human nature.
No, it pretty much does mean that.
Until you can come up with concrete improvements and understand the potential flaws in those proposals as well, you can't usefully critique the existing system.
Better example: you press the ok button, and sometimes, only sometimes, it triggers twice.
You tell your lead, they say "I know." You ask why they haven't fixed it, and they lead you down a deep rabbit hole of fundamental, unsolved issues with event bubbling, and show you the 20 different workarounds they've tried over the years. "In the end," they say, "nobody's figured out how to not get it to sometimes fire twice."
Thus hiring. Sure, it doesn't look right to you, but come join us in hiring and you'll see: a better way has yet to be found. At least when I run interviews it's an actual real problem rather than a leetcode thing - I always just grab something reasonably difficult I recently did for the company and convert it into an interview problem.
Your guess that ~24/200 will be "good enough" is unfortunately wrong in my experience. In my last go, only 10/200 were able to demonstrate sufficient knowledge of the required skills to be hireable, and by that I mean fulfill the needs of the role in a way that justifies their salary, rather than be so inexperienced as to be a drain on resources rather than net gain. Of that 10, 2 were the best. Criticizing not wanting to work with the best doesn't make sense to me. Lemme put my capitalism hat on here: we have competitors, and we need to code faster than them to get clients before they do. If we don't, we lose revenue and don't get another funding round, and the company dies. Also, all hiring is reported back to the investors, who expect us to hire good people. Also, we give equity - why wouldn't I want the best possible people on my team so that I have the highest chance of my equity paying out big?
Capitalism hat off, yup, this system is dehumanizing and not configured to deliver the greatest societal good. Alienated labor detached from the true value of the goods produced, absolutely. What can I do about that at my job? On the side I run a co-op that operates under literally the opposite principles: anyone can join, we will train you to get the skills you need to get better jobs, and no margin of the rate is siphoned away for a capitalist class.
The problem with your last paragraph, of course, is that there is no "system", no generalizable concept of "societal good", no such thing as "true value" independent of the subjective evaluations of an object by disparate parties, and no "capitalist class" that actually exists as such.
Everything is down to particular patterns of interaction among particular people, all acting on their own a priori motivations, with none of the reified abstractions you're referencing actually existing as causal agents.
I applaud your efforts with the co-op you're describing, and if you're able to make it work, scale up, and sustain itself in the long run, more power to you. But it's a bit strange to imply that in the more common scenario, it's somehow untoward for the people paying the upfront costs of your endeavors -- and indemnifying your risk exposure -- to expect a share of the proceeds in return.
> Your guess that ~24/200 will be "good enough" is unfortunately wrong in my experience. In my last go, only 10/200 were able to demonstrate sufficient knowledge of the required skills to be hireable, and by that I mean fulfill the needs of the role in a way that justifies their salary, rather than be so inexperienced as to be a drain on resources rather than net gain.
I mean, this is fundamentally dependent on the specific position being hired for.
I'm interested in this conversation; however, this comment doesn't really mean anything to me. What are you saying? And so, how would you hire? If you just wanted to say "hiring sucks," I agree. Hiring sucks.
This comment is saying "the percentage of any given 200 programmers applying for a job that are, in the end, reasonably fit to do the job depends on the job being applied for".
If the job is a mid-level C++ programmer job at an insurance company, many more of them are likely to be good fits than if it's a senior embedded systems architect job at an aerospace firm.
Yeah, it seems I'd be doomed for doing interviews on my Windows laptop, for the webcam and the compatibility with my Bluetooth headset, rather than on my Linux desktop. Tunnel vision and a lack of empathy are negative signals, so you could say I'd have dodged a bullet, but unfortunately in these situations I'd need the job more than the job would need me.
From my perspective, I think it shows poor judgment that you've chosen hardware where you can't get your webcam and bluetooth headset to work under Linux. Or you haven't bought a wired headset and a USB webcam that you've researched and know works properly under Linux. It sounds like the alternative you've chosen is to take interviews using an OS and development environment you're not comfortable in, which seems like a foolish choice to make.
These days it's not difficult to buy hardware with Linux in mind, even on a budget. (And if you have two computers, I feel like budget is probably not really a problem for you.)
Even if you bought with Linux in mind, there's no guarantee that what works today works tomorrow. I have a Linux box that had working HDMI audio. A recent set of updates to the audio subsystems now means that when the box boots, the HDMI devices are muted by default. It seems like something changed in the order that things are initialized, and the HDMI devices aren't present when the audio system is initialized, so none of your saved settings are re-applied. Every boot now requires manually fiddling with the audio settings to reset where audio should be going and unmute the devices. If my choice when going into an interview is the box that you need to buy specific hardware for and hope that no one re-configured the audio subsystems in the last update, or the box that runs an OS by a company known to add explicit code paths to recreate prior-version bugs that other software relies on... I might choose the latter too, even if it is Windows. Of course, I'd also probably choose the latter because there's a non-zero chance the interviewer is going to want to use some conferencing software or website that doesn't work in Linux, regardless of how well matched my hardware is.
I simply don't use a headset or a webcam with my home Linux setup, because it's often been a hassle to deal with, and my work setup is separate. For screen-sharing coding interviews, I have a perfectly functional remote SSH setup from my Windows laptop, which took less time to set up than the last time I fiddled with a bad headset connection on Linux, a long time ago. I also find the "not using all of the screen real estate" accusation strange, because screen-sharing is usually ergonomically limited to one screen, while people may normally work with two screens and not have developed habits for making one screen work.
> sometimes for many months before they land a job, and they probably haven't set up their computer specifically to impress you
I wouldn't expect them to. I would expect them to have their computer set up to program. If it's not set up for programming, then, that's ok, they just won't fit in in an environment of people who really, really enjoy programming, and most likely aren't able to program at the level we expect. This theory worked out - about 10% of candidates were the kind that program regularly, for fun, or at least to build their portfolio, and of that 10% the one we ended up hiring turned out to be phenomenal.
Like I said, the people who got furthest in the interview (solved the most problems) were the ones who had computers set up to program and were comfortable in their environment. Everyone got the same email, everyone knew they'd need to clone a repo and run node, and everyone who got the email had already passed the initial screening so I'd expect them to actually start reading our emails and taking this stage of the interview process seriously, considering it was the final stage (and the only stage involving actual programming).
> you're also leaving on the table the potential that comes from diversity (a loaded term these days, but substantively still a valid point).
Diversity comes in many forms. Someone not great at programming, or not that interested in it, I'm happy to select against. Do you have a reason I shouldn't filter these folks out? We're paying someone to code at the end of the day so I'm pretty confused at all this pushback to my bias towards "interest in computers."
The other diversity markers I don't think were selected against - I have no idea what "high openness to experience" means but we had people with all sorts of different personalities and interests that we interviewed, all sorts of backgrounds, and sure all sorts of different gender expressions, national backgrounds, refugee status, race, so on.
> People who spend a ton of time ricing their Linux desktops may be bad at setting priorities. If you expect them to continue their ricing, but not do it "on the clock", you're implicitly age-discriminating and discriminating against people with families and/or hobbies and/or "a life".
Sure, and every hiring manager that puts people through a coding interview is implicitly engaging in ableism - someone with severe mental disabilities won't be able to pass the interview. Capitalism is ableist. I agree. They also had to have the right to work - something I personally don't give a shit about, but the State does. What am I supposed to do about it?
Anyway, interest in computing and "having a life" or hobbies or a family aren't mutually exclusive. At all the companies I've worked at, I've been surrounded by super nerds with families and other hobbies alongside their interest in computing. I've known a mom who went sailing every weekend and programmed circles around me, a married individual running a Pokémon-selling business and a laser-cutting Etsy store on the side, all while having the healthiest marriage I've ever seen and personally aspire to, folks who brew beer or garden or make cheese, a hella greybeard who runs D&D (and ran a campaign for the office alongside two others he was running)... all of the people I mentioned are far better programmers than me, with far more advanced knowledge of computers than me, and I don't do even close to that much outside of computer stuff.
So, I don't know what to say other than I guess the last few companies where I worked and ran interviews just had really energetic people and wanted to hire more energetic people? That's something to criticize?
You mention IDEs besides VS Code, Linux, and ricing as “green flags”. Those are not proxies for “being a good programmer” and they are not proxies for “being enthusiastic about programming”. They're just selecting for programmers who share your subjective preferences on matters that are the equivalent of “vi vs. emacs”.
The only workplaces that realistically allow people to use Linux desktops are academia and top-5%-sexiness-factor startups. The other 95% of us have to use what our boss tells us to use (and he got told by the insurance company that scammed him into cybersecurity insurance). Those of us who have families don't spend our leisure time staring at yet another desktop computer that isn't our work machine, so how, on earth, would we be using Linux desktops?
Conversely: Imagine someone has spent an 8-hour workday setting up their tiling window manager, so they can “improve their productivity” because now they don't need to spend 2 minutes painting all their windows into the right positions in the morning. That's an investment of time that takes (8*60)/2 = 240 days, so roughly one work-year, to amortize. What does that tell you about the time management skills of that person?
I don't say that to knock tiling window managers: If you're into it, be my guest. It's perfectly fine for reasonable people to reasonably disagree on those kinds of subjective preferences. That's what subjectivity is all about. And that's why it's valuable to hire a diverse range of people who have different viewpoints on these kinds of things.
EDIT: ...and that's what I mean by "diversity": To include both family-people and people without families. Young and old. People with an academic background and people without one. Vi-people and emacs-people. Please don't strawman me by bringing up disabled people and work permits and whatnot.
> To include both family-people and people without families. Young and old. People with an academic background and people without one. Vi-people and emacs-people.
Well, I don't know what to tell you; you've just described my entire team, same as at my previous company, which had a bunch of Linux/Unix dweebs, so the fact that we're all really into computers hasn't prevented this diversity.
All my jobs have let me choose my OS, all my jobs were full of people exactly as you describe, they just all happened to be really into computers. How the family folks found time for it is a question I still ask myself to this day when I compare my knowledge to theirs, but regardless, it wasn't a hindrance.
So I maintain my confusion about selecting for this. It's just not my experience that it prevents me from working with family folks, old folks, bootcamp kids, and PhD ML people.
So, ok, we've come this far: How do you run interviews? What's the alternative to the way I've seen?
> You mention IDEs besides VS Code, Linux, and ricing as “green flags”. Those are not proxies for “being a good programmer” and they are not proxies for “being enthusiastic about programming”. They're just selecting for programmers who share your subjective preferences on matters that are the equivalent of “vi vs. emacs”.
Are you sure about that? I have a strong suspicion that there may be a measurable correlation between using IDEs and other tools that aren't currently trendy/overhyped and having a stronger-than-average foundation of experience.
Given that VSCode is the big, trendy IDE at the moment, doesn't using a different one indicate that a developer has, at minimum, invested some thought and effort into considering alternatives and making a conscious choice?
> The only workplaces that realistically allow people to use Linux desktops are academia and top-5%-sexiness-factor startups. The other 95% of us have to use what our boss tells us to use (and he got told by the insurance company that scammed him into cybersecurity insurance).
This is super ironic and shows that your "us" in "the rest of us" is a tiny, marginal group, maybe "Silicon Valley programmers" or something. Most small software companies couldn't care less what you use; the only thing they look at is perceived "speed". You could install Red Star OS and get a few pats on the back if you're closing the highest number of Jira tickets.
Hell, nowadays more and more are fully remote, and the devs do their work on their personal devices. Or they do BYOD. Work devices are a cost center.
It's the opposite: the only places that force a particular OS are the top companies, for whom compliance, fleet management, and such are priorities.
Kind of funny, these mutually escalating accusations of being a member of an out-of-touch elite.
All it takes for BYOD to become difficult is having to handle personally identifiable information under the rules of the GDPR, or having some kind of professional indemnity insurance with cybersecurity provisions, perhaps having quality management certification, being in certain highly-regulated professions like law or medicine, being in the public sector, working government contracts, etc. etc. (the list goes on and on) -- I'm just finding it hard to believe that this list doesn't capture most companies.
But then again, maybe, I am a member of an out-of-touch elite.
Lecturing on "good taste" is a huge red flag for narcissism. "Taste" implies subjectivity. Pairing it with "good" is presupposing something along the lines of "my subjective evaluation of things is superior to yours", or "my subjective choices are superior to yours".
Not at all. Vanishingly few things during the development process of a novel thing have truly objective measures. The world is far too complex. We all act and exist primarily in a probabilistic environment. A subjective evaluation is not so different from simply making a prediction about how something will turn out. If your predictions based on subjective measures turn out to be more correct than others', your subjectivity is objectively better.
Hence the author's main point: good taste is taste that fits the needs of the project. If you can't align your own presuppositions with the actualities of the work you're doing, then obviously your subjective measures going forward will not be very good.
This is something I wrestle with. Objectively, it'd seem true that, say, a Henry Moore sculpture is in "better taste" than Disneyland. ;) But I 100% wouldn't wanna criticise anyone who preferred Disneyland. It's up to them; they don't have "poor taste" for preferring that... it's arrogant indeed to make such a judgement, but then again... surely... Henry Moore, Disneyland... there's no comparison? ;) so I go around in circles... ;)
That's exactly how it works in most fields that are not purely engineering but where the space of design solutions to do X is huge. Architecture, software development, ...
If I correctly catch the drift of your argument, you're saying “engineering is objective”, so there is such a thing as a right and a wrong choice in any given situation. ...Well, to the extent that that's true, “taste” is then a poor choice of words. Actually, I think that's the case for this article. I think the article is fine, but the title and “taste” as a choice of word are not great. The article is more about intellectual humility and subordinating your individual priorities to the requirements of the project, which is all perfectly fine.
There are some domains where the word “taste” can still properly be applied, for example “vi vs emacs” comes down to individual taste. But then, “emacs people have poor taste” is something that only a narcissist would say. (The “narcissism of small differences” is a well-studied phenomenon).
Or perhaps one uses this choice of words because one feels some sympathy for people who say this in other domains, like “This room, filled with IKEA furniture and film memorabilia, was decorated in poor taste”.
… either way, the red flag seems to stick.
The reason it's worth mentioning is that the notion seems to be catching on, and I've seen it applied, for example, in hiring decisions, where I think it's quite dangerous and counterproductive. It lends itself to rationalizing hiring only like-minded people, even where there is no objective ground for preferring one candidate to another.
I couldn't say emacs developers have poor taste, but I could say it's not my taste. I don't have to disrespect them. People think in different ways and get used to different things.
e.g. I might decide that some clothes, although well made and possibly even very fashionable, are not my taste. The superiority/inferiority of taste is something that insecure people focus on IMO. A tasteless thing would be something that doesn't seem to show any overall philosophy of design or something which is bombastic - it goes to town on some aspect at the expense of all others - there's a lack of balance. Even then, who cares?
If I wear a bright tomato-coloured suit because I like colour, why should that make me a bad person? It's only when other people have to accept your tastes because they work with you that they're going to moan about them.
The problem with software is that while it is in some sense objective, one of the most important properties of good software is that it can adapt to future requirements, and that's something that can only be evaluated in hindsight.
Lacking an objective way to predict that, we turn to taste.
I think taste is one of those things that we use to describe something that is a bit difficult to judge.
Something might not be to my taste but it can be good and workable nonetheless. It has taken some decisions that lead to solving a problem in a certain way and I can see that that way works and can be extended but it might not be a way that pleases me. Perhaps this is because it forces me to think in a mode that I am not generally accustomed to.
I think that depends on context. In some cases (sweet vs. salted popcorn) perhaps we could say that, while in others (rotten vs. fresh meat) it may well still be subjective (there are people with heterodox taste preferences out there!), yet I wouldn't take it to be a red flag for narcissism.
There are plenty of subjective preferences that we can make comparative claims about without any risk of narcissism.
Subjectivity is fine when it is backed by experience and knowledge. If anything, the narcissist perspective is the one where you claim expert opinion doesn't matter because it's all subjective and it hurts your feelings when people criticize your work (or your "taste").