I mean, is there any genuine case you can cover with SO that you cannot with your favorite LLM?
Because an LLM falls short on the same topics SO fell short on: highly technical questions about a particular technology or tool, where your best chance of getting the answer you're looking for is asking in its GitHub repo or contacting the maintainers.
>> I mean, is there any genuine case you can cover with SO that you cannot with your favorite LLM?
Perhaps better than current models at detecting and pushing back when it sounds like the individual asking the question is thinking of doing something silly/dubious/debatable.
The main benefit for me is the upvotes and comments telling me which solutions/approaches are better than others and why. Present-day LLMs on their own have neither that context nor the critical thinking.
As I see it, the next step is a synthesis of the two, whereby StackOverflow (or a competitor) reverses their ban on GenAI [0] and explicitly accepts AI users. I'm thinking that for moderation purposes, these would have to be explicitly marked as AIs, and would have to be "sponsored" by a proven-human StackOverflow user of good standing. Other than that, the AI users would act exactly as human users, being able to add questions, answers and comments, as well as to upvote and downvote other entries, based on the existing (or modified) reputation system.
I imagine, for example, that for any non-sensitive open source project I'm using Claude Code on, I would give it explicit permission to interact on SO: for any difficult issue it encounters, it would try to find an existing question that might be relevant and, if there is one, try the answers there and upvote/comment on them; otherwise it would create a new question and either get good answers from others or self-answer it if it later found its own solution.
Yes, that's been making the rounds. While I do think that we're entering a period of substantial hardship, I don't think that genAI is the major reason for it. I have no doubt, though, that if genAI becomes what its advocates think it will become, it will be used in a way that will speed up the bad.
I once wondered about this, but it kind of makes sense from the point of view of usability.
Unlike with a typical web service, you usually get very few attempts at a successful login before getting locked out, so even if the PIN is only four digits, the odds of a successful brute-force attack are very low.
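To put rough numbers on that (a back-of-the-envelope sketch; the three-attempt lockout is just a hypothetical figure, real devices vary):

```python
# Odds of brute-forcing a 4-digit PIN before a lockout kicks in.
pin_space = 10 ** 4           # 0000-9999: 10,000 possible PINs
attempts_before_lockout = 3   # hypothetical limit; varies by device

# Each distinct guess eliminates one candidate PIN, so the attacker's
# overall success probability is simply attempts / total PINs.
p_success = attempts_before_lockout / pin_space
print(f"{p_success:.2%}")     # -> 0.03%
```

Compare that with an unthrottled login endpoint, where an attacker could walk through all 10,000 candidates in seconds.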
I'm considering a change, still not actively applying, but I do get several LinkedIn DMs every week from first-party recruiters. I don't work with third-party staffing agencies.
There is a common theme: they are looking to fill leadership roles that still involve active contribution and require deep expertise in the company's tech stack.
So, where's the issue here? IMO, the market is saturated with generic software engineers, that is, people who can code and are good at leetcode, but who don't really stand out from the crowd in any particular technology. That's your typical FAANG engineer. And don't get me wrong, there is a lot of talent in FAANG, but most engineers commit to the grind to join and then just coast through.
And related to that, there is the unrealistic-expectations game. As others have mentioned, people in FAANG were living in their own bubble of unreal financial compensation. Now that the bubble has burst, some have unrealistic expectations and decline even high offers just because they're not what they had before.
Tumbleweed was the last distro I used for a PC before I switched to macOS back in 2015.
Back then, it was already better than any other distro in terms of balancing stability, usability, and being up to date. The most similar was Debian Testing, but openSUSE Tumbleweed was way ahead in providing a stable environment.
It's funny that you asked this, because just the other day I was thinking about starting a series of posts about it, but then I thought "what the hell, it's 2023, everybody should know how to use Docker already" and dismissed the idea.
A lot of people know. But a lot of people don't. E.g. if one is a freshman in college, they are probably not yet an expert in containers. Or if someone learned about containers in 2015, they might not be up to date with the best practices of 2023.
IMHO there's still room for high-quality blog posts about containers. There are lots of gotchas that could be explained, e.g. if you keep your Dockerfile commands in a suboptimal order, you won't get the benefit of layer caching when building the image. And why to use multi-stage builds, etc.
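To make the caching gotcha concrete, here's a minimal sketch for a hypothetical Node.js app (the base images, file names, and the `npm run build` output path are example assumptions, not anything prescribed above):

```dockerfile
# Suboptimal order (shown commented out): `COPY . .` comes before the
# dependency install, so ANY source-file change invalidates the cache
# for the expensive install layer.
#
#   FROM node:20
#   WORKDIR /app
#   COPY . .
#   RUN npm ci

# Better order: copy only the dependency manifests first, so the
# install layer stays cached until the manifests themselves change.
FROM node:20 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build   # assumed to emit static files into /app/dist

# Multi-stage build: the final image ships only the built artifacts,
# not the toolchain and node_modules used to produce them.
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```

The payoff: edit a source file and rebuild, and Docker replays the cached `npm ci` layer instead of reinstalling every dependency; meanwhile the final nginx image stays small because the whole build stage is discarded.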
I'm not sure why this is even on the front page now, given that it was released many years ago, but in any case, here's my two cents on this topic:
Containers are a misunderstood technology. People think of them as a privacy feature, but that's far from the truth. The only benefit of containers is the ability to have different sessions of the same service in the same window/profile. That's it. A good use case is when you need to work with multiple AWS accounts.
Cross-site tracking protection has already been enabled by default since... 2018? So, using containers as a way of dealing with cross-site tracking is utterly unnecessary.
It is a privacy feature, especially when paired with Cookie AutoDelete. I have CAD set up to always delete cookies when I close all tabs of a website, but I can add exceptions per website and per container. So I have a Google container that is the only place where I allow Google to leave cookies in my browser. I do the same for several other websites. Sure, Google can always track you somehow, like by IP address, but it's not as easy to track you around the Internet when you always use a fresh new session in each tab.
I used to use dark mode everywhere because it was commonly accepted that the less bright, the better.
But then, years ago, I read a science-based post about why light mode is better for our eyes, especially for people with astigmatism, as our eyes are naturally adapted to focusing on objects in bright scenes (sunlight).
I switched back to a light theme right away, except for coding, because syntax highlighting actually works better on a dark background.
Proposes that black-on-white text stimulates a pattern of expression in the visual system that may contribute to myopia:
> Using optical coherence tomography (OCT) in young human subjects, we found that the choroid, the heavily perfused layer behind the retina in the eye, becomes about 16 µm thinner in only one hour when subjects read black text on white background but about 10 µm thicker when they read white text from black background. Studies both in animal models and in humans have shown that thinner choroids are associated with myopia development and thicker choroids with myopia inhibition. Therefore, reading white text from a black screen or tablet may be a way to inhibit myopia, while conventional black text on white background may stimulate myopia.
Doesn't your quoted passage say the opposite? Reading white text on a black screen may inhibit myopia, and black text on white background does the opposite?
Have a look at the physiological sigh, explained by neuroscientist Andrew Huberman.
Basically, it consists of an inhale followed by another quick inhale, and then a long exhale.
The theory says that you repeat that pattern 2-3 times and your stress is drastically reduced, for some reason I don't remember right now, but he explains it very well on his podcast, and as a guest on other podcasts as well.