The goal is to tackle it in every way. The medicines are supposed to be supportive, not the solution, but more often than not people treat them as the solution.
That's why they are eventually tapered and discontinued once you are able to manage on your own.
As someone who tried citalopram, escitalopram, and sertraline, along with venlafaxine and fluvoxamine, I would suggest doing a pharmacogenomic test before starting psychiatric medications.
I am an intermediate metabolizer for the first three, which are also the ones I was on the longest. They did not suit me and took my orgasms from ‘wtf’ to ‘that’s it?’ And they are still not normal 2 years after discontinuation.
I am still depressed and anxious, to the point of seriously considering these medicines again to save myself, but you can save yourself the experimentation by doing a simple test and avoiding the ones that won't suit you.
Anxiety, depression, and panic attacks are something I wish more people studied, along with sexual health.
That’s B3 (niacin) for nerve and DNA repair, if I understand correctly.
I notice a difference after eating non-vegetarian food, but since I also have IBS, my absorption can be hit or miss.
A balanced, nutrient-rich diet with good sleep and exercise is the starting point for mental health recovery. People often forget how important all of this is.
for IBS: no coffee, no milk (any milk), no vegetable oils, wheat, soy, or anything from the USA :D and no sugar or alcohol.
then slippery elm and probiotics, plus vitamin D and vitamin C.
intermittent fasting is also helpful.
add berries and, yes, beef and lots of greens.
with a strict diet, IBS gets better.
Training is really cheap compared to the basically free inference being handed out by OpenAI, Anthropic, Google, etc.
Spending a million dollars on training a model and giving it away for free is far cheaper than spending hundreds of millions of dollars on inference every month while charging only a few hundred thousand for it.
"Significantly complex" means the point where the ORM layer keeps growing and you need multiple threads and more complex processes running in workers. Once you start running into scaling problems, your solution has to live within that framework, and in my experience that becomes a limiting factor.
Then, as a programmer, you have to find workarounds in Django instead of solving the problem with plain programming.
PS: Dealing with a lot of scaling issues right now with a Django app.
The framework itself is not the limiting factor. The main performance constraint usually comes from Python itself (which is really slow), and possibly I/O.
There are well-established ways to work around that. In practice, a lot of the heavy lifting happens in the DB, and you can offload workloads to separate processes as well (whether those are Python, Go, Rust, Java, etc.).
You need to identify the hotspots, and blindly trusting a framework to "do the job for you" (or for that matter, trusting an LLM to write the code for you without understanding the underlying queries) is not a good idea.
I'm not saying you are doing that, but how often do you use the query planner? Whenever I've heard someone saying Django can't scale, it's not Django's fault.
> When you start to run into scaling problems, your solution is within that framework and that becomes a limiting factor from my experience.
Using Django doesn't mean that everything needs to run inside it. I am working on an API that needs async performance, and I run separate FastAPI containers while still using Django to maintain the data model + migrations.
Occasionally I will drop down to raw SQL or materialized views (if you are not using them with Django, you are missing out). And the obvious ones for any Django dev: select_related, prefetch_related, annotate, etc.
Yeah, I don’t get the issues here. I’ve led projects that served millions of requests a day and had dozens of apps, and while there are always going to be pain points and bottlenecks, nothing about the framework itself is a hindrance to refactoring. If anything, Django plus good tests made me much braver about what I would try.
> And the obvious for any Django dev; select_related, prefetch_related, annotate
And sometimes not so obvious: I have been bitten by forgetting one select_related while joining 5 tables but inadvertently using only 4 select_related calls. The tests pass, but the real data has enough records to cause an N+1, and a request that used to take 100 ms started hitting a 30-second timeout from time to time.
Once we added the missing select_related we went back to sub-second requests, but it was very easy to start blaming Django itself because the number of records to join was getting high.
The cases where we usually step off the Django path are serialization and representation, trying to avoid the creation of intermediate objects when we only need the values() return.
django-seal offers optionality on this front. You can keep Django's default behavior or opt into sealing for code where you _really_ want to avoid N+1s.
You may already know this; it's mostly meant for others hitting the same issue.
In Django, you can count the number of queries issued in a unit test. You don't need 1M objects in the test; maybe 30 would do in your case.
If the code under test issues more than X queries, you should assume you have an N+1 bug. For example, if you have 3 prefetch_related and 2 select_related calls on 30 objects but end up with more than 30 queries, then you have an N+1 someplace.
Even better, that unit test will protect you from reintroducing the error in the future in that chunk of code accessing that table.
> Then as a programmer, you have to find workarounds in Django instead of workarounds with programming.
The mental unlock here is: Django is only a convention, not strictly enforced. It’s just Python. You can change how it works.
See the Instagram playbook. They didn’t reach a point where Django stopped scaling and then move away from it. They started modifying Django, because it’s pluggable.
As an example, if you’re dealing with complex background tasks, at some point you need something more architecturally robust, like a message bus feeding a pool of workers. One simple example could be, Django gets a request, you stick a message on Azure Service Bus (or AWS SQS, GCP PubSub, etc), and return HTTP 202 Accepted to the client with a URL they can poll for the result. Then you have a pool of workers in Azure Container Apps (or AWS/GCP thing that runs containers) that can scale to zero, and gets woken up when there’s a message on the service bus. Usually I’d implement the worker as a Django management command, so it can write back results to Django models.
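That enqueue-and-poll flow can be sketched with an in-process stand-in. Here `queue.Queue` plays the role of the service bus and a dict plays the results table; the names `handle_request` and `worker` are made up for illustration, and real code would use the Azure Service Bus or SQS SDK, with the worker as a Django management command writing results back to models.

```python
import queue
import threading
import uuid

tasks = queue.Queue()   # stand-in for Azure Service Bus / SQS
results = {}            # stand-in for a results table (a Django model)

def handle_request(payload):
    """Web tier: enqueue the work and immediately return 202 + a poll URL."""
    task_id = str(uuid.uuid4())
    tasks.put((task_id, payload))
    return 202, f"/results/{task_id}"

def worker():
    """Worker pool: drain the queue and write results back."""
    while True:
        task_id, payload = tasks.get()
        results[task_id] = payload * 2  # pretend this is the expensive part
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

status, url = handle_request(21)
tasks.join()  # in real life the client polls the URL until the result exists
task_id = url.rsplit("/", 1)[1]
print(status, results[task_id])  # 202 42
```

The important property is that the web tier never blocks on the work; scaling the worker pool (including to zero) is then an infrastructure decision, not an application change.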
Or, if your background tasks have complex workflow dependencies, you need an orchestrator that can run DAGs (directed acyclic graphs), like Airflow or Dagster or similar.
These are patterns you’d need to reach for regardless of tech stack, but Django makes it sane to do the plumbing.
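At its core, what those orchestrators do is execute tasks in topological order over the dependency graph. A minimal sketch using Python's standard-library `graphlib` (the `extract`/`transform`/`load` tasks are hypothetical; real orchestrators add retries, scheduling, and distributed execution on top of this idea):

```python
from graphlib import TopologicalSorter

def run_dag(tasks, deps):
    """Run callables in dependency order; each task sees prior results."""
    results = {}
    for name in TopologicalSorter(deps).static_order():
        results[name] = tasks[name](results)
    return results

tasks = {
    "extract": lambda r: [1, 2, 3],
    "transform": lambda r: [x * 10 for x in r["extract"]],
    "load": lambda r: sum(r["transform"]),
}
deps = {"transform": {"extract"}, "load": {"transform"}}

print(run_dag(tasks, deps)["load"])  # 60
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is exactly the "acyclic" guarantee the orchestrators enforce.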
The lesson from Instagram is that you don’t have to hit a wall and do a rewrite. You can just keep modifying Django until it’s almost unrecognizable as a Django project. Django simply starts you with a good convention that (mostly) prevents you from doing things you’ll regret later (except for untangling cross-app foreign keys; that part requires curse words and throwing things).
Recently, in an interview, I learned how company A (a big western company) acquired another company years ago and is now redeveloping code that was an important part of the acquisition, because it fails to scale beyond the original use case.
Hearing this kind of thing during interviews is a lot of learning in itself, especially if you already work in the same area.
The original problem is stable matching, which used ‘marriage’ as the example, IIRC (the Gale–Shapley algorithm).
I’m sure dating apps use plenty of such algorithms and match profiles based on ranking. For example, the number of women matching with a shorter guy would be quite low on apps, because their preference for taller men can be inferred from their swipes.
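For reference, the classic stable-marriage solution is Gale–Shapley deferred acceptance. A minimal sketch (the names and preference lists below are made up; real matching systems rank on inferred preferences like the swipe data mentioned above):

```python
def stable_match(proposer_prefs, receiver_prefs):
    """Gale–Shapley: proposers end up with their best achievable stable match."""
    free = list(proposer_prefs)               # proposers still unmatched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                              # receiver -> current proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # propose to next preference
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])           # receiver trades up; old match freed
            engaged[r] = p
        else:
            free.append(p)                    # rejected; try next preference later
    return {p: r for r, p in engaged.items()}

men = {"adam": ["yasmin", "zoe"], "bob": ["zoe", "yasmin"]}
women = {"yasmin": ["bob", "adam"], "zoe": ["adam", "bob"]}
matching = stable_match(men, women)
print(matching)  # {'adam': 'yasmin', 'bob': 'zoe'}
```

The result is stable: no pair would both rather be with each other than with their assigned partners, which is exactly the property the original marriage paper proved.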
The article has a rather arrogant, or maybe frustrated, tone, as if trying to defy the odds somehow?
The key thing to note: take your chances, shoot your shot, and don’t take rejections personally.
Companies like Bloomberg employ thousands of people whose livelihoods depend on their salaries, which in turn lets them pay rent and spend money. The economy is interdependent with a lot of such bad businesses. While from the outside we may think this is a scam, it would be very difficult to crack down on them without mass effects on the population. What happened in the dot-com bubble, or in 2008 when these scams collapsed, is an example of what can happen when unethical and illegal companies fail.
This is, of course, a capitalist economy, where the government does not provide simple benefits like free healthcare and subsidized higher education in exchange for the high competition and high churn rate of businesses and startups.
So the solution is to tame these beasts from time to time, use that as a political agenda to win votes, and try to keep things sane.
Resume glorification and LinkedIn / GitHub profile attention do that.
I am seeing a lot of people coming up with perceived knowledge that's just an LLM echo chamber. The code they contribute comes straight out of LLMs. That's generally fine as long as they know what it does, but when you ask them to make some changes, some are as lost as ever.
Torvalds was right, code maintenance is going to be a headache thanks to LLMs.
At this point I won’t consider any GitHub activity after ~2024 as a hiring signal unless it’s very substantial work on high-profile projects that clearly have high bars.
We had a bootcamp in our city that had all its students build a GitHub portfolio. They all built the same projects, like a TODO app. Every person’s code would look almost identical, because they all did the projects together and, I suspect, copied from past grads.
They all applied to the same local jobs, too. So we’d get a batch of their resumes with GitHub links, follow the links, and see basically the same codebase repeated everywhere.
I kind of suspected that some bootcamp or college or something is telling all these people to just go to GitHub, create an account, spam it with activity, and you'll get a job! At this point I don't think "has a GitHub account" can be used as any signal of programming ability whatsoever.
I mean, I never considered having GitHub projects as meaning anything. If you have a project that seems useful and has, say, a hundred stars or more (a rough signal, assuming no foul play), I'll have a look. If you say you have meaningful contributions to a project with a thousand stars or more, I may have a look as well.
Now my bar is so much higher that 99.95% of juniors without pre-2024 work to show can forget about it.
> Torvalds was right, code maintenance is going to be a headache thanks to LLMs.
I know someone in a senior engineering position at Epic who does nothing but clean up PRs from their offshored Ukrainian sweatshop coders handing in AI slop, because all those coders need to do to get paid is close a ticket. They wind up rewriting half of it or more. Epic doesn't seem to care, so long as this "solution" works and saves them money: pay a few really smart people to be code janitors until, hopefully, all of them can be replaced by LLMs.
As if I needed another reason not to hire people coming from Epic.
For a company as financially focused as Epic, it's surprising to me that they'll pay the offshore devs simply for submitting code, even if it doesn't work and needs to be rewritten.
I’m actually of the mind that it will be easier IF you follow a few rules.
Code maintenance is already a hassle. The solution is to preserve the intent, or the original requirements, in the code or documentation. With LLMs, that means carrying the prompts through and ensuring tests are generated that prove the generated code matches that intent.
Yes, I get that a million monkeys on typewriters won’t write maintainable code. But the tool they are using makes it remarkably easy to do, if only they learn to use it.
I’m not sure why the downvotes. I think the poster is basically saying the same thing as this YouTuber. I read “million monkeys” as referring to LLMs.
It's sad to see Apple go from customer experience first to investor satisfaction first. There is a lot of criticism of iOS 26 and Tahoe being bloated and slow, with planned obsolescence clearly taking centre stage.
People can hope that Apple takes its operating systems as seriously as its ARM chips, but it doesn't seem likely. A 'performance and bug fixes year' will happen in the cycle, giving them an excuse to further bloat the operating systems in the pipeline. This is the worst part: we will fix things now to show you we can, then bloat them in subsequent years so that you upgrade your devices.