darioush's comments | Hacker News

A funny quirk about golang is you cannot have circular dependencies at the package level, but you can have circular dependencies in go.mod
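Roughly, something like this: two modules whose go.mod files require each other (module paths made up for illustration), which the Go toolchain accepts even though the same cycle between packages would be rejected:

    // a/go.mod
    module example.com/a

    go 1.21

    require example.com/b v1.0.0

    // b/go.mod
    module example.com/b

    go 1.21

    require example.com/a v1.0.0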

The tl;dr is don't do that either.


go.mod files have no dependencies, so they can't have circular dependencies either.

go.mod files just list the dependencies of their respective package.


It's becoming a bit like iPhone 3, 4... 13, 25...

Ok they are all phones that run apps and have a camera. I'm not an "AI power user", but I do talk to ChatGPT + Grok for daily tasks and use copilot.

The big step function happened when they could search the web, but not much else has changed in my limited experience.


This is a very apt analogy.

It gives the speaker confirmation that they're absolutely right: names are arbitrary.

While also politely, implicitly, pointing out that the core issue is it doesn't matter to you (which is fine!) but that it may just be contributing to a dull conversation to be the 10th person to say as much.


Right, freedom of speech is free as long as it agrees with the viewpoint of whoever's in power. Similar to how history is written by the victors, but this part is conveniently ignored. It's just facts in the open marketplace of ideas, yay!


The difference here is whether you search or seek something out, i.e. explicitly consent to viewing advertisements for guitars in your active browsing session, vs. them being pushed to you without your consent the next day on your phone.

I'm not against monetizing advertising in the first use case either.


Don't you think it is empowering and inspiring for artists? They can try several drafts of their work instantaneously, checking out various compositions etc. before even starting the manual art process.

They could even input/train it on their own work. I don't think someone can use AI to copy your art better than the original artist can.

Plus, art is about provenance. If we found a scrap of paper with some scribbles from Picasso, it would be art.


This does seem to work for writing. Feed your own writing back in and try variations / quickly sketch out alternate plots, that sort of thing.

Then go back and refine.

Treat it the same as programming. Don't tell the AI to just make something and hope it magically does it as a one-shot. Iterate, combine with other techniques, make something that is truly your own.


Oh, they hate it so much when this hypocrisy is pointed out. Better put the high school kids downloading books on Pirate Bay in jail, but I guess if your name starts with Alt and ends in man, then there's an alt set of rules for you.

Also, remember when GPU usage was so bad for the environment when it was used to mine crypto? But I guess now it's okay to build nuclear power plants specifically for gen-AI.


Just an example of bureaucracy at work.

This is what happens when you centralize all decision making to people who have no local knowledge of the community they are administering, and who predicate their jobs on following a checklist, usually as implemented by buggy software, instead of making a judgement call based on experience and circumstances.


If you go by the Merriam-Webster definition of fascism, this is getting pretty close.

> Fascism : a populist political philosophy, movement, or regime (such as that of the Fascisti) that exalts nation and often race above the individual, that is associated with a centralized autocratic government headed by a dictatorial leader, and that is characterized by severe economic and social regimentation and by forcible suppression of opposition


As if this has anything to do with locality? Rulers of parcels of land small enough that you could see them in full from a single hill have been utter despots before. Heck, just look at how some parents treat their children.

This is what happens when you elect a convicted felon, rapist, bullying loser compromised by the Russian state who outright says he wants to be a dictator and he puts sycophants and bullies into positions of power to do exactly that.

Saying this is about local vs nonlocal governance is nothing more than shirking the responsibility of 70,000,000 Americans who wanted this and 100,000,000 Americans who couldn't be bothered to stop it.


I do think that there’s an aspect of small government that connects those 70M to their government in a way that large scale federalization may fail to do. Not sure how to achieve benefits of scale without this problem…


Yes, and the open source models + local inference are progressing rapidly. This whole API idea is kind of limited by the fact that you need to round-trip to a datacenter + trust someone with all your data.

Imagine when OpenAI has their 23andMe moment in 2050 and a judge rules that all your queries since 2023 are for sale to the highest bidder.
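
For a sense of what local inference looks like from the application side, here's a minimal sketch in Go, assuming a locally hosted model behind an OpenAI-compatible HTTP server (e.g. something like llama.cpp's server or Ollama); the URL and model name are placeholders. The prompt never leaves the machine:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Placeholder endpoint for a locally hosted, OpenAI-compatible server.
        url := "http://localhost:8080/v1/chat/completions"

        // Request body; the model name is a stand-in for whatever local model is loaded.
        body, _ := json.Marshal(map[string]any{
            "model": "local-model",
            "messages": []map[string]string{
                {"role": "user", "content": "Summarize this note without sending it to a datacenter."},
            },
        })

        resp, err := http.Post(url, "application/json", bytes.NewReader(body))
        if err != nil {
            fmt.Println("local server not reachable:", err)
            return
        }
        defer resp.Body.Close()

        // Print the raw JSON response; no third party ever sees the prompt.
        out, _ := io.ReadAll(resp.Body)
        fmt.Println(string(out))
    }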


It doesn't need to wait until 2050. The queries would be for sale as soon as they stop providing a competitive advantage.


Even worse for these LLM-as-a-service companies is that the utility of open source LLMs largely comes down to customization: you can get a lot of utility by restricting token output, varying temperature, and lightly retraining them for specific applications.

The use-cases for LLMs seem unexplored beyond basic chatbot stuff.
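
To make the temperature knob from above concrete, here's a toy sketch of temperature-scaled sampling (made-up logits, not tied to any particular model or runtime): lower temperature concentrates probability on the top token, higher temperature spreads it out.

    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    // sampleWithTemperature turns raw logits into a probability distribution,
    // scaled by temperature, and samples one token index from it.
    func sampleWithTemperature(logits []float64, temperature float64, rng *rand.Rand) int {
        // Scale logits: T < 1 sharpens, T > 1 flattens the distribution.
        scaled := make([]float64, len(logits))
        maxLogit := math.Inf(-1)
        for i, l := range logits {
            scaled[i] = l / temperature
            if scaled[i] > maxLogit {
                maxLogit = scaled[i]
            }
        }

        // Softmax (subtracting the max for numerical stability).
        var sum float64
        probs := make([]float64, len(scaled))
        for i, l := range scaled {
            probs[i] = math.Exp(l - maxLogit)
            sum += probs[i]
        }

        // Sample from the (unnormalized) cumulative distribution.
        r := rng.Float64() * sum
        var cum float64
        for i, p := range probs {
            cum += p
            if r <= cum {
                return i
            }
        }
        return len(probs) - 1
    }

    func main() {
        logits := []float64{2.0, 1.0, 0.5, 0.1} // toy logits for 4 candidate tokens
        rng := rand.New(rand.NewSource(42))
        fmt.Println("T=0.2:", sampleWithTemperature(logits, 0.2, rng)) // almost always token 0
        fmt.Println("T=1.5:", sampleWithTemperature(logits, 1.5, rng)) // much more varied
    }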


I'm surprised at how little their utility for turning unstructured data into structured data, even with some margin of error, is discussed. It doesn't even take an especially large model to accomplish it, either.

I would think entire industries could reform around having an LLM as a first pass on data, with software and/or human error checking at significant cost reduction over previous strategies.


The software-based second pass is where the most value lies (and the hard problems)
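
A sketch of what that second pass can look like (the Invoice schema and checks here are invented for illustration): parse whatever JSON the model emitted into a typed struct and reject anything that doesn't pass deterministic checks before it goes downstream.

    package main

    import (
        "encoding/json"
        "errors"
        "fmt"
        "strings"
    )

    // Invoice is a hypothetical target schema for LLM-extracted data.
    type Invoice struct {
        Vendor   string  `json:"vendor"`
        Total    float64 `json:"total"`
        Currency string  `json:"currency"`
    }

    // validate is the "software-based second pass": cheap, deterministic checks
    // that catch the model's margin of error before the data is trusted.
    func validate(inv Invoice) error {
        if strings.TrimSpace(inv.Vendor) == "" {
            return errors.New("missing vendor")
        }
        if inv.Total <= 0 {
            return errors.New("total must be positive")
        }
        if len(inv.Currency) != 3 {
            return errors.New("currency must be a 3-letter code")
        }
        return nil
    }

    func main() {
        // Pretend this JSON came back from the model's extraction pass.
        llmOutput := `{"vendor": "Acme Corp", "total": 1249.50, "currency": "USD"}`

        var inv Invoice
        if err := json.Unmarshal([]byte(llmOutput), &inv); err != nil {
            fmt.Println("reject: not valid JSON:", err)
            return
        }
        if err := validate(inv); err != nil {
            fmt.Println("reject: failed checks:", err) // route to human review instead
            return
        }
        fmt.Printf("accepted: %+v\n", inv)
    }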


Deferring to best practice instead of best judgement is a major plague of the software industry these days.

Best practices usually come from giant companies with tens of thousands of engineers, like Google (which doesn't seem to be keeping up with the competition, btw) and Amazon (which is notorious for burning people out).

What science or evidence drives the best practices?


> what science or evidence drives the best practices?

management "science"


This doesn't always work. Many things can go wrong in distributed systems and you cannot test for all of them. Also, you have no control over your dependencies, like when AWS networking degrades or a 3rd party API provider changes their APIs without letting you know.


True, but these things happen very, very rarely. Also: is there anything you can do about it? No? Remove the alert and replace it with a "we are down, sorry" message. Yes? Then automate that thing.

Rinse and repeat after every incident and you will eventually get paged rarely.


I think if you have a reasonable environment, where on-call feeds back to development (which is what OP is suggesting, more or less), you will absolutely get woken up for networking problems, because there's not really an alternative. Maybe some thresholding to allow for minor problems without alerting, but you know. If it's a big enough problem, someone has to fix it, and it doesn't matter if it's your problem or your dependency's problem, it breaks your service so it's your problem. If it happens a lot, you look for another network to run on.

For 3rd party APIs, if they're not critical, you start to develop kill switches. So yeah, someone has to wake up and handle it, but all they have to do is set the kill switch and go to sleep.
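
A kill switch can be as simple as a process-wide flag the on-call person flips before going back to sleep; a minimal sketch (names and wiring are hypothetical):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // killSwitch is a process-wide flag the on-call engineer can flip
    // (via an admin endpoint, config push, etc.) to stop calling a flaky vendor.
    var killSwitch atomic.Bool

    func fetchFromVendor() (string, error) {
        if killSwitch.Load() {
            // Degrade gracefully instead of waking anyone else up.
            return "", fmt.Errorf("vendor integration disabled by kill switch")
        }
        // ... the real 3rd-party API call would go here ...
        return "vendor response", nil
    }

    func main() {
        fmt.Println(fetchFromVendor()) // normal path
        killSwitch.Store(true)         // on-call sets the switch and goes back to sleep
        fmt.Println(fetchFromVendor()) // degraded path
    }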

Personally, I did dev and on-call for SMS/Voice verification codes. Most of the time, that's in the nasty corner where it's super critical to the application (users can't use the product if they can't get a verification code) and it depends on 3rd parties that have three nines on a good year. In my case, I got tired of dealing with the disruption and developed automated routing that could manage most providers taking an outage without needing me to take action. Results could be better if a human took notice and action, but it was good enough most of the time, and partial outages were much easier for the automation to handle.

Even if there's no way to do something like that, at least automation can take care of the case where a 3rd party API is failing hard: mostly return errors quickly without trying the API, and only let a small fraction of requests go through to sample whether the service has come back online. That can keep your servers from getting overwhelmed, as well as drive the alert that helps you wake up, yell at the vendor, and decide if you can go back to sleep while the system takes care of itself.
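
That's essentially a circuit breaker with probing; a rough sketch of the shape of it (thresholds and structure invented for illustration):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
    )

    // breaker fails fast once the 3rd party looks down, but still lets a small
    // fraction of requests through to sample whether it has recovered.
    type breaker struct {
        consecutiveFailures int
        failureThreshold    int     // trip after this many failures in a row
        probeFraction       float64 // share of requests allowed through while tripped
    }

    func (b *breaker) call(api func() error) error {
        tripped := b.consecutiveFailures >= b.failureThreshold
        if tripped && rand.Float64() > b.probeFraction {
            // Fail fast without touching the API so our own servers stay healthy.
            return errors.New("fast-fail: upstream marked down")
        }
        err := api()
        if err != nil {
            b.consecutiveFailures++
            return err
        }
        b.consecutiveFailures = 0 // a successful probe closes the breaker
        return nil
    }

    func main() {
        b := &breaker{failureThreshold: 3, probeFraction: 0.05}
        flakyAPI := func() error { return errors.New("503 from vendor") }

        for i := 0; i < 10; i++ {
            fmt.Println(b.call(flakyAPI))
        }
    }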

When on-call is disconnected from development, that's when it gets really miserable. If you can swing a shift-work/follow-the-sun operator job, that's certainly better than on-call where incidents are common and there's no feedback loop to reduce things. It may well be better even if there is a feedback loop, but the feedback loop in that case requires explicit communication and effort; if I'm on call for my own work, I don't have to tell myself not to push shit code right before I leave for the day or go to sleep, I'll get that message from myself right away. If someone else is cleaning up after my messes and doesn't communicate the effects to me, I might never know.


You are right: if your software is suffering regularly from the same issues, you go and fix them at the architecture level.

Network issues? Use a second DC, or some HA SDN setup, or run from a second DC.

3rd party API issues? Change vendor, or send stuff to a queue to reprocess later. All of these issues could and should be solved, and that's the job of the developer.
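
A sketch of the queue-and-reprocess option (an in-memory channel here just to show the shape; a real setup would use a durable queue):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // job is whatever we wanted to send to the 3rd party.
    type job struct{ payload string }

    func main() {
        retryQueue := make(chan job, 100)

        // A vendor call that happens to be failing right now.
        callVendor := func(j job) error { return errors.New("vendor unavailable") }

        // Normal path: on failure, park the work instead of paging anyone.
        j := job{payload: "send welcome SMS"}
        if err := callVendor(j); err != nil {
            retryQueue <- j
            fmt.Println("queued for later:", j.payload)
        }

        // Reprocessor: drain the queue once the vendor is healthy again.
        recoveredVendor := func(j job) error { return nil }
        time.Sleep(10 * time.Millisecond) // stand-in for "later"
        for {
            select {
            case j := <-retryQueue:
                if err := recoveredVendor(j); err == nil {
                    fmt.Println("reprocessed:", j.payload)
                }
            default:
                return
            }
        }
    }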

