mind-blight's comments

This is a weird one. It absolutely should not be haphazardly added as a rider. The 0.4 per container is also insane. But this really was an unintended loophole of the 2018 farm bill. Most cannabis plants produce THCa, which converts to Delta-9 when heated. The drafters were ignorant and straight up forgot to specify anything except Delta-9.

Cannabis is a bioremediator and absorbs basically every environmental toxin from the ground (pesticides, heavy metals, etc.). Extraction (for CBD and THC oil) increases the concentration of any toxins present.

The only way to know about the problem is to thoroughly test every batch. Pesticides that are safe at low levels can get concentrated and become really problematic at high levels.

States where marijuana is legal require all of this testing, so the products are much safer. Hemp-derived THC does not require these tests. (Same is true for CBD, but that's a whole other conversation...)


There is pretty extensive testing throughout the industry. Small hemp farms don't want to murder their customers or themselves.


It's night and day. It's also about access. The labs in legal states usually just test for more things. For a while, it was one or two labs plus a couple of extraction companies that were pushing the testing boundaries. Then regulation caught up and pushed the broader testing onto everyone. Then there were mandated batch sizes (though these got too small in Oregon). Hemp does not have the same regulations, and unregulated states have way less infrastructure (including access to good labs).

Nobody wants to harm their customers, but it 100% happened in the early days. A lot of the harm is/was not immediately obvious; often it was repeated exposure to harmful chemicals. Good intentions are great, but resources and incentives still matter. Nobody wants to get hacked, but building a new feature instead of hardening is what stops you from getting yelled at.


So their team is anonymous. While I understand the desire for that, trust is built through transparency. It's really hard to convince someone whose job, career, or potentially even life is at risk to trust random strangers on the Internet.

It seems like they need people willing to attach their names to it to create credibility.


Have we forgotten you can authorize without authenticating? I can prove I'm inside the Google office without saying who I am.
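
To make the distinction concrete, here's a minimal sketch of authorization without authentication; Express is assumed, and the subnet and route are made up for illustration:

    import express from "express";

    const app = express();

    // Hypothetical office network range. Anyone connecting from it is
    // authorized, but we never learn who they are.
    const OFFICE_SUBNET = "10.20.";

    app.get("/internal/dashboard", (req, res) => {
      // The check proves where the caller is, not who the caller is.
      if (req.ip && req.ip.startsWith(OFFICE_SUBNET)) {
        res.send("welcome, whoever you are");
      } else {
        res.status(403).send("not on the office network");
      }
    });

    app.listen(3000);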


The point is: how does the whistleblower know they aren't whistleblowing to the very people being reported on (or their allies) if they don't know who is behind it?

To pull an example out of thin air, would you risk whistleblowing to TruthWave on Amazon if you knew that the Washington Post was running TruthWave?


Or, would you whistleblow on Tesla if you knew any one of a hundred companies could be behind it, like Meta, Alphabet, Amazon, ...? About the only "big" entity I MIGHT trust would be Berkshire Hathaway.


I would trust the Washington Post with a sensitive tip more than I would trust an Internet project.


I think this trust (in the Post) is now misplaced, and in the case of the Post and Amazon, you absolutely shouldn't. But perhaps trust in any single newspaper was always misplaced.

This is why whistleblowers now often work with two different organisations with different ownership/politics, or in different branches of media, or with a journalist backed by the ICIJ (e.g. the Mossack Fonseca leak investigation was shared with the ICIJ).

But yes, any generic online whistleblowing broker with dozens of concurrent cases is going to be such an obvious target for state or organised crime interference. Anyone making a business of brokering whistleblowing for a cut of the reward is an obvious risk.


I would trust a Murdoch paper more than I would trust this site; I would meaningfully trust the WSJ, and I don't trust this at all.


Wrong direction: the parent is asking for clarity about who owns and operates the platform itself, not clarity about who the whistleblower is.


Does that prove much? I have been inside a Google office without ever having worked for Google (visitor).


Then the service seems to provide zero value; there are already "untrusted" platforms. If I have to anonymize myself anyway, I can just post on Reddit/Twitter/Orange site directly.


Took me all of 2 minutes to put a name to one of the folks involved in the project.

I think this is a good goal, but I question the platform based on this point.


I mean, 35 years ago, a random stranger on the internet was MORE trustworthy in my eyes than some people I knew face-to-face.

These days? Pfft...


We all know how this ends lmao


Ah, that's a shame they went that way. What I tell people when they're first getting into GCP is that it's gonna be a pain to set up what you want. But the offerings are great, and once it's working, it'll just work.


I'm gonna be honest, I've been developing with react for about 9 years across a lot of projects and companies. I've never used next.

Maybe I'm out of touch, but I don't understand why people think it's so tightly coupled with the ecosystem.


There is a large amount of what _might_ be described as astroturfing on the part of vercel to push Next. More charitably, vercel/the next community publishes a very large number of good tutorials/boilerplates/etc that are built on top of next.js.


If you check the docs for how to create a React app, the first thing they recommend is Next.js.


Oh interesting - I haven't been on their starting page in years. I'm surprised getting started with vite isn't higher up. That takes 5 minutes and doesn't require a full framework.

That said, starting with react router or expo is probably the right call depending on the project needs. Routing is not something you want to do yourself, and react native is pretty unfriendly without expo


One issue we're running into at my job: we're struggling to find entry-level candidates who aren't lying about what they know by using an LLM.

For the tech side, we've reduced behavioral questions and created an interview that allows people to use Cursor, LLMs, etc. in the interview - that way, it's impossible to cheat.

We have folks build a feature on a fake code base. Unfortunately, more junior folks now seem to struggle a lot more with this problem.


The thing about entry-level candidates is that we expect them to know relatively little, anyway. When I've been delegated to participate in interviewing new candidates, a question I really like is "What's your favorite project you've worked on lately? What was interesting about it? Run into any tricky problems along the way? It can be anything: for work, school, a hobby project. Doesn't even need to be software".

It slices through the bullshit fast. Either the person I'm interviewing is a passionate problem solver, and will be tripping over themselves to describe whatever oddball thing they've been working on, or they're a charlatan or simply not cut out for the work. My sneaking suspicion is that we could achieve similar levels of success in hiring for entry-level positions at my current company if we cut out literally the entirety of the rest of the interviews, asked that one question, and hired the first person to answer well.


We came up with some simple coding exercises (about 20 minutes total to implement, max) and asked candidates to submit their responses when applying. Turns out one of the questions regularly causes hallucinated APIs in LLM responses, so we've been able to weed out a large percentage of cheaters who didn't even bother to test the code before submitting.

The other part is that you can absolutely tell during a live interview when someone is using an LLM to answer.


This comment makes no sense. They're actively open sourcing the patent and trying to get it upstreamed into Postgres. They purchased another company to get this patent, and they're spending a lot of money on lawyers to figure out how to release it to the community.

Call out shady shit when companies do shady things, but the sentiment behind this comment seems to be looking for reasons to be outraged instead of looking at what's actually being done.

If companies get eviscerated every time they try to engage with the community, they'll stop engaging. We should be celebrating when they do something positive, even if there are a few critiques (e.g. the license change call-out is a good one). Instead, half the comments seem like quick reactions meant to stoke outrage.

Please have some perspective - this action is a win for the community.


I stated I see the good intentions.

I am "owner" of a bunch of patents, too, and some have actually been proven their test of time by after years having been re-invented (better: "parallel-invented later in time") elsewhere in the open source world.

But in my value system one does not do press releases saying "HELLO! We have decided not to do something evil!".

They could have done the very same thing quietly to make clear there is no hidden agenda.

"Look, we hold this trivial patent on the open source ecosystem. No no no, all will be fine. No, no, we will not pick up the phone should Broadcom call us one day."

Yay. \o/


Yeah, huge props here. There's a contingent on HN that seems to assume that almost any action by a company is done in bad faith. I dislike all of the shady stuff that happens, but that's why we should celebrate when companies are doing awesome things.

This is all positive. Super appreciate what you folks have done. It's clearly hard, well intentioned, and thoughtfully executed.


It really depends on what your use case is. E.g. if you're dealing with a lot of legacy integrations, handling all the edge cases can require a lot of code that you can't refactor away through cleverness.

Each integration is hopefully only a few thousand lines of code, but if you have 50 integrations you can easily break 100k LOC just dealing with those. They just need to be encapsulated well, so that the integration cruft is isolated from the core business logic and becomes relatively simple to reason about.
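
As a rough sketch of what that encapsulation can look like (the interface and adapter names here are invented, not from any real codebase):

    // Core business logic only ever sees this interface.
    interface PaymentProvider {
      charge(amountCents: number, customerId: string): Promise<string>;
    }

    // Each legacy integration gets its own adapter, and all of the
    // edge-case cruft lives inside it.
    class LegacyAcmeAdapter implements PaymentProvider {
      async charge(amountCents: number, customerId: string): Promise<string> {
        // Pretend Acme wants dollar amounts as strings and customer
        // IDs zero-padded to 12 characters.
        const dollars = (amountCents / 100).toFixed(2);
        const paddedId = customerId.padStart(12, "0");
        // ...call Acme's API here; return its transaction reference.
        return `acme:${paddedId}:${dollars}`;
      }
    }

    // 50 adapters add bulk, but checkout() never gets more complex.
    async function checkout(p: PaymentProvider, cents: number, id: string) {
      return p.charge(cents, id);
    }

Each adapter may be a few thousand lines in real life, but the core logic stays the same size no matter how many of them exist.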


GPT-5 is a bit better - particularly around consistency - and a fair amount cheaper. For all of my use cases, that's a huge win.

Products using AI-powered data processing (a lot of what I use it for) don't need mind-blowing new features. I just want it to be better at summarizing and instruction following, and I want it to be cheaper. GPT-5 seems to knock all of that out of the park.


> GPT-5 is a bit better - particularly around consistency - and a fair amount cheaper. For all of my use cases, that's a huge win.

Which is more or less a natural evolution of LLMs... The thing is, where are my benefits as a developer?

If, for instance, Copilot charges 1 premium request for Claude and 1 premium request for GPT-5, even though GPT-5 (given its resource usage) is supposed to be on the level of GPT-4.1 (a free model), then (from my point of view) there is no gain.

So far, from a coding point of view, Claude (often) still does better. I'd make the comparison that Claude feels like a senior dev with years of experience, whereas GPT-5 feels like an academic professor who is too focused on analytic presentation.

So while it's nice to see more competition in the market, I still rank (within Copilot):

Claude > Gemini > GPT5 ... big gap ... GPT4.1 (beast mode) > GPT 4.1

LLMs are following the same progression these days as GPUs or CPUs: big jumps at first, then things slow down; you get more power efficiency but only marginal improvements.

Where we will see benefits is in specialized LLMs - for instance, Anthropic doing a good job of creating a programmer-focused LLM. But even those gains are starting to be challenged by Chinese (open source) models, step by step.

GPT-5 simply follows a trend. And within a few months, Anthropic will release something that's probably not much of an improvement over 4.0 but cheaper, and probably better at tool usage. And then comes GPT-5.1, 6 months later, and ...

In my opinion, for a company with the funding that OpenAI has, GPT-5.0 needed to beat the competition with much more impact.


I'm not even considering the coding use case. It's been fine in Cursor. I care about the data extraction and basic instruction following in my application - coding ability doesn't come into play.

For example, I want the model to be able to take a basic rule and identify what subset of given text fits the rule (e.g. find and extract all last names). 4o and 4.1 were decent, but still left a lot to be desired. o4-mini was pretty good at non-ambiguous cases. Getting a model that runs cheaper and is better at following instructions makes my product better and more profitable with a couple lines of code changed.

It's not exactly revolutionary, but it hits a great sweet spot for a lot of business use cases.
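
As a sketch of the kind of change being described (using the openai npm client; the model name, prompt, and JSON-array convention are my assumptions, not the commenter's actual code):

    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // Mirrors the "find and extract all last names" example above.
    async function extractLastNames(text: string): Promise<string[]> {
      const response = await client.chat.completions.create({
        model: "gpt-5", // swapping this string is most of the upgrade
        messages: [
          {
            role: "system",
            content:
              "Extract every last name from the user's text. " +
              "Reply with a JSON array of strings and nothing else.",
          },
          { role: "user", content: text },
        ],
      });
      return JSON.parse(response.choices[0].message.content ?? "[]");
    }

If the new model follows the instruction more reliably, the product gets better with essentially that one-line model swap.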


Super smart move. I hadn't heard of you folks before, but I'm interested in your product - open source and repairability are high on my list for home monitors. I'm lying in bed awake right now due to an air quality issue, so it's top of mind.

The only thing you're missing for me is radon detection. I just bought a house and tests came in below remediation levels, but the report showed a lot of spikes and variance. Do you have any plans for a model with radon detection in the future?

