Hacker News

Thank you for this, very much appreciate the thoughtful response.

The piece captures some of the anxieties within OpenAI right now about its competitive position. This obviously ebbs and flows, but of late there has been much focus on Anthropic's relative position. We do, of course, mention the allegations of "circular deals" and concerns about partners taking on debt.



Thank you. Yes, I saw that. The company's always been surrounded by endless talk about insane hype, speculative bubbles, and financial engineering. I wasn't asking so much about that.

I was asking more about your informed view on how OpenAI's technology, products, and roadmap are perceived, particularly by customers and partners, in comparison to those of competitors.

If you have an opinion about that, everyone here would love to hear about it.


UPDATE: Well-regarded people on HN are saying OpenAI's most recent GPT-5x codex model is better than Claude 5x for certain coding tasks:

https://news.ycombinator.com/item?id=47707494


at this point even google's ai search results are better than gpt - obv. this is not for full programs, but if you know what you're doing and just want a snippet, that's all you need.


Wild how different people's experiences can be. Both Google's models and Anthropic's hallucinate a lot for me, even on the expensive plans and with web searches enabled, and none of them come close to the accuracy and hallucination-free responses of ChatGPT Pro, which to me is still SOTA and has been since it was made available. But people keep reporting the opposite experience, and I just can't make sense of it.


Kagi (assistant.kagi.com) with Kimi K2.5 (their current default) has worked great for me in scenarios where the search result data is more important than the model.

I.e. what I used to use Google for and when I don't want an AI to overly summarize / editorialize result data.


oh that's probably because i'm a cheapskate and just use the free garbo models. i'm sure the pro version is quite good.


My guess is that the answer to your question (a fantastic question) is that nobody knows. I remember having the same thoughts when Covid was first "arriving," if you will: we wanted people in the know to throw us a nugget of information, and they just didn't know.

As it turns out, and what I’m kind of going with for this LLM shit, is that it’ll play out exactly how you think it will. The companies are all too big to fail, with billionaire backers who would rather commit fraud than lose money.


How would fraud help here? Don't they just need scale of lots of customers paying a little bit? How do you fraud your way into that?


they don't need customers when the customers are each other's companies, for example the deals OpenAI, Nvidia, and Oracle made


That's not fraud, and it's not sustainable. They aren't going to just keep doing that. It only makes sense if an AI company wants to pay for GPUs with stock, and - more importantly - the GPU company agrees to sell in exchange for stock.


s/fraud/corrupt, illegal $something.

If you're picking on my vocabulary, that's fair. Fraud wasn't the point, I think you're smart enough to realize that.


I appreciate the implication that either you're right or I'm stupid, but maybe you should write the comment you meant to write.

Trading shares for GPUs is not corrupt either.


Ronan Farrow's expertise is investigations into elite amorality, not evaluating technical products. Why are you asking this question?


I didn't ask him to evaluate them. I asked him how customers and partners perceive them.

He's had so many conversations that he likely has a sense of how perceptions of the company and its offerings have changed.

I'm curious.


Much of the article, and the general palace intrigue, is predicated on the idea that OpenAI has a singularly revolutionary product. If it later turns out to be a commodity, or OpenAI is simply outcompeted regardless, then the idea that Sam Altman's personal shortcomings are something to stress about would seem quaint. Just another hubristic tech billionaire acting in bad faith doesn't command attention the same way as someone "controlling your future."


If you were in charge of deciding what should be done with Sam Altman, what would you choose?


I mean, it's a fair question, though it does make one wonder how extreme the answers could be, so I can see why you're being downvoted.

The problem is that sometimes, on paper, everything people like Sam Altman do is legal, despite it harming so many. We've literally had a major RAM producer pull out of the consumer RAM market. I feel like Sam Altman should be investigated and heavily scrutinized. He is kind of the biggest bubble within the AI bubble, and we're letting him burrow ever deeper into it. These circular deals have seemingly slowed for now, but it might only get worse.


Totally. Lying about others can be so harmful. But lying to hostiles in order to protect? Acceptable.

I guess my question was more, if the article author was the judge of fate or morality, what should happen?

As to AI and Sam, I think it's too early to tell what the effects will be. So we should withhold judgement, build good things ourselves, and see what unfolds.



