
Well, off the top of my head:

- Banning OpenClaw users (within their rights, of course, but bad optics)

- Banning 3rd party harnesses in general (ditto)

(claude -p still works on the sub but I get the feeling like if I actually use it, I'll get my Anthropic acct. nuked. Would be great to get some clarity on this. If I invoke it from my Telegram bot, is that an unauthorized 3rd party harness?)

- Lowering reasoning effort (and then showing up here saying "we'll try to make sure the most valuable customers get the non-gimped experience" (paraphrasing slightly xD))

- Massively reduced usage (apparently a bug?). The other day I got 21x more usage spend on the same task with Claude vs. Codex.

- Noticed a very sharp drop in response length in the Claude app. Asked Claude about it and it mentioned several things in the system prompt related to reduced reasoning effort, keeping responses as brief as possible, etc.

It's all circumstantial but everything points towards "desperately trying to cut costs".

I love Claude and I won't be switching any time soon (though with the usage limits I'm increasingly using Codex for coding), but it's getting hard to recommend it to friends lately. I told a friend "it was the best option, until about two weeks ago..." Now it's up in the air.



> It's all circumstantial but everything points towards "desperately trying to cut costs".

I have been wondering if it's more geared at reducing resource usage, given that at the moment there's a known constraint on AI datacenter expansion capability. Perhaps they are struggling to meet demand?


It’s more that Anthropic knows that the models themselves are non-sticky, and the real moat is in the ecosystem around it.

It only makes sense for them to get users to use their ecosystem, rather than other tools.


See: Claude Cowork, trying to bring an entirely new group of people into their ecosystem.


And massive VM drive growth


> Perhaps Anthropic is struggling to meet demand?

Yes, definitely, they’re gracefully failing to meet demand. They could also deny new customers, but it would probably be bad for business.


I once decided to deny new customers in order to be able to service current demand at the quality we wanted. It backfired and made people want our product even more. Our phones were blowing up. That approach can have unintended consequences!


You unintentionally used a common sales tactic; by decreasing supply you increase demand.


Another knob you could have turned is: raise prices. Did you try this?


Anthropic is already doing this.

Signup prices seem higher now than three months ago.

This is actually the least frustrating method because people who can't afford to pay are not as angry as people who paid and aren't getting served (like when sign-in emails don't arrive for hours or days), or people who have paid for a long time to suddenly see quality decrease.

But it might not be best for business: Having more users than you can handle might suck, but if you're popular enough, people are still gonna put up with it.


Bad for business and probably unwise for the type of product people will pop their head in to check on, then stop paying and return much later to see whether it's still not much more than a parlor trick for them.


For my part, I've tried to help reduce their demand by cancelling my subscription.


I wish they would just rip the bandaid to stop everybody's entitled whining.

"We're sorry, what we were able to give you for $100/mo before now needs to be $200/mo (or more). We miscalculated/we were too generous/gave too much away for too little. It's a new technology, we are seeing a ton of demand, we are trying to run a business, hope you understand. If you don't want it, don't pay for it."


This is my take too, though I'm not prepared for a max400 reality to replace the max200... I hate all of the whingeing. Piggies at the buffet line seem to be the loudest on this subject.


I would understand the move, but boy would it play right into the "AI is only here to make the rich even richer" feeling wouldn't it?


If I strain really hard, I can come up with a reason why it might play into such a narrative.

/s


> "We're sorry, what we were able to give you for $100/mo before now needs to be $200/mo (or more). We miscalculated/we were too generous/gave too much away for too little. It's a new technology, we are seeing a ton of demand, we are trying to run a business, hope you understand. If you don't want it, don't pay for it."

Anthropic's thing has always been that they are perceived as slightly ahead of the competition, if they 2X their pricing then the competition that used to be "slightly worse" suddenly becomes an absolute bargain and guts their user base.


It is one thing to pay 100 a month to make calendar apps for your linkedin and birds on bicycles to get invited to talks, paying 200 HOWEVER


If we didn’t have the birds on bicycles, how would we know the models are getting better?


Are we at the point where there are external constraints that cash can't solve?


Can't tell if you're being facetious, but yes: there's not enough cash in the world to double energy/silicon fab capacity in a year. Infrastructure takes time, hardware is hard, and you have to be willing to bet that the demand will still be there 5 years from now to make an investment today.


Until one has the entire supply of world GPU production, cash can solve it by outbidding others.


TSMC would never allow all of their output to go to only one customer. You have an oversimplified view of this.


One could always make existing infrastructure more efficient. Nothing better than post-mature optimization.


Just put everyone on pay per use with the API and rip the band aid off.


Even the pay-per-use is heavily VC-subsidized at current prices.


All indications are that inference for API use is margin-positive for OpenAI and Anthropic; the subscriptions are not.

It will basically cut the hobbyist out and entrench large corporations that can pay the real costs.

If that happened and I was working for myself, I would just buy the beefiest computer I could finance and do everything locally.


Honestly, I wish they couldn’t subsidize with VC cash and such and offer below cost to begin with. Like I wish it were illegal. Basically this allows things like Uber, more or less putting taxis out of business and then being worse than what they replaced.

I'd like to see a lot more than entitled whining. I would like to see the fist of regulation slammed down on these tech shenanigans, where they know they'll never be able to match the prices they're starting with.


I wish they would too. I'd respect them more for the transparency. I think everyone's enshittification sensors have rightly been dialed up over the years, so without explanations for the regressions it just feels like another example.


Claude -p is allowed. They're not going to give you a feature then ban you for using it.

What they changed is that it now uses extra usage, which is charged at API rates.


"claude -p" does not charge API rates by itself. I just ran "claude -p 'write hello world to foo.txt'", and it didn't.

What they changed is that if you have OpenClaw run 'claude -p' for you, that gets your account banned or charged API rates, and if they think your usage of 'claude -p' is maybe OpenClaw, even if it's not, you get charged API rates or banned.

It seems so silly to me. They built a feature with one billing rate, and the feature is a bash command. If you have a bad program run the bash command, you get billed at a different rate, if you have a good script you wrote yourself run it, you're fine, but they have literally no legitimate way to tell the difference since either way it's just a command being run.

The justification going around is that OpenClaw usage is so heavy that it impacts the service for other people, but like OpenClaw was just using the "claude code max" plan, so if they can't handle the usage the plan promises, they should be changing the plan.

If they had instead said "Your claude code max plan, which has XX quota, will get charged API rates if you consistently use 50% of your quota. The quota is actually a lie, it's just the amount you can burst up to once or twice a week, but definitely not every day" and just banned everyone that used claude code a lot, I wouldn't be complaining as much, that'd be much more consistent.


It only switches to charging API rates if some part of your prompt triggers their magic string detector. Lots of examples of that are floating around where swapping "is" for "are" or whatever will magically allow the request against your subscription plan again.
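To make the claim concrete, here's a toy sketch of what a brittle substring-based detector could look like. This is entirely hypothetical (the marker list, function name, and behavior are invented; it is not Anthropic's actual classifier), but it shows how a one-word prompt edit could flip the billing decision:

```python
# Hypothetical sketch of a brittle substring-based detector (invented for
# illustration; not Anthropic's actual classifier or marker list).
BLOCKED_MARKERS = ["openclaw", "is clawd here"]

def billing_tier(system_prompt: str) -> str:
    """Decide which pool a request is billed against."""
    text = system_prompt.lower()
    if any(marker in text for marker in BLOCKED_MARKERS):
        return "extra_usage"   # billed like API / extra credits
    return "plan_limits"       # counts against the subscription

print(billing_tier("A personal assistant running inside OpenClaw."))  # extra_usage
print(billing_tier("Is clawd here?"))                                 # extra_usage
print(billing_tier("Are clawd here?"))                                # plan_limits
```

A real classifier is presumably more sophisticated, but any exact-substring component in it would show exactly this kind of fragility.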


> (claude -p still works on the sub but I get the feeling like if I actually use it, I'll get my Anthropic acct. nuked. Would be great to get some clarity on this. If I invoke it from my Telegram bot, is that an unauthorized 3rd party harness?)

How often? Realistically, if you invoke it occasionally, for what's clearly an amount that's "reasonable personal use", then no you don't get nuked.


It’s the same problem people have with Google. If they ban you for some AI hallucinated reason you have no recourse other than going viral on Hacker News.


I haven't seen a single case of that happening with Anthropic yet. Every time someone has gotten banned it's because they either used third party harnesses which went to great lengths to impersonate claude code (obvious evasion), or because they set things up so it maxxed out their usage 24/7.

I'll change my mind when I see otherwise.

And this isn't being positive about Anthropic support or their treatment of users, as I too have seen lots of people here getting billed by them for stuff they never paid for, blatant fraud. That's even worse than Google. I'm only talking about getting banned for usage.


I plugged this question into Claude and told it to limit the list to 10:

1. Cancer patient banned mid-payment: https://news.ycombinator.com/item?id=46675740

2. Hobbyist coder, VPN trigger, forms into void for 10+ months: https://news.ycombinator.com/item?id=47286867

3. "Reinstated" but still locked out, two systems out of sync: https://news.ycombinator.com/item?id=46007408

4. Banned for testing vision API: https://news.ycombinator.com/item?id=39988137

5. Banned on first ever prompt ("What do you know about Hacker News?"): https://news.ycombinator.com/item?id=39698788

6. Mass banning wave, some banned before first use: https://news.ycombinator.com/item?id=39672765

7. Entire company banned without warning, thousands of users stranded: https://news.ycombinator.com/item?id=42210199

8. Forced new account (no email change support) → immediately banned: https://news.ycombinator.com/item?id=46339741

9. Banned for scaffolding a Claude.md file, support email never arrives: https://news.ycombinator.com/item?id=46723384

10. $81 billing overcharge, human promised, month of silence: https://news.ycombinator.com/item?id=47693679


Would've been nice if you'd read them, as not a single one of them even mentions the Agent SDK or claude -p usage, the topic of this thread.


I did read them but I interpreted the topic of this thread to be Anthropic's vague approach to compliance enforcement not specifically how claude -p is used and interpreted by Anthropic.


Huh I've noticed that! Opus hallucinates answers and provides contradictory reasoning to me much more regularly during the past couple of weeks than it did in February.

For example, after a question about a script that used taskset and CUDA_VISIBLE_DEVICES to handle NUMA domains on a parallel GLUE evaluation run:

>Round 2 tasks reuse GPU slots that Round 1 may still be occupying. Cola, SST-2, and QQP are small datasets and will likely finish before MNLI, but QNLI is sizable. If QNLI (GPU 3, slot 3) is still running when Round 2 launches, that's fine — they're on different slots. But more importantly, Round 2 uses slots 5/6/7 (GPUs 7/0/1), while Round 1 uses slots 0/1/2/3/4 (GPUs 4/5/2/3/6). So there's no GPU collision — you're good here actually. Disregard, this is clean.


They also screwed up the API token detection and blocked a bunch of 1st-party tool users for ~24h.

Support consisted of AI bots saying you did something stupid, you did something wrong, you were abusing the system, followed by (only when I asked for it explicitly) claiming to file a ticket with a human who will contact you later (and it either didn't happen or their ticket system is /dev/null).

(By the way this is the 2nd time I've been "please hold" gaslit by support LLMs this exact same way, the other being with Square)


claude -p not working would be an instant downgrade from Max to Pro and would further drive my use of Codex. I use both, but overall I've noticed I reach for Claude less than Codex lately because Claude keeps getting slower and slower (I have not noticed a drop-off in quality, but I use it less and less, so maybe I'm not in a good position to notice).

Generally I find codex and claude make a good team. I'm not a heavy user, but I am currently Claude Max 5x and ChatGPT Plus. Now that OpenAI has a $100 offering and I am finding myself using Claude less, I am considering switching to Claude Pro and ChatGPT Pro x5. The work hours restriction on Claude Max x5 really pisses me off.

I am not a heavy user. Historically I only break over 50% weekly one week a month and average about 30-40% of Max x5 over the entire month. I went Max because of the weekly limits and to access the better models and because I felt I was getting value. I need an occasional burst of usage, not 24/7 slow compute. But even for pay-as-you-go burst usage Anthropic's API prices are insane vs Max.

I have yet to ever hit a limit on codex so it's not on my mind. And lately it seems like Claude is likely to be having a service interruption anyway. A big part of subscribing to Claude Max was to get away from how the usage limits on Pro were causing me to architect my life around 5hr windows. And now Anthropic has brought that all back with this don't use it before 2pm bullshit. I want things ready to go when the muses strike. I'm honestly questioning whether Anthropic wants anyone who isn't employed as a software engineer to use their kit.

Anyway for the last month or so codex "just works" and Claude has been an invitation for annoyances. There was a time when codex was quite a bit behind claude-code. They have been roughly equal (different strength and weaknesses) since at least February (for me).


I might consider switching to codex from claude pro 20x but I need the post tool use, pre file write and post user message hooks. Waiting on codex to deliver.

- pre file write -> block editing code files without a task and plan of work

- post tool use -> show next open checkbox in the task to the agent, like an instruction pointer

- post user message -> log all user messages for periodic review of intent alignment

These 3 hooks + plain md files make my claude harness.
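For what it's worth, the three hooks above can be sketched as a tiny event dispatcher. This is a hypothetical sketch of the harness being described, not Claude Code's actual hook API; the hook names and payload shapes are invented:

```python
# Hypothetical sketch of the three-hook harness described above; hook names
# and payload shapes are invented, not Claude Code's actual hook API.
from typing import Callable

HOOKS: dict[str, list[Callable[[dict], dict]]] = {
    "pre_file_write": [],
    "post_tool_use": [],
    "post_user_message": [],
}

def hook(event: str):
    """Register a function to run when `event` fires."""
    def register(fn):
        HOOKS[event].append(fn)
        return fn
    return register

def fire(event: str, payload: dict) -> dict:
    """Run every registered hook for `event`, threading the payload through."""
    for fn in HOOKS[event]:
        payload = fn(payload)
    return payload

@hook("pre_file_write")
def require_plan(payload):
    # Block edits to code files unless a task/plan of work is active.
    if not payload.get("active_plan"):
        payload["blocked"] = True
    return payload

@hook("post_tool_use")
def next_checkbox(payload):
    # Surface the next open "- [ ]" item from the task file, like an
    # instruction pointer for the agent.
    open_items = [line for line in payload.get("task_md", "").splitlines()
                  if line.strip().startswith("- [ ]")]
    payload["next_step"] = open_items[0] if open_items else None
    return payload

print(fire("pre_file_write", {"path": "main.py", "active_plan": None})["blocked"])  # True
```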


I use codex through the pi agent. It’s wonderful and easy to create whatever extension or hook you want!

I’d use it with Claude too if they hadn’t banned it…


Why couldn't you use the Claude Code harness with Codex? The requests can be proxied to OpenAI.


I am cooking up an abstraction that enables these hooks on codex. Would love to have you kick the tires.


Try pi


For what it's worth: I invoked claude -p from a script, and my account was nuked immediately. DM'd Thariq from Anthropic, who admitted it was a weird classifier and said he would look into it, but he never followed up. It's been 13 days since I was banned now.

Very sad considering I got my whole company on Claude Code, only for them to just ban me like this, with no customer support response.


Anthropic has become shady as hell in less than a few weeks. The DoD story and their overall popularity among developers got them a huge leap over OAI, but I certainly won't renew my subscription with them. The Claude SDK feels like a constant fight against its own limitations compared to Codex and other harnesses.


Why were third party harnesses banned? Surely they'd want sticking power over the ecosystem.


There’s the argument that Anthropic has built Claude Code to use the models efficiently, which the subscription pricing is based on.

Maybe there’s some truth to that, but then why haven’t OpenAI made the same move? I believe the main reason is platform control. Anthropic can’t survive as a pipeline for tokens, they need to build and control a platform, which means aggressively locking out everybody else building a platform.


Alternatively, products like OpenClaw have an outsized impact on Anthropic's infrastructure for essentially no benefit to them, especially when you're taking advantage of the $200 plan.

OpenAI has never shied away from burning mountains of cash to try and capture a little more market share. They paid a billion dollars for a vibe-coded mess just for the opportunity to associate themselves with the hype.


> Taking advantage of the $200 plan.

No, I'm paying $200 a month for a premium product that I expect premium service for. It's the single most expensive IT expense I have. Taking advantage my foot.


Can you imagine paying the actual cost of it, or a subscription cost that at least ballpark matched it? I don't think I have a single friend or acquaintance who realistically would.


You are simply a bit too entitled. It's not a premium product, and honestly it's not that expensive in my opinion either (though that is going to depend on your location).

You are more than able to pay for API rates.


You may want to learn the difference between someone being able to pay API rates and someone willing to pay API rates. I'm sure many people on HN are able to pay API rates, and almost all of them aren't willing to. The providers know this, which is why subscriptions exist. The API is almost solely used by companies, as almost no private person would be willing to pay that.


"You may want to learn": such a choice way to introduce a position that is really not much of one.

If you are going to come and complain about a $200 subscription that gives you $400 worth of API tokens, there is only so much room to complain. Only so many lemons can be squeezed. Hope that was helpful for you.


A normal person pays $0-10 for an AI plan, maybe double that for a business.

$200 is premium.


It is not a premium service; it is simply buying you more tokens. That $200 gives you at least $400 in API-cost tokens.

Don't confuse price with "premium service". It was not that long ago that folks would spend $100-200 on their cable service bundle. You are buying a subsidized product when using the plan, and the more you spend the more tokens you get; it has nothing to do with being a premium service.


This is a messaging issue on their part, which I think is partially intentional.

It’s not unreasonable for people to expect the most expensive subscription plan to be “premium”. That’s how it works everywhere else. They typically have better margins on the premium plans, and the monthly payment gives them reliable cash flow at that higher margin.

You’re right that that’s not true at Anthropic (or really most AI providers). You’re not even really buying tokens because you get billed whether you use it or not, the tokens don’t carry over like buying API tokens, and they get to dictate what an acceptable way to use those tokens is. They are cheaper though, assuming you actually use them. Which Anthropic et al would really prefer you didn’t.


> It is not a premium service, it simply is buying you more tokens.

The cheap plans are usually semi-unlimited the same way but not as powerful. This isn't simply a matter of buying more tokens.

> It was not that long ago that folks would be spending $100-200 on their cable service bundle.

Compared to OTA that's premium, but more relevantly if most cable buyers are getting a hypothetical $10 bundle then the $100 one is a premium bundle.


Sorry, still not sure what you're going on about. The majority of LLM plans are simply a token purchase. The $200 account buys you nothing but tokens. It's not a premium service, it's simply more tokens. This is true for most of the companies out there.

The original comment was that they are paying for a premium service. No, they are paying for more tokens. You lot keep arguing over some small hill.


The lower tier openai and google plans don't have access to the same models. Where are you seeing popular plans that are simply token purchases?


I guess if you want to go that deep, sure, they sometimes offer early access or access to new agents/models, but ultimately it's a function of tokens. The selling point for most/all providers is x times the usage. You are upgrading for the token access.

Claude was the topic at hand, and higher tiers buy you more tokens. I know some, like Gemini, bundle a ton of junk alongside the tokens, but you really are still buying yourself more tokens. There is nothing premium in a $200 Claude account. You are buying more tokens; $100 is the same as $200 except token count. Hope that helps. ;)


> $100 is the same as $200 except token count

But I was making an argument about the $10 plans, not the $100 plans.

Claude doesn't even go that low. Except the free plan which has a very reduced feature list.

Claude's $20 and $100 are pretty similar except tokens, that part is true. So they're a bit higher priced and more of the "it's just tokens" model. But the market as a whole is mostly selling a limited feature set down at lower price points. On average, getting up to the point where you have full access and are paying per-token is itself a premium jump.


You are standing on top of an ant hill and I still don’t fully understand your position. The original post was about the premium service Anthropic plans. There is no such thing, you are simply paying for more tokens. Hope that helps.


Any $200/month AI plan is premium. Hope that helps.


I know why I typically don’t respond to your posts. So much said and I am still not sure your point. You have ignored the original point and gone off on a tangent.

It is not a premium service that deserves special care which was what the original commenter stated. It is a $200 account that buys you $400+ on tokens.

Hope that helps recenter this weird path we are following. :)


> gone off on a tangent

What? What I just said was my one and only point from the very beginning. The price is so much higher than the median that that makes it premium and deserving of some special care.

I understand your point of view here, and it's fine if you disagree with mine but it's weird if you don't at least understand my point by now. You saying my last comment is a tangent suggests you don't understand me. But it's a simple point and I'm not sure how to make it clearer.

Does that help recenter?


It's not a premium product. It's just expensive.


> They paid a billion dollars for a vibe coded mess just for the opportunity to associate themselves with the hype.

Lol no they didn't. It wasn't even an acquihire. They just hired Peter.

Maybe they are paying him incredibly well, but not a billion dollars well.


I think it's a training data thing. They can only gather valid training data from real human interactions, so they don't want to subsidize tokens for purely automated interactions.


> Why were third party harnesses banned? Surely they'd want sticking power over the ecosystem.

Third-party harnesses are the exact opposite of stickiness!

Ditching Claude Code for a third party harness while using the Claude Code subscription means it's trivial to switch to a different model when you {run out of credits | find a cheaper token provider | find a better model}.


Note that the thing that's banned is using third party harnesses with their subscription based pricing.

If you're paying normal API prices they'll happily let you use whatever harness you want.


To be clear they weren’t banned from Claude usage, they were required to use the API and API rates rather than Claude Max tokens.

Claude Code uses a bunch of best practices to maximize cache hit rate. Third-party harnesses are hit or miss, so they often use a lot more tokens for the same task.


Nah, this doesn't explain it.

Most of the users of those third-party harnesses care just as much about hitting cache and getting more usage.


I'm watching a conference talk right now from 2 weeks ago: "I Hated Every Coding Agent So I Built My Own - Mario Zechner (Pi)", and in the middle he directly references this.

He demonstrates in the code that OpenCode aggressively trims context, by compacting on every turn, and pruning all tool calls from the context that occurred more than 40,000 tokens ago. Seems like it could be a good strategy to squeeze more out of the context window - but by editing the oldest context, it breaks the prompt cache for the entire conversation. There is effectively no caching happening at all.

https://youtu.be/Dli5slNaJu0
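The caching point can be illustrated with a toy model: prefix caches match on the longest unchanged leading run of the conversation, so an append-only history keeps the cache warm, while pruning an old tool call invalidates everything after it. A minimal sketch (not OpenCode's or Anthropic's actual implementation):

```python
# Toy illustration of prefix-based prompt caching: a cache hit covers the
# longest previously-seen leading run of messages, so editing or pruning
# anything near the start invalidates the cache for the whole tail.
def cached_prefix_len(prev_turn: list[str], this_turn: list[str]) -> int:
    """Count how many leading messages match the previous request."""
    n = 0
    for a, b in zip(prev_turn, this_turn):
        if a != b:
            break
        n += 1
    return n

history = ["sys", "user1", "tool1", "assistant1", "user2"]

# Append-only turn: everything already seen is a cache hit.
appended = history + ["assistant2"]
print(cached_prefix_len(history, appended))  # 5 of 6 messages cached

# Pruning an old tool call rewrites the prefix: the cache hit stops there.
pruned = ["sys", "user1", "assistant1", "user2", "assistant2"]
print(cached_prefix_len(history, pruned))    # only 2 messages cached
```

This is why compacting or pruning on every turn can mean effectively no caching at all, even though it saves context-window tokens.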


Sure. The question is whether they have the same level of expertise and prioritization that Anthropic does.


They are working with the same tools and knowledge as Anthropic, since caching practices are documented. And they have as much incentive as Anthropic does not to waste compute. Can we stop acting like the people who build harnesses, be it OpenCode or Mario Zechner's Pi, are dumbfucks who don't understand caching?


but claude -p is still Claude Code


Has anyone using that been banned?


Yep, that's the reason for the new Extra Credit feature in Claude Code. Some people were wiring up "claude -p" with OpenClaw, so now Anthropic detects if the system prompt contains the phrase OpenClaw, and bills from Extra Credit if that happens:

https://x.com/steipete/status/2040811558427648357

"Anthropic now blocks first-party harness use too

claude -p --append-system-prompt 'A personal assistant running inside OpenClaw.' 'is clawd here?'

→ 400 Third-party apps now draw from your extra usage, not your plan limits.

So yeah: bring your own coin "


https://xcancel.com/bcherny/status/2041035127430754686#m

> This is not intentional, likely an overactive abuse classifier. Looking, and working on clarifying the policy going forward.


One thing is lack of control of token efficiency on what’s already a subsidised product.

Another thing is branding: Their CLI might be the best right now, but tech debt says it won’t continue to be for very long.

By enforcing the CLI you enforce the brand value — you’re not just buying the engine.


Claude Code was the best harness from roughly around release to January this year. Ever since then, it's become more and more bloated with more and more stuff and seemingly no coherent plan or vision to it all other than "let's see what else that sounds cool we can cram in there."


What's taken over since then? Codex or something else?


Pi.dev


Maybe they should fix bugs like this then https://github.com/anthropics/claude-code/issues/17979#issue... ...


I want to differentiate 2 kinds of harnesses:

1. OpenClaw-like: using the LLM endpoint on subscription billing, with different prompts than Claude Code

2. Using the claude CLI with -p, in headless mode

The second runs through their code and prompts; it just calls claude in non-interactive mode for subtasks. I feel especially put off by restricting the second kind. I need it to run judge agents to review plans and code.


> (claude -p still works on the sub but I get the feeling like if I actually use it, I'll get my Anthropic acct. nuked. Would be great to get some clarity on this. If I invoke it from my Telegram bot, is that an unauthorized 3rd party harness?)

100% this, I’ve posted the same sentiment here on HN. I hate the chilling effect of the bans and the lack of clarity on what is and is not allowed.


In this case, they handled things pretty well. You can still use OpenClaw etc. with your regular Anthropic subscription; it will just count towards your extra credits/usage, which you can buy at a 30% discount compared to API pricing. And they gave everyone one month's value in credits.

I don't think they could have done much better, I'd say.


That does not address joshstrange's concerns.

There is very poor clarity about what is and isn't allowed with the Claude SDK/claude -p. Are we allowed to use it to automate stuff? What kind of tasks is it permitted to be used for? What if you call your script 'OrangeClaw' and release that on GitHub? What if your script gets super popular, does it suddenly become against TOS?


This is exactly my point. At what point does it become a ToS violation? Right now it's a huge grey area and the idea of getting my account banned because I crossed an invisible line with zero recourse other than to switch providers is... frustrating.


It's pretty easy to read between the lines, tbh. Personal, non-automated use is fine. Using it as a means to automatically deplete your 5-hour limit 24/7 ("leftover usage") is not fine. They don't want to put it in the ToS because that's almost impossible: writing what I just said will still have people going "well, what's automated, where's the exact line!" when it's all pretty clear what the intended use case here is. The Anthropic peeps have said about as much.

I get that the traditional dev is allergic to the concept of reading between the lines and demands everything to be spelled out explicitly, but maybe you should just see it as something to learn because it's an incredibly useful life skill.


Ok, let's say I'm not using it to deplete leftover usage, the task just happens to run down the 5 hour window usage.

Are you willing to bet your account over whether you've read between the lines correctly? Anthropic aren't going to listen to appeals.


> the task just happens to run down the 5 hour window usage.

In a single prompt? From zero usage? That doesn't "just happen".


When you're using the SDK, yes it can. Example: I used the Python SDK to translate a bunch of source code recently. I spawned a subagent for each module that needed translating and left it to run for a few hours with a parallelism limit of 5. It blasted through the 5 hour usage and dug into extra usage credits.

I have zero assurances that the above can't result in a ban. The usage pattern is not distinct from OpenClaw.
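For reference, the bounded-parallelism pattern described above (one subagent per module, at most 5 in flight) is roughly this shape. The sketch below uses a placeholder coroutine instead of the actual Claude SDK call:

```python
# Sketch of the bounded-parallelism pattern described above: one worker per
# module, at most 5 running at once. `translate_module` is a placeholder
# for the actual Claude SDK call, which is not reproduced here.
import asyncio

async def translate_module(name: str, sem: asyncio.Semaphore) -> str:
    async with sem:                # at most 5 subagents in flight
        await asyncio.sleep(0)     # stand-in for the real translation call
        return f"{name}: translated"

async def main() -> list[str]:
    sem = asyncio.Semaphore(5)
    modules = [f"module_{i}" for i in range(12)]
    return await asyncio.gather(*(translate_module(m, sem) for m in modules))

results = asyncio.run(main())
print(len(results))  # 12
```

From the provider's side, traffic like this is indistinguishable from any other steady stream of SDK calls, which is the point: the usage pattern alone can't tell a legitimate batch job from an "abusive" harness.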


As I said, it doesn't just happen, you explicitly had to set it up so it could happen.


I'm confused about this comment.

The GP has described a task which feels like a task very well within intended usage of CC, but can easily eat up the usage limit.

What should we read between the lines about this scenario?

Is it a bannable offense?


Just in case it wasn't clear, what they described doesn't need extra tooling. You can write this in your CLI and it will easily cap a Max 20x plan in an hour: "we are converting this entire codebase from TS to C#. Following the guidelines I've written in MIGRATION.md, convert each file individually. Use up to 32 parallel subagents. Track your work for each file in a PROGRESS.md file, which you will update for each file starting and completing. Using an agent team, as a secondary step, add a verification layer where you verify each file individually for accurate migration following the instructions in VERIFICATION.md"

Yea there are other ways to do this, you can set up a separate harness sure to make it more efficient, but just the above will also work, it's just text you paste into your CC terminal, and it will absolutely cap the largest subscription plan available no problem.


That "non-automated" part is where I feel like there is a lack of clarity. They even have some stuff in to allow for scheduling in Claude Code. Seems similar to a cron but "non-automated" would rule out using a cron (right?). I'd love to feel comfortable setting up daily/hourly tasks for Claude Code but that feels iffy. Like I said, I don't think the line is clear.


The lack of clarity doesn't matter, because they obviously can't tell whether you ran claude -p a few times today with your usual prompts or whether your cron job did. It's impossible for them to tell reliably.

They can tell if your cron is running it every 10 minutes, 24/7, because basic biology rules out you doing that yourself for more than a day or so.


Wait, this is news to me. I thought 3rd party use of the sub was unequivocally prohibited?

If I'm understanding you correctly: they changed that policy, you can now use 3rd party software unofficially with the undocumented Claude Code endpoint, and their servers auto-detect this and charge you extra for it?

EDIT: Yeah, something like that?

> Starting April 4 at 12pm PT / 8pm BST, you’ll no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw. Instead, they’ll require extra usage.

https://news.ycombinator.com/item?id=47633568

This seems to mean that unauthorized usage of the sub endpoint is tolerated now (and billed as though it were the regular API). And possibly affects claude -p, though I don't know yet.


> If I'm understanding you correctly: they changed that policy, you can now use 3rd party software unofficially with the undocumented Claude Code endpoint, and their servers auto-detect this and charge you extra for it?

That’s correct. It’s more like a convenience technicality: you can use your sub account, but you’re paying extra. So it doesn’t really count towards your subscription in any way.

Subscribers can buy extra credits at a 30% discount, though, so it's a decent amount cheaper than the actual API, but still prohibitively expensive.


One month’s value in credits does not equal the value of one month’s subscription. They could have done better.


Perhaps Anthropic should put a freeze on new signups until they can increase capacity. This is the best kind of problem for a business, I'm cheering for them.


If there is one thing that is crystal clear, it's that LLM providers will always take your money, no matter how bad the service is.


This requires ethics.


I think we are about a month away from a class-action lawsuit; at their revenue, they're a juicy target. And god knows they've got an entirely self-inflicted, unholy combination going on: marketing and sales that border on fraud (X times the usage of plan Y, which has Z times that of the free tier, which is measured in unknowable "magic tokens"), and then the actual fraud of reducing usage in fifteen different non-obvious, non-public ways.


I will say I have noticed none of these things in my enterprise account. Is this a known targeting of non-enterprise clients only?


>> apparently a bug?

It's a bug only if they get a harsh public response; otherwise it becomes a feature.


A bug for one side can be a feature for another


I don't know why people are surprised. You just need to see what they say about China, open source, and their fake safety blogs to understand they're not a company that devs should give their code to for free.


> claude -p still works on the sub but I get the feeling like if I actually use it, I'll get my Anthropic acct. nuked

I've used it with a sub a lot. Concurrency of 40 writing descriptions of thousands of images, running for hours on sonnet.

I have a lot of complaints. I've cancelled my $200 subscription and when it runs out in a few days I'll have to find something else.

But claude -p is fine.

... Or it was 2 weeks ago. Who knows if they've silently throttled it by now?


The other day I read that letting another agent invoke claude -p was considered a violation (i.e. letting OpenClaw delegate to Claude Code).

Not sure how that's enforced though. I was in OpenClaw discord a while ago and enforcement seemed a bit random.

I'll try to find the source, I might have gotten the details mixed up.


It’s not a “violation” but they said it would be charged as extra usage.


This is a funny cat-and-mouse game. They offer a built-in loop command.

Just run that under tmux.

Soon, if they drop -p, people will just vibe-code a way to type inside it remotely in 5 minutes, similar to their own built-in remote access tool. Seems like a losing game on Anthropic's side.


Most of those issues are coming from a very small minority. A lot of the time it's good for businesses to focus on the customers driving the highest margin, which is most likely not users like yourself.

1) Nobody should expect to use OpenClaw without API usage.

2) We have known for a long time that the plans are subsidized. It was not as big of a deal before, but now that demand has continued to explode and tools like OpenClaw were generating a lot of usage from a small minority of customers, prices change.

Everything for me points more towards: they've made a service people really want to use, and they're trying to balance a supply shortage (compute) with pricing. Nothing is stopping folks like yourself from simply paying the API rates. It's the simple, no-hassle way around any issue you're having: pay the API cost and you will have no limitations!



