GitHub Copilot is moving to usage-based billing
github.blog | 671 points by frizlab 18 hours ago
Something is hilariously off here: why should I pay $10 that I'm forced to use up by the end of the month, when I could pay $10 directly to a provider and have it last as long as I want?
Their "API pricing" is exactly the same as that of providers: https://docs.github.com/en/copilot/reference/copilot-billing...
I'm thinking the same. Downgrade to Pro and use OpenRouter (same price) for overage.
Seems a massive loss for Microsoft. Presumably there's a further rugpull to come.
> Presumably there's a further rugpull to come.
How would that be? They are already charging as much as the underlying providers. They can hardly expect to have any customers if they are charging more.
Enterprise sales will be the answer. Microsoft will have some story that convinces an exec eight levels up the org chart from the normal users that this is an essential product they need to overpay for. Given their existing relationships and immense sales team, they'll probably have success.
That story is data governance. Corporations already have a data agreement with MS, storing all their data there. GitHub Copilot is covered by that, while an individual agreement with e.g. Anthropic needs lawyers involved.
It’s precisely this, and, to be fair, it’s a rational approach given a Data Security Exhibit starts at 6 weeks and can hit 6 months to complete. That being said, I work with regulated data, so YMMV.
Plus if you're in government you have procurement to deal with. You already have an Enterprise deal with Microsoft so you don't have to go through any of that rigmarole.
Microsoft is simply the default answer for most large corporations. Getting access to some Microsoft subscription is very easy, because of the existing framework agreements, Microsoft providing any and all compliance documents needed, and already being pre-cleared for corporate data, etc. Meanwhile, trying to use another provider (e.g. Anthropic) would be a one-year endeavor, minimum.
We also pay $300/month for Azure Desktop VMs.
We are paying for tens of thousands of those machines, although everyone knows they are stupidly expensive and incredibly slow.
They list the price 900% higher and give a 90% discount to enterprises who also use Teams, Outlook, Office, or even Windows if they're desperate. Then it becomes a deal so good that enterprises can't afford not to take it!
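A quick back-of-the-envelope sketch of the pricing trick that comment describes. Only the "900% higher" and "90% discount" figures come from the thread; the $30 base price here is a made-up number for illustration:

```python
# Illustrative sketch of the list-price / discount trick described above.
# The $30 "real" price is hypothetical; the 900% markup and 90% discount
# figures come from the comment.
real_price = 30.0                      # what the vendor actually wants per seat
list_price = real_price * (1 + 9.0)    # "900% higher" => 10x list price
discounted = list_price * (1 - 0.90)   # 90% enterprise discount
print(list_price)              # 300.0
print(round(discounted, 2))    # 30.0 -- right back at the real price
```

In other words, a 900% markup followed by a 90% discount is a round trip: the enterprise pays exactly the original price while the procurement deck shows huge "savings."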
I'm already on Pro. Why should I keep it?
E.g. what if you're on an annual plan? The 0x models will be gone, but there are okay 1x and 0.3x models left. I'm pretty curious what the early-May test invoicing will look like. The current setup of tools etc. is way too chatty and easily eats up 1M+ tokens per PRU. Not sure how much is cached.
The only reason to keep it is if you like their UX and auto-complete. Everything else is pay-per-use, and if you don't use all of it (good luck with the 5-hour and weekly caps), you have just paid more for the auto-complete.
The deal is really pretty much garbage now and I believe that is the intent.
I use auto-complete mostly, so I'm somewhat relieved. When I do need to use the agent, I don't think I will use all of the tokens.
$10 a month for auto-complete on a good UX is good value IMHO.
private repos
Can you elaborate please? I still clone my private repos and work on them using OpenRouter Credits and OpenCode (or Copilot itself: It supports OpenRouter BYOK). No?
I have to wonder if it's because of how many Enterprise customers they have who have standardized on Github Copilot and gotten it through the gauntlet of legal approvals etc.
This rings true to me as someone who's worked at a few large corps like this. A price hike does not change things when there is a mandate to use MS products over other vendors.
That is exactly where I am.
We're putting other providers through the gauntlet. An M4 Studio or two running the latest Qwen3 or whatever counts for state of the art in open models is also looking a little more viable all the time.
Bingo. Ghcp is the only allowed llm solution at our big well known semiconductor corp. It'll take years to get approval for anything else. We're stuck with it and will pay whatever price we have to.
I'm wondering if they're basically saying they're going to give $10/month free API credits to students and open source maintainers and so on... while otherwise getting out of the consumer portion of this space.
They're downsizing free GitHub Copilot Pro for open source maintainers. At the very least, it looks like small open source projects got their free Copilot Pro cut off.
I was kind of disappointed when that happened earlier this month, but not as much now after seeing this change. My primary use had been trying out some of the newer Anthropic and OpenAI models, which probably would have burned through $10 worth of credits rather quickly given their new pricing.
In my experience, I currently greatly prefer the VS Code Copilot extension experience over the Claude extension.
I think VSCode only supports copilot for "autocomplete" too
on top of that, you need GitHub Copilot for the PR reviewer functionality in GitHub
Huh, I find my Copilot plugin to be incredibly glitchy. My agents are always reporting that their shells are mangled, that commands are truncated, and all kinds of nonsense. Sometimes they spin up dev servers fine; other times it just hangs waiting for a terminal response. So far I have found relying on the CLIs from the model providers to be significantly more reliable.
I do like the integrations with the IDE however, they are convenient for rapidly reviewing changes. I just need their terminals to actually work!
I had this problem and it turns out it was my oh-my-posh command prompt customization. VS Code injects certain control characters into the output stream for agents to observe events and the theming runs after those mechanics are hooked up so it can interfere. Updating to the latest oh-my-posh fixed it for me.
Here's the oh-my-posh GH issue[0] in case your problem is similar but not solvable with a simple package update.
[0]: https://github.com/JanDeDobbeleer/oh-my-posh/issues/7029
I have the same problem with zsh + powerlevel10k
But surely the issue is on the VS Code side: it should do things in a way that works with people's shells as they are.
other agent harnesses don't have the same problems with my shell
You can use Copilot extension with OpenRouter (among others).
And yes, I need to find a solution for autocomplete. It used to be available in free tier of Copilot. Not sure anymore.
Enterprise gets pooled credits and will like having everything go through one place so I think it still works.
You can pool credits through OpenRouter (afaik; I'm only using a single-user account), but if you top up $10 per user per month, any unused credits will roll over.
Tbh I think it still works, but only because the new allowance will likely get used very quickly within a billing cycle. I'm expecting this change to increase our org's bill significantly, based on how many OpenRouter API credits I consume in a weekend using a single agent in a pairing style.
The pooling will only be useful if you have a bunch of infrequent/low usage users that you still want to have licenses.
Which is almost guaranteed to be the case for a large org, considering everyone will want auto complete and PR reviews, but on average most will not be making a ton of agent use
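A toy sketch of why pooling helps with skewed usage like the comments above describe. Every number here is invented for illustration; none comes from GitHub's actual plans:

```python
# Toy sketch of pooled vs per-seat credit allowances.
# All numbers are made up for illustration, not from GitHub's pricing.
allowance_per_user = 300          # hypothetical premium requests per seat
usage = [900, 50, 20, 10, 5]      # one heavy agent user, several light ones

pooled_total = allowance_per_user * len(usage)   # 1500 requests org-wide
print(sum(usage) <= pooled_total)                 # True: pool absorbs the heavy user
print(any(u > allowance_per_user for u in usage)) # True: per-seat caps would overflow
```

The pool absorbs one heavy agent user because the many light users (the autocomplete-and-PR-review crowd) leave most of their allowance unspent, whereas strict per-seat caps would cut the heavy user off.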
Is there a way to use the autocomplete feature with an api?
No, but autocomplete is not part of this billing change.
Is that autocomplete better than IntelliJ's own, plus their local-only LLM completion?
I uninstalled the Copilot plugin because it was eating memory, and its completions were about 60% good; the rest were bad.
After switching back to IntelliJ I see just positives.
The era of subsidised inference is truly ending. The new model multipliers (https://docs.github.com/en/copilot/reference/copilot-billing...) seem like a huge leap, though. From 1x to 6x for new-ish GPT and Sonnet models. 27x for Opus...
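The jump those multipliers imply is easy to see with a little arithmetic. The 1x/6x/27x multipliers are from the comment above; the base rate per premium request is an assumption for illustration only:

```python
# Sketch of how model multipliers scale per-request cost.
# The 1x/6x/27x multipliers are from the comment; the $0.04 base rate
# per 1x premium request is a hypothetical figure, not GitHub's actual rate.
BASE_RATE = 0.04  # assumed $ per 1x premium request
multipliers = {"older 1x model": 1, "new GPT/Sonnet": 6, "Opus": 27}
for model, mult in multipliers.items():
    print(f"{model}: ${BASE_RATE * mult:.2f} per request")
```

Under that assumed base rate, the same request costs 6x or 27x more just by switching the model dropdown, which is why a monthly allowance evaporates so fast on the frontier models.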
Seems like folks would be better off with OpenRouter instead.
Lots of us have noticed that usage limits for Claude have been nerfed in recent weeks/months.
If anything, these new multipliers are more transparent than anything OpenAI or Anthropic have communicated regarding actual costs and give us a more realistic understanding of what it's costing these providers.
The fact that we were able to get such a substantial amount of usage for $20/$100/$200 a month was never meant to last and to think otherwise was perhaps a bit naive.
This feels like a strategy from the ZIRP era of tech growth where companies burned investor capital and gave away their products and services for free (or subsidized them heavily) in order to prioritize user acquisition initially. Then once they'd gained enough traction and stickiness they'd then implement a monetization strategy to capitalize on said user base.
However, inference costs for good-enough models are likely to keep declining. We're probably hitting diminishing returns on model size and training: the new generations aren't quantum leaps anymore, and newer open source models like DeepSeek are likely to become good enough.
There's going to be a limit to how much they can raise prices, because someone can always build out a datacenter and fill it up with open source DeepSeek inference and undercut your prices by 10x while still making a very good ROI--and that's a business model right there. Right now I'm sure there's a lot of people who will protest that they couldn't do their jobs with lesser models, but as time goes on that will get less and less. Already right now the consumers who are using AI for writing presentations, cooking recipe generation and ELI5 answers for common things, aren't going to be missing much from a lesser model. That'll actually only start to get cheaper over time.
Also for business needs, as AI inference costs escalate there comes a point where businesses rediscover human intelligence again, and start hiring/training people to do more work to use lesser models--if that is more productive in the end than shelling out large amounts of cash for inference on the latest models. [Although given how much companies waste on AWS, there's a lot of tolerance for overspending in corporations...]
> because someone can always build out a datacenter and fill it up with open source DeepSeek inference and undercut your prices by 10x while still making a very good ROI-
Not sure how it all works out. Currently trillion-dollar companies can't make native apps for platforms. Everything is just JS/Electron because the economics don't work for them.
And here companies can build GW data centers running very expensive GPUs for 1/10th of current prices? Sounds a little fanciful to me.
The price you pay Anthropic must include the cost of training new and better models, which is incredibly expensive. If you use models someone else already spent money to develop, you don't need to pay that price.
I guess the new models will still be quantum leaps, but literally: "The smallest possible change in a system"
They've been like that for a while actually, I think at least since the big hype around ChatGPT 4.5 (or was it 5?) and that underwhelming, lukewarm, oversanitised presentation by Altman and his team.
Yups... Mythos is the smallest possible leap. Not a standard model generation advance, not even a version point advance. Just the smallest possible quanta of a change. We are absolutely hitting a plateau any day now. Any day. Any time. Any second now. Yup. Right now! Surely!
I mean, let's be realistic: all that we know about the "mythical" Mythos is the carefully curated stuff released by Anthropic's PR team. Is it really the huge leap they are making it out to be? I doubt it. In fact, I bet if it were indeed that powerful and dangerous, as they imply, they'd find a way to release it immediately, devastate OpenAI and DeepSeek, and secure a leading position in the market. Why is that not happening? I suspect because Dario is at it again, peddling his bullshit.
Yeah. AI progress is insanely fast if you compare it to anything else. Where else is a one year old technology already hopelessly outdated? 10 years ago is basically stone age.
I am continually tripped out by the fact when I was 16, I didn't have a 'smartphone' beyond a Windows Mobile 6 phone that had no internet on it.
Now, I have this high-resolution shiny object that can near instantaneously get any information I want along with _streaming HD video to it_ *anywhere*.
15 years ago even feels like a stone age. I can't fathom what it has to feel like for people in their 60s and 70s.
I'm not quite 60, but it's always interesting to me that I feel quite the opposite of this. When I was 16, I didn't have a computer, didn't have a phone, had never used the Internet, but when I think of how life has changed, it's frankly not much. I woke up this morning, scooped my cats' litter boxes, took out some trash, made myself breakfast, ate that, read some news while eating, then lifted weights in my garage, had some work meetings, wrote up some instructions per a customer request from Friday, and am about to go drive to the lake to go do a 9 mile longboard loop.
That's very close to a normal day in 1996. The biggest difference is I read the news on my phone instead of a physical newspaper. The news was not any more interesting or informative because of that. I guess I can also still do the loop reasonably well, but I'm a lot slower than I was in 1996 when I was a cross-country state champion.
My parents are closing in on 70, and I guess I can't speak for them, but I'm at least aware of the daily routines of their lives, too. Walk the dog, do housework, DIY building projects, visit kids and grandkids. Seems much the same, too, with the biggest difference being they're now teaching my sister's sons to play baseball rather than me. But one of her sons even looks exactly the same way I looked when I was 7! The more things change, the more they stay the same.
General agree... I still do the things (mid-50's) I used to do when I was a teenager with no computer, no phone.
But - now they are easier - I can read books on an e-ink screen and pretty much instantly find what I want to read next. I get my news on a phone. I used to watch TV/movies broadcast or on tape rentals. Now, I have just about everything I could ever want available - without ADs... those were such a time-waster.
What has changed is that I have access to MORE information than my local (or school) libraries could ever provide - in a variety of more accessible formats. Whatever tools I need to get "work done", I can find a myriad of free and open-source options.
But - the overall days and household family routines are the same - now, instead of reading a paper book while waiting to pickup my kids (or other family members) "back-in-the-day", I can read my device, or connect with my DIY communities online on my phone - or learn something new. I don't have to schedule life around major broadcast events, I can easily do many tasks while I am "out-and-about".
Friction has been reduced.
If your parents are closing in on 70, I would have expected you to be closer to not quite 50 than not quite 60.
I am just over 50 myself and I agree with your points. Technology has changed, but life is largely very similar to where it was in the 90s, at least day to day. Attitudes are way worse now.
Thank you for this insight!
I always wonder the views of older people. My parents are very technology forward and have been my entire life so it is difficult to gauge how different life is compared to when they were growing up.
It's easy to hear "Oh well I only had 640kb of memory and typed programs out of a magazine I got in the mail!" and see as distinct from having 'unlimited' resources and the internet.
Your insight is good ("The biggest difference is I read the news on my phone instead of a physical newspaper") that life sort of stays the same but the modality changes. People still go to the store like they did in the mid-1800s but now it is by car.
I wonder what our "industrial revolution" will be where the previous generation lived (ie: out in the country on a farm) totally different lives to the current (ie: in the city in a factory). Maybe when space travel and multi-planetary living is normalized?