That may be true, but there is definitely a difference between the way people talked about it earlier ("later we will all be paying with bitcoin") and the way they talk about it now ("it's a store of value, and maybe there is potential for bitcoin-backed currencies").
I think the scaling limitations were well known at the beginning. If Bitcoin were to get as large as Visa/Mastercard, the blockchain would be growing at a rate of a few gigabytes per day, which would kill decentralization.
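A rough sketch of that arithmetic. The exact figure depends heavily on the inputs; the volume and transaction-size numbers below are assumptions for illustration, not figures from the thread:

```python
# Back-of-the-envelope check of blockchain growth at card-network scale.
# Both constants are assumed figures, not sourced from the thread.
TX_PER_DAY = 150_000_000   # assumed Visa-scale daily transaction volume
BYTES_PER_TX = 250         # assumed average Bitcoin transaction size

growth_gb_per_day = TX_PER_DAY * BYTES_PER_TX / 1e9
print(f"~{growth_gb_per_day:.0f} GB of new blockchain data per day")
```

With these assumptions the growth lands in the tens of gigabytes per day; more conservative volume estimates bring it down to the "few gigabytes" range.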
As far as I was aware, this problem was mostly ignored.
It was expected (again, as stated in the whitepaper) that pruning old transactions (in order to shrink blocks that are "old enough") would keep storage requirements reasonable.
In my experience (circa 2011), this expectation seemed to be generally accepted without much question by anyone talking about bitcoin. If it was acknowledged as a flaw, it was usually hand-waved away as only likely to become a problem 10+ years in the future.
edit: Or that a new network would learn from the bitcoin experiment and implement a protocol that works better at large scale.
The limitations have been researched extensively outside of the echo chamber of bitcoin development. It is possible to scale a UTXO system like bitcoin's to hundreds of millions of transactions per day. Xthinner[1] can compress block sizes by 99%, but bitcoin devs have ignored this with hand-wavy arguments.
A bitcoiner pinged me and asked for my comment here. It really sucks that people are so easily bamboozled by dishonest scammers.
In the original bitcoin software a node would receive every transaction made while it was online twice: once when the transaction was first relayed, and once when it was included in a block. This was obviously wasteful, so we created and deployed a reconciliation scheme that exploits the fact that normally all, or almost all, of the included transactions are already known. https://github.com/bitcoin/bips/blob/master/bip-0152.mediawi...
But because Bitcoin developers are not dishonest scammers, they didn't run around putting out (no kidding) press releases claiming "98.6% compression", though that's what you get if you compare the size of the BIP152 message to the size of the block. In reality, since it depends on the transactions being known in advance, the unachievable limit for this class of approaches is a 50% bandwidth reduction for a node compared to the original behaviour. BIP152 achieves 49.3% out of that 50%, as measured on the latest block.
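The gap between "98.6% compression" and "49.3% bandwidth saving" falls out of simple arithmetic: the node already downloaded every transaction once at relay time, so only the second copy can be saved. A minimal sketch (the 1 MB block size is an arbitrary illustrative value):

```python
# Why 98.6% block-message compression yields at most ~50% total
# bandwidth savings for a node: transactions were already relayed once.
block_bytes = 1_000_000            # hypothetical 1 MB block
compact_msg = block_bytes * 0.014  # BIP152 message at 98.6% compression

old_total = 2 * block_bytes            # each transaction downloaded twice
new_total = block_bytes + compact_msg  # once at relay, plus compact block

saving = 1 - new_total / old_total
print(f"overall bandwidth saving: {saving:.1%}")  # → 49.3%
```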
Even before compact blocks were created back in December 2015, we knew even smaller messages could be achieved. E.g. we published a scheme that requires asymptotically 0 bytes per transaction, needing only data proportional to the size of the difference between the block and the recipient's guess at the next block. But what we found is that the simpler scheme actually propagated blocks much faster, because once the block message is down to just a few thousand bytes, other factors (like CPU time) dominate. Expending a lot of additional code and CPU time to move 49.3% closer to 50% isn't a win in actual usage.
[And for considerations other than block propagation, saving a few extra bytes per block is extremely irrelevant.]
It's also the case that some of these dishonestly hyped supposed improvements beyond what Bitcoin has done for years are actually totally brain-damaged and wouldn't work in practice because they're not robust against attack, but there isn't much reason to dive into the technical minutiae, because what they _claim to achieve_, once you strip off the dishonest marketing, isn't all that interesting.
That page claims it can compress a single transaction down to 12-16 bits. Unless the vast majority of BTC transactions are between the same few wallet addresses, this seems impossible. Even if you assume the transaction is an instance of a common known script, you still need a from-address, a to-address, and an amount, all of which are >16-bit quantities and in general are cryptographically random.
The only explanation I can think of is that they are relying on a sidechannel to communicate the actual transactions, which makes sense in the miner case (the utxo pool) but not in the general node case.
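The information-theoretic version of that objection is easy to state: 12-16 bits can only distinguish a few thousand to a few tens of thousands of possibilities, so such an encoding cannot carry transaction content; it can only index into a set of transactions the receiver already holds (e.g. its mempool), which is exactly the sidechannel suspicion above:

```python
# 12-16 bits cannot encode a transaction's content (two ~160-bit
# addresses plus a 64-bit amount already exceed 384 bits); it can only
# serve as an index into transactions the receiver already knows.
for bits in (12, 16):
    print(f"{bits} bits can distinguish at most {2**bits:,} known txs")
```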
Beyond that, I run a BTC node occasionally, and the bottleneck is validating blocks, not downloading them. Transactions are complicated enough right now that I'm only able to catch up at about 350x real-time (that is, it takes around a full CPU-day to validate a year of blocks/transactions).
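The "350x real-time" and "CPU-day per year" figures are consistent with each other, which a one-line check confirms:

```python
# Sanity check: catching up at 350x real-time means a year of blocks
# takes roughly one CPU-day to validate.
SPEEDUP = 350        # validation rate relative to block production
DAYS_PER_YEAR = 365

cpu_days = DAYS_PER_YEAR / SPEEDUP
print(f"~{cpu_days:.2f} CPU-days to validate one year of blocks")
```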
>but bitcoin devs have ignored this with hand-wavy arguments.
Can you provide links to these discussions? I searched around on google and all the results are relating to bitcoin cash. I also searched the usual places that bitcoin (non cash) people congregate and turned up nothing.
Not this again. The entire bitcoin blockchain fits in $6 of hard drive space. The average transaction fee right now is more than $11.50. The AVERAGE transaction fee is almost double what it costs to store the ENTIRE blockchain.
Stop with the storage space nonsense. The only people even storing the entire chain are enthusiasts, servers, and miners. Saying "what if it gets a thousand times as many transactions" is ridiculous, but even then it wouldn't be a problem. A few gigabytes a day? A $300 hard drive would still take a decade to fill. The few who do sync the entire chain can handle that.
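The decade claim checks out under plausible assumptions. The drive capacity and growth rate below are assumed numbers (roughly what $300 bought at the time), not figures from the thread:

```python
# Rough check of the "$300 drive lasts a decade" claim.
DRIVE_TB = 16          # assumed capacity of a ~$300 hard drive
GROWTH_GB_PER_DAY = 4  # "a few gigabytes a day"

days_to_fill = DRIVE_TB * 1000 / GROWTH_GB_PER_DAY
print(f"~{days_to_fill / 365:.0f} years to fill the drive")
```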
It doesn't turn up on Google anymore and might be gone now, but back in the day there was an article on a bitcoin website that went through some math, arguing that Bitcoin could achieve 4,000 transactions per second. People used to link to it on a regular basis.
Aside from that, people figured Moore's Law would continue at its historical pace, and Bitcoin could grow indefinitely at the same pace.