Talk:Scalability
Response to Dan Kaminsky's presentation
Dan's slides were presented at Black Hat 2011.
Response from Mike Hearn (original author)
Mike responded in this forum thread.
Other discussion
There is one major point that the page overlooks: the limitation on block creation.
Block creation is limited to an average of one block every ten minutes. Furthermore, block size (which includes the transactions in the block) is limited to 1,000,000 bytes.
Each transaction requires 10 bytes of fixed overhead, plus approximately 106 bytes for every input and approximately 69 bytes for every output. The exact size depends on the size of the public key, which I have not been able to confirm, but the keys in my wallet.dat seem to be about 65 bytes each.
If we assume that transactions average two inputs and two outputs, then the average transaction size will be about 350 bytes (note that the main page assumes an average of 1 KB per transaction). If we further assume that the block size will, in practice, be limited to 500,000 bytes because transaction fees increase as the block size increases, then there will be, on average, approximately 1,430 transactions per block. That works out to an average of roughly 2.4 transactions per second, well below the stated goal of at least 4,000 transactions per second.
Even if we assume only one input and one output per transaction, and that each block will contain the full 1,000,000 bytes, that still works out to only 5,405 transactions per block, or 9 transactions per second.
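A quick sanity check of those figures, as a rough sketch: the 10/106/69-byte sizes are the estimates above, and the exact two-input/two-output size comes to 360 bytes, which the text rounds to about 350.

    # Back-of-the-envelope throughput from the byte estimates above.
    TX_OVERHEAD = 10      # fixed bytes per transaction (estimate above)
    INPUT_SIZE = 106      # bytes per input (estimate above)
    OUTPUT_SIZE = 69      # bytes per output (estimate above)
    BLOCK_INTERVAL = 600  # one block every ten minutes, in seconds

    def tx_size(inputs, outputs):
        """Approximate serialized transaction size in bytes."""
        return TX_OVERHEAD + inputs * INPUT_SIZE + outputs * OUTPUT_SIZE

    # (inputs, outputs, effective block size in bytes)
    for n_in, n_out, block_bytes in [(2, 2, 500_000), (1, 1, 1_000_000)]:
        size = tx_size(n_in, n_out)
        per_block = block_bytes // size
        print(f"{n_in}-in/{n_out}-out, {size} B/tx: "
              f"{per_block} tx/block, {per_block / BLOCK_INTERVAL:.1f} tx/s")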
Unfortunately, this is not a limitation that can be overcome by simply increasing memory, or switching to a different ISP with more bandwidth. It is a built-in limitation, designed to deliberately slow down block creation. One solution is to somehow allow blocks to be freely created, while still keeping the rate of coin creation constant.
The bottom line is that, as it sits, this system is not scalable.
- MAX_BLOCK_SIZE has always been planned to increase as needed. That limitation should be ignored. theymos 17:15, 4 March 2011 (GMT)
- What Theymos said. Increasing MAX_BLOCK_SIZE will be done when "lightweight, header-only" client mode is done. Until then, block size has to be kept under control.--Gavin Andresen 00:19, 5 March 2011 (GMT)
- I've updated the page with more discussion of this topic. --Mike March 5 2011
The thing with Visa or any credit card company is that there wouldn't be that many actual transactions. When I buy stuff with my credit card the vendor doesn't get paid instantly. I pay my bill once a month and the vendor gets all his transactions lumped into one payment from Visa (once a week, I think). Someone correct me if I'm wrong, but the number of real transfers of money would be much smaller. --Randomproof 17:29, 1 April 2011 (GMT)
Would also be nice to get some kind of an estimate on how much the crypto-operations could be accelerated with a GPU. Jojkaart 20:48, 13 June 2011 (GMT)
Shouldn't this (under Optimizations -> Network Structure): "Switching to DNS would give dramatically faster startup times that do not scale with the size of the network." read: "Switching to DNS would give dramatically faster startup times that do scale with the size of the network."? i.e. remove the "not". --Tokoin 09:24, 21 July 2011 (GMT)
Moore's law
I think Moore's law should be mentioned.
Even though the comparison to Visa currently produces requirements for processing power, data storage, and network capacity that appear overwhelming for a casual user, the cost/performance ratio improves all the time. My calculations, based on data from Wikipedia and projected to 2020:
Disk space
Wikipedia quotes a research article estimating that "in 2020 a two-platter, 2.5-inch disk drive will be capable of storing more than 14 terabytes (TB) and will cost about $40". This could hold about 85 days (~3 months) of data at "visa-speed". This is a bit pricey, but within the scope of a normal user, and definitely enough for enthusiasts.
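For reference, a minimal sketch of that storage estimate; the 2,000 tx/s and 1 KB/tx figures are assumptions of mine, chosen to match the "visa-speed" comparison on the main page:

    # Days of "visa-speed" history a projected 14 TB (2020) drive could hold.
    TPS = 2_000         # assumed sustained transactions per second (Visa average)
    TX_BYTES = 1_000    # assumed average transaction size, per the main page
    DISK_BYTES = 14e12  # 14 TB drive projected for 2020

    bytes_per_day = TPS * TX_BYTES * 86_400  # 86,400 seconds per day
    print(f"{bytes_per_day / 1e9:.0f} GB/day, "
          f"{DISK_BYTES / bytes_per_day:.0f} days on one drive")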
Network capacity
Wikipedia references articles that estimate the "Moore's law" equivalent for networks to be a 50% increase per year. Over 9 years (2011-2020), this makes an increase of approximately 38 times. To reach a speed of 1 GB/s by then, someone would need about 26 MB/s at the moment, i.e. roughly 210 Mbit/s. I think this exceeds a normal user and most enthusiasts. Assuming a casual user has 2 Mbit/s at the moment, he'll scale to "visa-speed" in about 20 years, i.e. around 2031.
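A sketch of that bandwidth arithmetic, assuming 1 GB/s = 8,000 Mbit/s as the "visa-speed" target used above:

    from math import log

    # Bandwidth growth at an assumed 50% per year, 2011-2020.
    GROWTH = 1.5
    factor = GROWTH ** 9     # growth over 9 years, ~38x
    target_mbit = 8_000      # 1 GB/s expressed in Mbit/s
    start_mbit = 2           # assumed casual user's line in 2011
    years = log(target_mbit / start_mbit) / log(GROWTH)

    print(f"9-year factor: {factor:.0f}x")                    # ~38x
    print(f"needed today: {target_mbit / factor / 8:.0f} MB/s")  # ~26 MB/s
    print(f"2 Mbit/s reaches 1 GB/s in ~{years:.0f} years")   # ~20 years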
CPU performance
Moore's law is approximated as "doubling every 18 months". For 9 years that makes six doublings, i.e. a factor of 64. 50/64 ≈ 0.8 cores. A dual-core computer in 2020 should therefore be more than sufficient to handle relaying/verifying Bitcoin transactions at visa-speed. I'm ignoring mining. So why is someone proposing that the duplicate-transaction check be removed from the protocol? We would surely leave the door open to some nasty attacks if strange transactions or double spends slipped into blocks from attacking miners.
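And the CPU side; the 50-core baseline is the rough 2011 figure used in the division above:

    # CPU performance doubling every 18 months over 2011-2020.
    doublings = 9 / 1.5      # one doubling per 18 months -> 6 doublings
    factor = 2 ** doublings  # 64x
    cores_2011 = 50          # assumed cores needed for visa-speed in 2011

    print(f"growth factor: {factor:.0f}x")
    print(f"cores needed in 2020: {cores_2011 / factor:.2f}")  # ~0.78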
Summary
I only made very rough estimates. Feel free to recalculate with more accurate data. My conclusion is that CPU would be a no-brainer and disk space an annoyance, but network speed would indeed appear to be a problem.
SatoshiDice
I believe the implications of SatoshiDice and similar schemes that stress the network should be mentioned, since as of March 2013, SatoshiDice transactions occupy 80% of the block chain. --Alvox (talk) 23:09, 29 March 2013 (GMT)
- So mention them? Note that SatoshiDice abuses the network rather than merely stressing it, but clarifications can always be edited in after we have something to start from. --Luke-jr (talk) 23:28, 29 March 2013 (GMT)
- Is SatoshiDice relevant to scalability? If there is a solution to SatoshiDice spam then we probably should mention it, but I don't know of any solutions that can't be worked around by them --Lapp0 (talk) 17:50, 20 December 2014 (UTC)
Passively running a full node and other scalability options
This page makes no distinction between passively running a full node and actively using 90% of your CPU power, 90% of your bandwidth, and a ton of disk space. Clearly, if it takes significant effort to run a full node (arguably unlike right now), then the number of full nodes will drop significantly.
I propose collapsing this whole article into one section titled "Increasing the Block size (Hardfork)" and including the problems with hardforking along with the problems of expensive full nodes.
Along with this section there should be sections on other scalability improvements and their problems, including sidechains, opentransactions, and others. --Lapp0 (talk) 17:47, 20 December 2014 (UTC)
Removal of the word ripple
I originally added the mention of ripple back in 2011, before OpenCoin (now called Ripple Labs) purchased the ripple name and stuck it on an entirely unrelated centralized system. What ripple referred to back then was a network of pairwise trust that allowed payments to ripple from person to person, very similar to how Lightning actually works, except that ripple depended on trust while Lightning is made cryptographically secure with smart contracting.
When OpenCoin bought the name and applied it to something different, I protested the ethics of that move and went through and edited all my comments referring to ripple, where they could be edited; but the text here had been removed by Hearn's edit-warring at the time, so it didn't get fixed. I noticed today that Furunodo rightfully added an edit a few months back to indicate that Ripple's decentralization was bullshit. It was right to change the text, but his change was the wrong one: the text was written long before centralised-bullshit-"ripple" was created in 2013. When it was written, it referred to a system we'd now recognize as a primordial Lightning. Had the text been in the article in 2013, I would have fixed it then (and probably said "payment channels"), but it wasn't. I've corrected it now. Cheers. --Gmaxwell (talk) 03:59, 29 December 2020 (UTC)