A deep dive into threshold signatures without mathematics, by ARPA's cryptographer Dr. Alex Su
A threshold signature is a distributed, multi-party signature protocol that comprises distributed key generation, signing, and verification algorithms.
In recent years, with the rapid development of blockchain technology, signature algorithms have gained widespread attention in both academic research and real-world applications, and properties such as security, practicality, scalability, and decentralization have been examined closely.
Because blockchains and signatures are so closely connected, advances in signature algorithms and the introduction of new signature paradigms directly affect the characteristics and efficiency of blockchain networks.
In addition, the key-management needs of institutions and individuals using distributed ledgers have spawned many wallet applications, and this change has also reached traditional enterprises. Whether in blockchain systems or in traditional financial institutions, threshold signature schemes can improve security and privacy in a variety of scenarios. As an emerging technology, threshold signatures are still the subject of academic research and discussion, and some security risks and practical problems remain to be examined.
This article starts from the technical rationale, touching on cryptography and the blockchain. We then compare multi-party computation with threshold signatures and weigh the pros and cons of different signature paradigms. Finally, we list use cases of threshold signatures, so that the reader can quickly get a working picture of the technology.
I. Cryptography in Daily Life
Before introducing threshold signatures, let's get a general understanding of cryptography. How does cryptography protect digital information? How does one create an identity in the digital world? At the very beginning, people wanted secure storage and transmission. Once someone creates a key, they can store secrets using symmetric encryption, and if two people hold the same key they can communicate securely: the king encrypts a command, and the general decrypts it with the corresponding key.
But when two people have no secure channel between them, how can they establish a shared key? The key exchange protocol came into being to solve exactly this. Analogously, if the king issues an order to everyone in the digital world, how can each person verify that the order really originated from the king? For that, the digital signature protocol was invented. Both protocols are built on public-key cryptography, also known as asymmetric cryptography.
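To make the idea concrete, here is a minimal sketch of Diffie-Hellman key exchange, the classic way two parties derive a shared key over an insecure channel. The prime, generator, and party names are toy assumptions for illustration; real deployments use standardized large groups or elliptic curves.

```python
# Toy Diffie-Hellman key exchange (illustration only).
import secrets

p = 2**61 - 1   # a Mersenne prime modulus, far too small for real security
g = 5           # public generator

a = secrets.randbelow(p - 2) + 1   # the king's secret exponent
b = secrets.randbelow(p - 2) + 1   # the general's secret exponent

A = pow(g, a, p)   # the king publishes g^a mod p
B = pow(g, b, p)   # the general publishes g^b mod p

# Each side combines its own secret with the other's public value.
shared_king = pow(B, a, p)       # (g^b)^a mod p
shared_general = pow(A, b, p)    # (g^a)^b mod p
assert shared_king == shared_general   # both now hold the same key material
```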
"Tiger Rune" refers to the troop-deployment tokens used by ancient Chinese emperors: bronze or gold tallies cast in the shape of a tiger and split in half, with one half given to the general and the other kept by the emperor. Only when the two halves were brought together did the holder gain the authority to dispatch troops.
Symmetric and asymmetric encryption make up the main components of modern cryptography. Both have three fixed parts: key generation, encryption, and decryption. Here, we focus on digital signature protocols. Key generation produces a pair of associated keys: a public key and a private key. The public key is open to everyone, while the private key represents the identity and is revealed only to its owner; whoever holds the private key holds the identity the key represents. The signing algorithm (the counterpart of encryption) takes the private key as input and generates a signature over a piece of information. The verification algorithm (the counterpart of decryption) uses the public key to check the validity of the signature and the correctness of the information.
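As a concrete illustration of the key generation, signing, and verification triple, here is a minimal sketch using the third-party Python `ecdsa` package (one possible library choice, not the only one); the message is a placeholder.

```python
# Minimal digital-signature round trip: generate a key pair, sign, verify.
# Requires the third-party package:  pip install ecdsa
from ecdsa import SigningKey, SECP256k1

sk = SigningKey.generate(curve=SECP256k1)   # private key: kept by the owner
vk = sk.get_verifying_key()                 # public key: published to everyone

message = b"transfer 10 coins to Bob"       # placeholder message
signature = sk.sign(message)                # only the private-key holder can produce this

# Anyone holding the public key can check the signature; verify() raises
# BadSignatureError if the message or the signature has been tampered with.
assert vk.verify(signature, message)
```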
II. Signature in the Blockchain
Turning to the blockchain: it uses a consensus algorithm to construct a distributed ledger, and signatures provide the identity information for it. Every transaction on the blockchain is identified by the signature of the transaction initiator, and the network verifies that signature according to specific rules to check the transaction's validity, all thanks to the unforgeability and verifiability of signatures.
From cryptography's point of view, the blockchain is more than a signature protocol plus a Proof-of-Work consensus algorithm built on hash functions. Blockchain provides an infrastructure layer for consensus and transactions, and on top of it novel cryptographic protocols such as secure multi-party computation, zero-knowledge proofs, and homomorphic encryption can thrive. For example, secure multi-party computation, which is naturally suited to distributed networks, can build secure data-transfer and machine-learning platforms on the blockchain, while the special properties of zero-knowledge proofs make verifiable anonymous transactions feasible. The combination of these cutting-edge cryptographic protocols with blockchain technology will drive the development of the digital world over the next decade, enabling secure data sharing, privacy protection, and applications not yet imaginable.
III. Secure Multi-party Computation and Threshold Signature
Having seen how the digital signature protocol affects our lives and how it helps the blockchain establish identities and record transactions, we now turn to secure multi-party computation (MPC), which shows how threshold signatures achieve decentralization. For more about MPC, please refer to our previous posts, which detail its technical background and application scenarios.
MPC, by definition, is a computation that several participants execute jointly and securely. Security here means that, within one computation, every participant provides its own private input and can obtain the result, but cannot learn any private information entered by the other parties. In 1982, when Prof. Yao proposed the concept of MPC, he gave an example known as the "Millionaires' Problem": two millionaires want to know who is richer without revealing their actual wealth to each other. Specifically, secure multi-party computation cares about the following properties:
- Privacy: no party learns anything beyond its own input and the agreed output.
- Correctness: the output each party receives really is the result of the agreed computation.
- Independence of inputs: corrupted parties cannot choose their inputs as a function of the honest parties' inputs.
- Guaranteed output delivery and fairness: corrupted parties cannot prevent the honest parties from receiving the output, nor obtain the output while denying it to others.
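Yao's millionaires' problem needs a full comparison protocol, but the core idea of MPC, computing on inputs that nobody reveals, can be sketched with simple additive secret sharing. The toy example below (hypothetical field size and three imaginary parties) jointly computes a sum while no party ever sees another party's input.

```python
import secrets

P = 2**61 - 1   # toy prime modulus; individual shares look uniformly random

def share(value, n):
    """Split a private input into n additive shares that sum to value mod P."""
    pieces = [secrets.randbelow(P) for _ in range(n - 1)]
    pieces.append((value - sum(pieces)) % P)
    return pieces

# Three parties with private inputs; each sends one share to each party.
inputs = [40, 25, 35]
all_shares = [share(x, 3) for x in inputs]

# Party j adds up the j-th share of every input locally; only these partial
# sums are exchanged and combined into the public result.
partial_sums = [sum(column) % P for column in zip(*all_shares)]
result = sum(partial_sums) % P
assert result == sum(inputs)   # the joint output (100) is all that is revealed
```

Real MPC protocols extend this idea to arbitrary functions and to adversaries who deviate from the protocol, but the principle of working only on shares is the same.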
IV. Single Signature, Multi-Signature and Threshold Signature
Besides the threshold signature, what other methods can we choose?
In the beginning, Bitcoin used single signatures: each account is controlled by one private key, and a message signed by that key is considered legitimate. Later, to avoid a single point of failure and to allow an account to be managed by several people, Bitcoin added a multi-signature function. Multi-signature can be understood simply as each account owner signing in turn and posting all the signatures to the chain, where they are verified in order; when the spending condition is met (for example, m of n valid signatures), the transaction is legitimate. This achieves control by multiple private keys.
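The on-chain logic can be sketched as follows, reusing the hypothetical `ecdsa` keys from the earlier example: every owner signs the same transaction independently, the verifier checks each signature against the corresponding public key, and the spend is accepted once at least m of n signatures are valid.

```python
from ecdsa import SigningKey, SECP256k1, BadSignatureError

def verify_multisig(message, pubkeys, signatures, m):
    """Accept when at least m of the supplied signatures verify under the owners' keys."""
    valid = 0
    for vk, sig in zip(pubkeys, signatures):
        if sig is None:
            continue
        try:
            if vk.verify(sig, message):
                valid += 1
        except BadSignatureError:
            pass
    return valid >= m

# Hypothetical 2-of-3 account: any two of the three owners can authorize a spend.
owners = [SigningKey.generate(curve=SECP256k1) for _ in range(3)]
pubkeys = [sk.get_verifying_key() for sk in owners]

tx = b"pay 5 coins to the merchant"
signatures = [owners[0].sign(tx), None, owners[2].sign(tx)]   # the second owner is offline
assert verify_multisig(tx, pubkeys, signatures, m=2)
```

Note that all the submitted signatures and all n public keys end up on chain, which is the overhead and privacy leak the threshold approach discussed below avoids.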
So, what’s the difference between multi-signature and threshold signature?
Several constraints of multi-signature are:
- Every signer's signature (and public key) is posted on chain, which enlarges the transaction and drives up fees.
- Because all the signatures are public, the set of signers and the account's control policy are exposed, weakening anonymity.
- The blockchain must natively support multi-signature scripts, so the approach does not carry over to every chain.
With both multi-signature and threshold signatures, a master private key is never reconstructed, not even transiently in memory or a cache. By contrast, schemes that merely split a key into shares and rebuild it in order to sign do incur such a short-term reconstruction, which is not tolerable for vital accounts.
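For contrast, here is a minimal Shamir t-of-n secret-sharing sketch with toy parameters. It shows what holding "shares" of a key means, and its reconstruct step is exactly the operation a threshold signature scheme refuses to perform on the master key: instead of rebuilding the key to sign, the parties combine partial signatures so the key never exists in one place.

```python
import secrets

P = 2**61 - 1   # toy prime field for illustration

def split(secret, t, n):
    """Shamir t-of-n sharing: sample a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0. Threshold signing never runs this on the
    master key; it is shown here only to illustrate that any t shares determine the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P   # den^-1 via Fermat's little theorem
    return secret

master_key = 123456789
shares = split(master_key, t=2, n=3)
assert reconstruct(shares[:2]) == master_key   # any 2 of the 3 shares suffice
assert reconstruct(shares[1:]) == master_key
```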
As with other secure multi-party computation protocols, introducing additional participants changes the security model from that of traditional point-to-point encrypted transmission. Earlier algorithms did not need to consider collusion or malicious participants; now the behavior of real-world entities cannot be restricted, and adversaries may find their way into the participating group.
Therefore, multi-party cryptographic protocols cannot simply inherit the security strength of their single-party counterparts. Effort is still needed to develop threshold signature applications, integrate them with existing infrastructure, and test the true strength of threshold signature schemes.
1. Key Management
Using threshold signatures in a key management system allows more flexible administration, as in ARPA's enterprise key management API. One can use the access structure to design authorization patterns for users with different privileges. In addition, when new entities join, the threshold signature scheme can quickly refresh the key shares; this operation can also be performed periodically to raise the difficulty of compromising multiple key shares at the same time. Finally, to a verifier a threshold signature looks no different from a traditional signature, so it is compatible with legacy equipment and reduces upgrade costs. ARPA's enterprise key management modules already support ECDSA over secp256k1 and Ed25519 parameters, and more parameter sets will be supported in the future.
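The refresh idea can be sketched in a few lines, assuming additive shares and toy parameters for simplicity (this illustrates the general technique, not ARPA's actual protocol): each refresh adds a fresh sharing of zero, so the key is unchanged but old and new shares can no longer be combined, forcing an attacker to compromise a quorum within a single epoch.

```python
import secrets

P = 2**61 - 1   # toy prime modulus for illustration

def refresh(shares):
    """Proactively re-randomize additive shares without changing the shared key:
    add a fresh additive sharing of zero, one piece per party."""
    zero_pieces = [secrets.randbelow(P) for _ in range(len(shares) - 1)]
    zero_pieces.append((-sum(zero_pieces)) % P)
    return [(s + z) % P for s, z in zip(shares, zero_pieces)]

master_key = 0xC0FFEE
shares = [secrets.randbelow(P) for _ in range(2)]
shares.append((master_key - sum(shares)) % P)     # 3 additive shares of the key

new_shares = refresh(shares)
assert sum(new_shares) % P == master_key          # the same key is still shared
assert new_shares != shares                       # but every share has changed
```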
2. Crypto Wallet
Wallets based on threshold signatures are more secure because the private key never needs to be rebuilt. Also, since the individual signatures are not all posted publicly, a degree of anonymity can be achieved. Compared with multi-signature, a threshold signature requires lower transaction fees. As in the key management case, the administration of digital asset accounts can also be more flexible. Furthermore, a threshold signature wallet can support blockchains that do not natively support multi-signature, which also reduces the risk of smart-contract bugs.
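A back-of-the-envelope sketch of the fee claim, using assumed rough sizes (a DER-encoded ECDSA signature of roughly 72 bytes and a 33-byte compressed public key; actual on-chain encodings vary by chain and script type):

```python
# Assumed rough sizes in bytes; real encodings differ per chain and script type.
SIG_BYTES = 72       # typical DER-encoded ECDSA signature
PUBKEY_BYTES = 33    # compressed secp256k1 public key

def multisig_bytes(m, n):
    # A script-style m-of-n multisig spend reveals m signatures and n public keys.
    return m * SIG_BYTES + n * PUBKEY_BYTES

def threshold_bytes():
    # A threshold-signed spend looks like one ordinary signature and one key.
    return SIG_BYTES + PUBKEY_BYTES

print(multisig_bytes(2, 3))   # 243 bytes of signature data for a 2-of-3 account
print(threshold_bytes())      # 105 bytes, no matter how many co-signers exist
```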
Conclusion
This article describes why people need threshold signatures and the attractive properties they can bring. One can see that threshold signatures offer higher security, more flexible control, and a more efficient verification process. Different signature technologies suit different application scenarios, such as aggregate signatures and BLS-based multi-signatures, which are not covered here. Readers are also welcome to read more about secure multi-party computation. Secure computation is the holy grail of cryptographic protocols and can accomplish far more than threshold signatures alone; in the near future, it will solve more concrete application problems in the digital world.
About the Author
Dr. Alex Su works at ARPA as a cryptography researcher. He received his Bachelor's degree in Electronic Engineering and his Ph.D. in Cryptography from Tsinghua University. Dr. Su's research interests include multi-party computation and the implementation and acceleration of post-quantum cryptography.
About ARPA
ARPA is committed to providing secure data transfer solutions based on cryptographic operations for businesses and individuals.
The ARPA secure multi-party computing network can be used as a protocol layer to implement privacy computing capabilities for public chains, and it enables developers to build efficient, secure, and data-protected business applications on private smart contracts. Enterprise and personal data can, therefore, be analyzed securely on the ARPA computing network without fear of exposing the data to any third party.
ARPA's multi-party computing technology supports secure data markets, precision marketing, credit score calculations, and even the secure monetization of personal data.
ARPA’s core team is international, with PhDs in cryptography from Tsinghua University, experienced systems engineers from Google, Uber, Amazon, Huawei and Mitsubishi, blockchain experts from the University of Tokyo, AIG, and the World Bank. We also have hired data scientists from CircleUp, as well as financial and data professionals from Fosun and Fidelity Investments.
For more information about ARPA, or to join our team, please contact us at [email protected].
Learn about ARPA's recent official news:
Telegram (English): https://t.me/arpa_community
Telegram (Việt Nam): https://t.me/ARPAVietnam
Telegram (Russian): https://t.me/arpa_community_ru
Telegram (Indonesian): https://t.me/Arpa_Indonesia
Telegram (Thai): https://t.me/Arpa_Thai
Telegram (Turkish): https://t.me/Arpa_Turkey
Korean Chats: https://open.kakao.com/o/giExbhmb (Kakao) & https://t.me/arpakoreanofficial (Telegram, new)
" And by the way, I think we can successfully make Zcash too traceable for criminals like WannaCry, but still completely private & fungible. …"Ethereum's track record of immutability is also poor. Ethereum was supposed to be an immutable blockchain ledger, however after the DAO hack this proved to not be the case. A 2016 article on Saintly Law summarised the problematic nature of Ethereum's leadership and blockchain intervention:
" Many ethereum and blockchain advocates believe that the intervention was the wrong move to make in this situation. Smart contracts are meant to be self-executing, immutable and free from disturbance by organisations and intermediaries. Yet the building block of all smart contracts, the code, is inherently imperfect. This means that the technology is vulnerable to the same malicious hackers that are targeting businesses and governments. It is also clear that the large scale intervention after the DAO hack could not and would not likely be taken in smaller transactions, as they greatly undermine the viability of the cryptocurrency and the technology."Monero provides Fungibility and Privacy in a Cashless World
"Imagine you sell cupcakes and receive Bitcoin as payment. It turns out that someone who owned that Bitcoin before you was involved in criminal activity. Now you are worried that you have become a suspect in a criminal case, because the movement of funds to you is a matter of public record. You are also worried that certain Bitcoins that you thought you owned will be considered ‘tainted’ and that others will refuse to accept them as payment."This lack of fungibility means that certain businesses will be obligated to avoid accepting BTC that have been previously used for purposes which are illegal, or simply run afoul of their Terms of Service. Currently some large Bitcoin companies are blocking, suspending, or closing accounts that have received Bitcoin used in online gambling or other purposes deemed unsavory by said companies. Monero has been built specifically to address the problem of traceability and non-fungibility inherent in other cryptocurrencies. By having completely private transactions Monero is truly fungible and there can be no blacklisting of certain XMR, while at the same time providing all the benefits of a secure, decentralized, permanent blockchain.
"A lack of fungibility means that when sending or receiving funds, if the other person personally knows you during a transaction, or can get any sort of information on you, or if you provide a residential address for shipping etc. – you could quite potentially have them use this against you for personal gain"For those that wish to seek more information about why Monero is a superior form of money, read The Merits of Monero: Why Monero Vs Bitcoin over on the Monero.how website.
All private transactions, More tested privacy tech, No tax on miners to pay investors, No high inflation... better investment.
John McAfee, arguably cryptocurrency's most controversial character at the moment, has publicly supported Monero numerous times over the last twelve months (before he started shilling ICOs), and has even claimed it will overtake Bitcoin.
"Monero is a really good one. Monero is an incredible currency, it's completely private."There is a common belief that most of the money in cryptocurrency is still chasing the quick pump and dumps, however as the market matures, more money will flow into legitimate projects such as Monero. Monero's organic growth in price is evidence smart money is aware of Monero and gradually filtering in.
"In Bitcoin, the main chain is constrained and fees are ludicrous. This results in users being pushed to second layer stuff (e.g. sidechains, lightning network). Users do not have optionality in Bitcoin. In Monero, the goal is to make the main-chain accessible to everyone by keeping fees reasonable. We want users to have optionality, i.e., let them choose whether they'd like to use the main chain or second layer stuff. We don't want to take that optionality away from them."When the Spagni CNBC video was recently linked to the Monero subreddit, it was met with lengthy debate and discussion from both users and developers. u/ferretinjapan summarised the issue explaining:
"Monero has all the mechanisms it needs to find the balance between transaction load, and offsetting the costs of miner infrastructure/profits, while making sure the network is useful for users. But like the interviewer said, the question is directed at "right now", and Fluffys right to a certain extent, Monero's transactions are huge, and compromises in blockchain security will help facilitate less burdensome transactional activity in the future. But to compare Monero to Bitcoin's transaction sizes is somewhat silly as Bitcoin is nowhere near as useful as monero, and utility will facilitate infrastructure building that may eventually utterly dwarf Bitcoin. And to equate scaling based on a node being run on a desktop being the only option for what classifies as "scalable" is also an incredibly narrow interpretation of the network being able to scale, or not. Given the extremely narrow definition of scaling people love to (incorrectly) use, I consider that a pretty crap question to put to Fluffy in the first place, but... ¯_(ツ)_/¯"u/xmrusher also contributed to the discussion, comparing Bitcoin to Monero using this analogous description:
"While John is much heavier than Henry, he's still able to run faster, because, unlike Henry, he didn't chop off his own legs just so the local wheelchair manufacturer can make money. While Morono has much larger transactions then Bitcoin, it still scales better, because, unlike Bitcoin, it hasn't limited itself to a cripplingly tiny blocksize just to allow Blockstream to make money."Setting up a wallet can still be time consuming
I was planning to submit a pull request to the 0.11 release of Bitcoin Core that will allow miners to create blocks bigger than one megabyte, starting a little less than a year from now. But this process of peer review turned up a technical issue that needs to get addressed, and I don’t think it can be fixed in time for the first 0.11 release.
In other words, Gavin proposed a hard fork via a series of blog posts, bypassing all developer communication channels altogether and asking for personal, private emails from anyone interested in discussing the proposal further.
I will be writing a series of blog posts, each addressing one argument against raising the maximum block size, or against scheduling a raise right now... please send me an email ([email protected]) if I am missing any arguments
A common argument for letting Bitcoin blocks fill up is that the outcome won’t be so bad: just a market for fees... this is wrong. I don’t believe fees will become high and stable if Bitcoin runs out of capacity. Instead, I believe Bitcoin will crash.
He also, in the latter article, explained that he disagreed with Satoshi's vision for how Bitcoin would mature:
...a permanent backlog would start to build up... as the backlog grows, nodes will start running out of memory and dying... as Core will accept any transaction that’s valid without any limit a node crash is eventually inevitable.
Neither me nor Gavin believe a fee market will work as a substitute for the inflation subsidy.
Gavin continued to publish the series of blog posts he had announced while Hearn made these predictions.
Recently there has been a flurry of posts by Gavin at http://gavinandresen.svbtle.com/ which advocate strongly for increasing the maximum block size. However, there hasn't been any discussion on this mailing list in several years as far as I can tell...
Shortly thereafter, Corallo explained further:
So, at the risk of starting a flamewar, I'll provide a little bait to get some responses and hope the discussion opens up into an honest comparison of the tradeoffs here. Certainly a consensus in this kind of technical community should be a basic requirement for any serious commitment to blocksize increase.
Personally, I'm rather strongly against any commitment to a block size increase in the near future. Long-term incentive compatibility requires that there be some fee pressure, and that blocks be relatively consistently full or very nearly full. What we see today are transactions enjoying next-block confirmations with nearly zero pressure to include any fee at all (though many do because it makes wallet code simpler).
This allows the well-funded Bitcoin ecosystem to continue building systems which rely on transactions moving quickly into blocks while pretending these systems scale. Thus, instead of working on technologies which bring Bitcoin's trustlessness to systems which scale beyond a blockchain's necessarily slow and (compared to updating numbers in a database) expensive settlement, the ecosystem as a whole continues to focus on building centralized platforms and advocate for changes to Bitcoin which allow them to maintain the status quo
The point of the hard block size limit is exactly because giving miners free rule to do anything they like with their blocks would allow them to do any number of crazy attacks. The incentives for miners to pick block sizes are no where near compatible with what allows the network to continue to run in a decentralized manner.
Tier Nolan considered possible extensions and modifications that might improve Gavin's proposal and argued that soft caps could be used to mitigate against the dangers of a blocksize increase. Tom Harding voiced support for Gavin's proposal
explore all the complexities involved with deployment of hard forks. Let’s not just do a one-off ad-hoc thing.
Matt Whitlock voiced his opinion:
I'm not so much opposed to a block size increase as I am opposed to a hard fork... I strongly fear that the hard fork itself will become an excuse to change other aspects of the system in ways that will have unintended and possibly disastrous consequences.
Bryan Bishop strongly opposed Gavin's proposal, and offered a philosophical perspective on the matter:
there has been significant public discussion... about why increasing the max block size is kicking the can down the road while possibly compromising blockchain security. There were many excellent objections that were raised that, sadly, I see are not referenced at all in the recent media blitz. Frankly I can't help but feel that if contributions, like those from #bitcoin-wizards, have been ignored in lieu of technical analysis, and the absence of discussion on this mailing list, that I feel perhaps there are other subtle and extremely important technical details that are completely absent from this--and other-- proposals.
Gregory Maxwell echoed and extended that perspective:
Secured decentralization is the most important and most interesting property of bitcoin. Everything else is rather trivial and could be achieved millions of times more efficiently with conventional technology. Our technical work should be informed by the technical nature of the system we have constructed.
There's no doubt in my mind that bitcoin will always see the most extreme campaigns and the most extreme misunderstandings... for development purposes we must hold ourselves to extremely high standards before proposing changes, especially to the public, that have the potential to be unsafe and economically unsafe.
There are many potential technical solutions for aggregating millions (trillions?) of transactions into tiny bundles. As a small proof-of-concept, imagine two parties sending transactions back and forth 100 million times. Instead of recording every transaction, you could record the start state and the end state, and end up with two transactions or less. That's a 100 million fold, without modifying max block size and without potentially compromising secured decentralization.
The MIT group should listen up and get to work figuring out how to measure decentralization and its security.. Getting this measurement right would be really beneficial because we would have a more academic and technical understanding to work with.
When Bitcoin is changed fundamentally, via a hard fork, to have different properties, the change can create winners or losers...
Peter Todd also summarized some academic findings on the subject:
There are non-trivial number of people who hold extremes on any of these general belief patterns; Even among the core developers there is not a consensus on Bitcoin's optimal role in society and the commercial marketplace.
there is a at least a two fold concern on this particular ("Long term Mining incentives") front:
One is that the long-held argument is that security of the Bitcoin system in the long term depends on fee income funding autonomous, anonymous, decentralized miners profitably applying enough hash-power to make reorganizations infeasible.
For fees to achieve this purpose, there seemingly must be an effective scarcity of capacity.
The second is that when subsidy has fallen well below fees, the incentive to move the blockchain forward goes away. An optimal rational miner would be best off forking off the current best block in order to capture its fees, rather than moving the blockchain forward...
tools like the Lightning network proposal could well allow us to hit a greater spectrum of demands at once--including secure zero-confirmation (something that larger blocksizes reduce if anything), which is important for many applications. With the right technology I believe we can have our cake and eat it too, but there needs to be a reason to build it; the security and decentralization level of Bitcoin imposes a hard upper limit on anything that can be based on it.
Another key point here is that the small bumps in blocksize which wouldn't clearly knock the system into a largely centralized mode--small constants--are small enough that they don't quantitatively change the operation of the system; they don't open up new applications that aren't possible today
the procedure I'd prefer would be something like this: if there is a standing backlog, we-the-community of users look to indicators to gauge if the network is losing decentralization and then double the hard limit with proper controls to allow smooth adjustment without fees going to zero (see the past proposals for automatic block size controls that let miners increase up to a hard maximum over the median if they mine at quadratically harder difficulty), and we don't increase if it appears it would be at a substantial increase in centralization risk. Hardfork changes should only be made if they're almost completely uncontroversial--where virtually everyone can look at the available data and say "yea, that isn't undermining my property rights or future use of Bitcoin; it's no big deal". Unfortunately, every indicator I can think of except fee totals has been going in the wrong direction almost monotonically along with the blockchain size increase since 2012 when we started hitting full blocks and responded by increasing the default soft target. This is frustrating
many people--myself included--have been working feverishly hard behind the scenes on Bitcoin Core to increase the scalability. This work isn't small-potatoes boring software engineering stuff; I mean even my personal contributions include things like inventing a wholly new generic algebraic optimization applicable to all EC signature schemes that increases performance by 4%, and that is before getting into the R&D stuff that hasn't really borne fruit yet, like fraud proofs. Today Bitcoin Core is easily >100 times faster to synchronize and relay than when I first got involved on the same hardware, but these improvements have been swallowed by the growth. The ironic thing is that our frantic efforts to keep ahead and not lose decentralization have both not been enough (by the best measures, full node usage is the lowest its been since 2011 even though the user base is huge now) and yet also so much that people could seriously talk about increasing the block size to something gigantic like 20MB. This sounds less reasonable when you realize that even at 1MB we'd likely have a smoking hole in the ground if not for existing enormous efforts to make scaling not come at a loss of decentralization.
In short, without either a fixed blocksize or fixed fee per transaction Bitcoin will not survive as there is no viable way to pay for PoW security. The latter option - fixed fee per transaction - is non-trivial to implement in a way that's actually meaningful - it's easy to give miners "kickbacks" - leaving us with a fixed blocksize.
Some developers (e.g. Aaron Voisine) voiced support for Gavin's proposal, which repeated Mike Hearn's "crash landing" arguments.
Even a relatively small increase to 20MB will greatly reduce the number of people who can participate fully in Bitcoin, creating an environment where the next increase requires the consent of an even smaller portion of the Bitcoin ecosystem. Where does that stop? What's the proposed mechanism that'll create an incentive and social consensus to not just 'kick the can down the road'(3) and further centralize but actually scale up Bitcoin the hard way?
I am - in general - in favor of increasing the size of blocks...
Mike Hearn responded:
Controversial hard forks. I hope the mailing list here today already proves it is a controversial issue. Independent of personal opinions pro or against, I don't think we can do a hard fork that is controversial in nature. Either the result is effectively a fork, and pre-existing coins can be spent once on both sides (effectively failing Bitcoin's primary purpose), or the result is one side forced to upgrade to something they dislike - effectively giving a power to developers they should never have. Quoting someone: "I did not sign up to be part of a central banker's committee".
The reason for increasing is "need". If "we need more space in blocks" is the reason to do an upgrade, it won't stop after 20 MB. There is nothing fundamental possible with 20 MB blocks that isn't with 1 MB blocks.
Misrepresentation of the trade-offs. You can argue all you want that none of the effects of larger blocks are particularly damaging, so everything is fine. They will damage something (see below for details), and we should analyze these effects, and be honest about them, and present them as a trade-off we choose to make to scale the system better. If you just ask people if they want more transactions, of course you'll hear yes. If you ask people if they want to pay less taxes, I'm sure the vast majority will agree as well.
Miner centralization. There is currently, as far as I know, no technology that can relay and validate 20 MB blocks across the planet, in a manner fast enough to avoid very significant costs to mining. There is work in progress on this (including Gavin's IBLT-based relay, or Greg's block network coding), but I don't think we should be basing the future of the economics of the system on undemonstrated ideas. Without those (or even with), the result may be that miners self-limit the size of their blocks to propagate faster, but if this happens, larger, better-connected, and more centrally-located groups of miners gain a competitive advantage by being able to produce larger blocks. I would like to point out that there is nothing evil about this - a simple feedback to determine an optimal block size for an individual miner will result in larger blocks for better connected hash power. If we do not want miners to have this ability, "we" (as in: those using full nodes) should demand limitations that prevent it. One such limitation is a block size limit (whatever it is).
Ability to use a full node.
Skewed incentives for improvements... without actual pressure to work on these, I doubt much will change. Increasing the size of blocks now will simply make it cheap enough to continue business as usual for a while - while forcing a massive cost increase (and not just a monetary one) on the entire ecosystem.
Fees and long-term incentives.
I don't think 1 MB is optimal. Block size is a compromise between scalability of transactions and verifiability of the system. A system with 10 transactions per day that is verifiable by a pocket calculator is not useful, as it would only serve a few large banks' settlements. A system which can deal with every coffee bought on the planet, but requires a Google-scale data center to verify is also not useful, as it would be trivially out-competed by a VISA-like design. The usefulness needs to be in a balance, and there is no optimal choice for everyone. We can choose where that balance lies, but we must accept that this is done as a trade-off, and that that trade-off will have costs such as hardware costs, decreasing anonymity, less independence, smaller target audience for people able to fully validate, ...
this list is not a good place for making progress or reaching decisions.
Peter Todd then pointed out that, contrary to Mike's claims, developer consensus had been achieved within Core plenty of times recently. Btc-drak asked Mike to "explain where the 12 months timeframe comes from?"
if Bitcoin continues on its current growth trends it will run out of capacity, almost certainly by some time next year. What we need to see right now is leadership and a plan, that fits in the available time window.
I no longer believe this community can reach consensus on anything protocol related.
When the money supply eventually dwindles I doubt it will be fee pressure that funds mining
What I don't see from you yet is a specific and credible plan that fits within the next 12 months and which allows Bitcoin to keep growing.
We've successfully reached consensus for several softfork proposals already. I agree with others that hardforks need to be uncontroversial and there should be consensus about them. If you have other ideas for the criteria for hardfork deployment I'm all ears. I just hope that by "What we need to see right now is leadership" you don't mean something like "when Gavin and Mike agree it's enough to deploy a hardfork" when you go from vague to concrete.
Some suspected Gavin/Mike were trying to rush the hard fork for personal reasons.
Oh, so your answer to "bitcoin will eventually need to live on fees and we would like to know more about how it will look like then" it's "no bitcoin long term it's broken long term but that's far away in the future so let's just worry about the present". I agree that it's hard to predict that future, but having some competition for block space would actually help us get more data on a similar situation to be able to predict that future better. What you want to avoid at all cost (the block size actually being used), I see as the best opportunity we have to look into the future.
this is my plan: we wait 12 months... and start having full blocks and people having to wait 2 blocks for their transactions to be confirmed some times. That would be the beginning of a true "fee market", something that Gavin used to say was his #1 priority not so long ago (which seems contradictory with his current efforts to avoid that from happening). Having a true fee market seems clearly an advantage. What are supposedly disastrous negative parts of this plan that make an alternative plan (ie: increasing the block size) so necessary and obvious. I think the advocates of the size increase are failing to explain the disadvantages of maintaining the current size. It feels like the explanation are missing because it should be somehow obvious how the sky will burn if we don't increase the block size soon. But, well, it is not obvious to me, so please elaborate on why having a fee market (instead of just an price estimator for a market that doesn't even really exist) would be a disaster.
No. What I meant is that someone (theoretically Wladimir) needs to make a clear decision. If that decision is "Bitcoin Core will wait and watch the fireworks when blocks get full", that would be showing leadership.
Jorge Timón responded:
I will write more on the topic of what will happen if we hit the block size limit... I don't believe we will get any useful data out of such an event. I've seen distributed systems run out of capacity before. What will happen instead is technological failure followed by rapid user abandonment...
we need to hear something like that from Wladimir, or whoever has the final say around here.
it is true that "universally uncontroversial" (which is what I think the requirement should be for hard forks) is a vague qualifier that's not formally defined anywhere. I guess we should only consider rational arguments. You cannot just nack something without further explanation. If his explanation was "I will change my mind after we increase block size", I guess the community should say "then we will just ignore your nack because it makes no sense". In the same way, when people use fallacies (purposely or not) we must expose that and say "this fallacy doesn't count as an argument". But yeah, it would probably be good to define better what constitutes a "sensible objection" or something. That doesn't seem simple though.Mike Hearn again asserted the need for a leader:
it seems that some people would like to see that happening before the subsidies are low (not necessarily null), while other people are fine waiting for that but don't want to ever be close to the scale limits anytime soon. I would also like to know for how long we need to prioritize short term adoption in this way. As others have said, if the answer is "forever, adoption is always the most important thing" then we will end up with an improved version of Visa. But yeah, this is progress, I'll wait for your more detailed description of the tragedies that will follow hitting the block limits, assuming for now that it will happen in 12 months. My previous answer to the nervous "we will hit the block limits in 12 months if we don't do anything" was "not sure about 12 months, but whatever, great, I'm waiting for that to observe how fees get affected". But it should have been a question "what's wrong with hitting the block limits in 12 months?"
There must be a single decision maker for any given codebase.
Bryan Bishop attempted to explain why this did not make sense with git architecture.
BCH will be distributed to settled bitcoin wallet balances as of the UTC timestamp of the first forking block, which is expected to occur on August 1st, 2017.
These rules turned out to be game-able. Because Bitfinex did not charge BCH to open short positions leading up to the split, one could simply purchase 10 BTC and short 10 BTC. This way, you could collect free BCH without any exposure to BTC price volatility. If BTC drops, the shorts cancel out any loss. If BTC soars, the profits cancel out the short positions. (A simplified sketch after the methodology list below works through this arithmetic.)
The token distribution methodology will be:
Due to the net amount of BTC committed in margin positions at the time of the fork, the above methodology may result in Bitfinex seeing a surplus or deficit of BCH. As such, we will be resolving this discrepancy in the form of a socialized distribution coefficient. For example, currently, there are more longs than shorts on the platform, causing a distribution coefficient of ~1.091 (Meaning that for each qualifying BTC a user will receive 1.091 BCH). The actual coefficient will be calculated at the moment of the distribution.
- All BTC wallet balances will receive BCH
- Margin longs in BTC/USD and margin shorts in XXX/BTC will not receive BCH
- Margin shorts in BTC/USD and margin longs in XXX/BTC will not pay BCH
- BTC Lenders will receive BCH
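A simplified sketch of the arithmetic above, using hypothetical balances rather than Bitfinex's actual figures, shows both how a coefficient above 1 can arise when longs outweigh shorts and why the rules could be gamed by pairing a spot purchase with an equal short:

```python
# Hypothetical balances, not Bitfinex's real figures; a simplified model of the rules.
wallet_btc = 100_000.0        # BTC in user wallets: credited with BCH
lender_btc = 20_000.0         # BTC lent out: lenders also credited
margin_long_btc = 30_000.0    # BTC/USD longs: credited with nothing
margin_short_btc = 20_000.0   # BTC/USD shorts: charged nothing

# The exchange's own on-chain BTC (and so the BCH it receives at the fork) reflects
# the net margin position, while the qualifying balances below do not.
bch_received = wallet_btc + lender_btc + (margin_long_btc - margin_short_btc)
qualifying_btc = wallet_btc + lender_btc
coefficient = bch_received / qualifying_btc
print(round(coefficient, 3))        # > 1 when longs exceed shorts (1.083 here)

# The hedge described earlier: buy 10 BTC spot, short 10 BTC on margin.
# Price exposure nets to zero, yet the wallet balance still earns BCH.
print(round(10 * coefficient, 2))   # BCH collected with no BTC price risk
```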