Continuing the 21st century's trend toward practicality, paper money is gradually disappearing from our lives, giving way to more practical digital storage. However, the digitized banking we now use every day is still far from perfect. For a start, it is completely controlled by third parties: no one truly owns the numbers they see on the screen; control belongs entirely to intermediaries such as banks.
Banks create money out of thin air, and credit is a prime example. Money is no longer printed when someone takes out an overdraft or a mortgage: it is simply created out of nothing. On top of that, banks charge disproportionately high fees for services that are outdated and impractical today.
It is impractical, for example, to pay a commission to spend your own money abroad, just as it is impractical to wait several days for a small transfer to a relative to clear. None of this makes sense in the interconnected, instantaneous world we live in today.
The monetary system has thus ceased to be practical, and it is being replaced by a higher form of value storage: a faster, safer system that eliminates expensive operations and puts control in the hands of the individual.
The money in your bank account can already be considered a virtual currency: it has no physical form and exists only in the bank's ledger. If the bank loses that ledger, your money simply disappears. It is just numbers you see on a screen, stored on the hard drives of bank servers.
You open an app and believe you have money, but those are just bytes in a computer system. Today's global payment infrastructure moves money from one payment system to another through a series of internal deposit transfers between financial institutions. Because these transfers occur across different systems with little coordination, settlement is slow, often taking 3-5 days, and traps liquidity along the way.
How do payments work?

When you transfer money, say from your bank card to a friend's card, you see an instant transfer: numbers moving from you to the recipient. For the user the transfer appears instantaneous, but the actual exchange of obligations between the participants takes 3-7 days; the user does not know this and rarely thinks about it.
When you pay at a supermarket or any other point of sale, information from the POS terminal is sent to the acquiring bank; the acquiring bank sends a request through the payment system (Visa or Mastercard), which forwards it to your bank, and your bank confirms the operation. At this point no funds are actually debited: they are temporarily held, and the actual write-off takes place within a few days, with a maximum processing time of up to 30 days.
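The hold-then-capture flow described above can be sketched in code. This is a simplified model, not a real payment API: the class, method names, and amounts are all hypothetical; only the "held now, debited later" behavior comes from the text.

```python
# Hypothetical sketch of the authorize-then-capture flow described above.
# Class and method names are illustrative, not a real payment API; the
# "up to 30 days" capture window comes from the text.

class IssuerBank:
    def __init__(self, balance):
        self.balance = balance     # what the cardholder sees
        self.holds = {}            # authorized but not yet captured

    def authorize(self, hold_id, amount):
        """Step 1: the issuer approves the payment and places a hold."""
        available = self.balance - sum(self.holds.values())
        if available < amount:
            return False
        self.holds[hold_id] = amount
        return True

    def capture(self, hold_id):
        """Step 2: the actual write-off, typically days later."""
        amount = self.holds.pop(hold_id)
        self.balance -= amount
        return amount

issuer = IssuerBank(balance=100.0)
assert issuer.authorize("pos-123", 40.0)   # looks instant to the user
assert issuer.balance == 100.0             # but nothing is debited yet
issuer.capture("pos-123")                  # settlement happens later
assert issuer.balance == 60.0
```

The point of the split is visible in the assertions: the balance the user sees is unchanged at authorization time, and only the capture step actually moves money.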
Currency transactions and payments abroad

You may have noticed that after a transaction in a currency different from your account's (yen or dirhams, say, or any purchase abroad), the amount finally charged can differ from the amount shown immediately after payment.
Why does this happen?

As soon as you make a transaction with your bank card, the local bank passes the information to the payment system (Visa or Mastercard), and the payment system converts the currency used into the billing currency.
The billing currency is the currency your issuing bank uses to settle with the payment system. For the US the billing currency is the dollar; in Europe it is the euro. It can also differ depending on the issuing bank, the bank that issued your debit card. For example, some banks use the euro as the billing currency for Mastercard payments in the United States, which leads to additional costs when converting euros into dollars.
If the payment is in yet another currency, the payment scheme becomes more complicated and, accordingly, more expensive. The rate for converting one settlement currency into another is set by the payment system, Visa or Mastercard.
If the currency of your bank card matches the currency of the payment, no additional operations are needed. For example, if you have a dollar card and pay in dollars in the United States, the payment goes through directly; but if you pay with that dollar card in Europe, your bank converts the amount at its own exchange rate, which adds cost. There are exceptions (some European banks can settle in dollars), but that is the exception rather than the rule.
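The cost of each extra conversion step can be illustrated numerically. The exchange rate and fee percentage below are invented for demonstration; only the single-vs-double conversion mechanics come from the text.

```python
# Illustrative only: the exchange rate and fee percentage below are
# assumptions, invented to show how each extra conversion step adds cost.

def convert(amount, rate, fee_pct):
    """One conversion step: apply the exchange rate, then the fee."""
    return amount * rate * (1 - fee_pct / 100)

payment_usd = 100.0
eur_per_usd = 0.92          # assumed rate, not a live quote

# One conversion: a 100 USD purchase settled straight into EUR.
single = convert(payment_usd, eur_per_usd, fee_pct=1.0)

# Two conversions: the purchase is first converted into the billing
# currency (fee charged), then into the card currency (fee charged again).
double = convert(convert(payment_usd, 1.0, fee_pct=1.0),
                 eur_per_usd, fee_pct=1.0)

print(f"single conversion: {single:.2f} EUR")   # 91.08 EUR
print(f"double conversion: {double:.2f} EUR")   # 90.17 EUR: the fee is paid twice
```

With these assumed numbers the cardholder loses roughly an extra euro per hundred to the second conversion, which is exactly the effect the article attributes to mismatched billing currencies.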
And if you pay for purchases in China, for example, with a bank card denominated in euros, double conversion is inevitable.
Payment in dollars is thus close to universal worldwide, with the exception of the European Union. The dollar is a global currency and is therefore often used as the anchor in international settlements.
So we now see that, because of differences between the account currency and the billing currency of the Visa or Mastercard payment system, additional conversions may occur, leading to additional bank fees. As a result, the actual payment amount differs from the amount debited from your card.
On top of the payment system's conversion fee and your bank's own conversion fee, some banks charge an extra fee for conducting a cross-border transaction.
Where do we lose money when making debit card payments?
Cryptocurrency exchanges

Now back to the numbers on the screen: this issue affects not only banks but also centralized cryptocurrency exchanges.
There are many centralized services, both websites and apps, regardless of what they call themselves: a Bitcoin wallet or a Bitcoin exchange. When you add funds to an account in such a wallet, the funds are stored on the developer company's side. In simple terms, all your funds sit in wallets controlled by the system's creators.
If you use a centralized app, you risk losing your funds. Calling an application a cryptocurrency service does not mean it honors cryptocurrency's main principle: decentralization.
In other words, using systems with a central authority, especially in the cryptocurrency market, increases risk, so we recommend storing currency in decentralized systems to keep risk to a minimum.
Decentralization is the process of redistributing or dispersing functions, powers, people, or things away from a central location or governing authority. Centralization is the condition in which the right to make the most important decisions stays with the highest levels of management.
Peer-to-peer payment systems

The opposite of this, and the standard for security and independence, is the peer-to-peer payment system. Using an application-level network protocol, clients running on many computers connect to form a peer-to-peer network.
Such a network has no dedicated servers: each node acts as both a client and a server. In contrast to a client-server architecture, this organization keeps the network operational with any number and any combination of available nodes. All nodes are equal members of the network.
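The "every node is both client and server" idea can be sketched with plain sockets. This is a toy illustration, not the Tkeycoin protocol: a single process listens on an arbitrary localhost port like a server and also dials out like a client, just as a real peer would do toward other peers.

```python
import socket
import threading

# Toy illustration of a peer that is both server and client. This is not
# the Tkeycoin wire protocol, just plain TCP on localhost; the port number
# is arbitrary.

ready = threading.Event()

def serve(port, inbox):
    """Server role: accept one connection and record what it receives."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    ready.set()                       # signal that the listening side is up
    conn, _ = srv.accept()
    inbox.append(conn.recv(1024))
    conn.close()
    srv.close()

def dial(port, message):
    """Client role: connect to another peer and send it a message."""
    cli = socket.socket()
    cli.connect(("127.0.0.1", port))
    cli.sendall(message)
    cli.close()

inbox = []
listener = threading.Thread(target=serve, args=(19555, inbox))
listener.start()                      # this process listens like a server...
ready.wait()                          # ...waits until it is bound...
dial(19555, b"hello from a peer")     # ...and dials out like a client
listener.join()
assert inbox == [b"hello from a peer"]
```

A real peer-to-peer node generalizes this: many inbound and outbound connections at once, so the network survives any combination of nodes going away.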
Tkeycoin is a decentralized peer-to-peer payment system built on p2p principles and the concept of electronic cash. P2P technology is a fairer means of settlement between users and companies around the world. Modern payment systems are imperfect and can depend on the will of high-ranking officials.
The main goal of Tkeycoin is to create universal products that make financial transactions more accessible, affordable, and secure.
What do decentralized systems protect against?

With decentralized tools, such as the local Tkeycoin wallet or the multi-currency TKEYSPACE blockchain wallet, your funds belong only to you, and only you can use them. This eliminates the risk of a third party's bankruptcy, and the decentralized architecture also protects against natural disasters: since there is no central server for a disaster to destroy, the system keeps working even with just two nodes.
Beyond force-majeure situations, you protect your funds from theft and from sanctions by third parties, which matters greatly in our time. A Tkeycoin owner needs no bank branches, no additional verification, and no permission to use, transfer, or even transport Tkeycoin. You could carry $1 million worth of Tkeycoin in your pocket and, even in theory, never know any trouble.
It is also convenient and safe to store even multibillion-dollar capital in Tkeycoin. Imagine you have a great deal of money and need a safe place to keep it. Where do you turn? A Swiss bank, of course; yet it can easily freeze your accounts, and you can easily lose your savings. In recent years many banks have been actively fighting gray non-cash funds (including offshore ones), and every month more legal proceedings are opened on these grounds.
The fact is that serious money, for the most part, has a gray tinge, and only a tiny fraction of those millions and billions is clean in the eyes of the law. That is why their owners are often summoned to court, pressured, forced to leave the country, and so on. If your money is stored in Tkeycoin, you will not face such pressure and will avoid the lion's share of the troubles that usually accompany accounts with many zeros.
With peer-to-peer systems, no bank manager will call demanding documents, and no fraudster will ask for your card number and a confirmation SMS. These simply do not exist here: wallets are encrypted, and using different addresses guarantees privacy.
As for transfer fees: there are no Visa or Mastercard payment systems involved, and none of the additional fees discussed above.
How are payments made in the Tkeycoin peer-to-peer payment system?
As soon as you sign a transaction, it is sent to the blockchain, and miners confirm it for a symbolic fee. For example, with TKEY at $1, the transfer fee would be 0.00001970 TKEY or 0.00000174 TKEY:
0.00001970 TKEY = $0.00001970
0.00000174 TKEY = $0.00000174

The fees, in other words, are almost zero. In Europe, by contrast, you will pay $15-20 on average for a small bank transfer.
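The arithmetic above is trivial but worth making explicit. The $1 TKEY price is the article's own assumption, and the $15 wire fee is the low end of the range it cites.

```python
# Reproducing the fee arithmetic above. The article assumes TKEY trades
# at $1, so the fee in USD equals the fee in TKEY.

tkey_price_usd = 1.00
fees_tkey = [0.00001970, 0.00000174]

for fee in fees_tkey:
    print(f"{fee:.8f} TKEY = ${fee * tkey_price_usd:.8f}")

# Compare with the $15-20 the article cites for a small European bank wire:
bank_wire_usd = 15.0    # low end of the cited range
ratio = bank_wire_usd / (fees_tkey[0] * tkey_price_usd)
print(f"a $15 wire costs roughly {ratio:,.0f} times more")
```

Even against the larger of the two quoted fees, a conventional wire is several hundred thousand times more expensive, which is the comparison the next paragraph makes with BTC as well.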
For comparison, sending $1 million in BTC today, you would pay a fee of roughly $3-8. Think about it: $1 million, without restrictions, risks, or sanctions, and, most importantly, available the same day, for an average transfer fee of about $5.
Transactions in the Tkeycoin blockchain

Now let's look at how a transaction moves through the blockchain. Once you send a transaction, it becomes visible to the recipient. The transfer happens instantly, and the user sees not "numbers on a screen" but real funds, i.e. cryptocurrency. This is convenient whenever the recipient needs to be sure the payment has arrived.
In the full node there is a choice of confirmation depth: the number of blocks after which the received cryptocurrency can be spent. When sending, you can select the number of confirmations:
• 2 blocks ≈ 10 minutes
• 4 blocks ≈ 40 minutes
• 6 blocks ≈ 60 minutes
• 12 blocks ≈ 120 minutes
• 24 blocks ≈ 4 hours
• 48 blocks ≈ 8 hours
• 144 blocks ≈ 24 hours
• 504 blocks ≈ 3 days
• 1008 blocks ≈ 7 days

As you can see, you can even require a week of confirmations if necessary. The recommended minimum is 3 blocks; the full node (local wallet) defaults to 6. This number of confirmations ensures that your block will not be forged and will be accepted by the network.
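The waits above can also be estimated directly from the 6-10 minute block interval the article states for the TKEY network. Real waits vary with network conditions, so this is only an expectation, not a guarantee, and the helper below is illustrative.

```python
# Rough expected wait per confirmation count, using the 6-10 minute block
# interval stated for the TKEY network. Real waits vary with network
# conditions; this is an estimate, not a guarantee.

BLOCK_TIME_MINUTES = (6, 10)     # (min, max) minutes per block, per the article

def wait_range(confirmations):
    """Expected (min, max) wait in minutes for N confirmations."""
    lo, hi = BLOCK_TIME_MINUTES
    return confirmations * lo, confirmations * hi

for n in (3, 6, 144):            # recommended minimum, wallet default, ~1 day
    lo, hi = wait_range(n)
    print(f"{n:>4} confirmations: about {lo}-{hi} minutes")
```

Note that the default of 6 confirmations lands at 36-60 minutes, consistent with the "6 blocks ≈ 60 minutes" entry in the list above.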
Each new transaction accepted by the network goes into the mempool, where it waits for miners to confirm it. When a miner takes a transaction to include it in the next block, the transaction automatically receives its first confirmation.
Generating blocks in the TKEY network

A block in the TKEY network is generated every 6-10 minutes; the network automatically adjusts the difficulty and thus the block interval. A block may contain thousands of transactions or just one.
Transactions are faster in the TKEYSPACE app because we have already enabled new algorithms there, making it currently the fastest and most convenient way to exchange various digital currencies.
Even so, running a full node remains one of the safest ways to store and send the Tkeycoin cryptocurrency. Importantly, a full node stores a complete copy of the blockchain, which benefits the network and protects against forged information.
The more popular the project becomes, the heavier the load on the network. For example, 10,000 transactions in one block may be processed quickly while another 10-20 transactions in a different block hang for longer, so temporary "pits" can appear. To deal with them, we are working on additional chains: separate chains created for cross-transactions, which ensure fast payments under heavy load.
For the global system this means delivery anywhere in the world in 6-10 minutes, and over cross-chains in 10 seconds. Compared with the global payment system, which settles cross-border payments in 3-5 days, this is a huge advantage. Add liquidity to this, and we get a near-perfect payment system.
Also bear in mind that if you send a transaction without first syncing with the network, it may hang in the local memory pool, and you will have to take several steps to resolve the situation. Syncing matters: just as an online-banking payment fails when your internet connection drops, because it never reaches the processing side, an unsynced transaction never reaches the network.
If you are currently experiencing transaction delays, this is due to the transition from CPU mining to GPU mining; as soon as miners switch to the new mining methods, block confirmation will again be consistently fast.
In conclusion: blockchain is a new technology, and many of its terms, concepts, and mechanics are still hard for many people to grasp, which is normal for an innovation.
In many countries "cryptocurrency" and "blockchain" are treated as synonyms, and few want to understand the reality: most people assume that anything involving a blockchain must be about trading on a cryptocurrency exchange. Few think about the real usefulness of the solutions that will become commonplace for us in the future.
The internet banking system, for example, dates back to the 1980s, when the Home Banking system was created in the United States. It allowed depositors to check their accounts by connecting to the bank's computer over the phone line. Later, as the internet and its technologies developed, banks began introducing systems that let depositors access account information online. The first funds-transfer service was introduced in the United States in 1994 by the Stanford Federal Credit Union, and in 1995 the first virtual bank, Security First Network Bank, was created. To its founders' disappointment it failed, owing to strong distrust from potential customers who, at the time, did not trust such an innovation.
Only in 2001 did Bank of America become the first bank whose e-banking user base exceeded 2 million customers, about 20% of all its customers at the time. In October of that same year, Bank of America passed the mark of 3 million money transfers made through online banking, totaling more than 1 billion US dollars. Today more than 50% of the entire adult population of Western Europe and America uses e-banking services, a figure that reaches 90% among adult internet users.
Life changes, and in the bustle of everyday work we do not even notice how quickly all these processes change.
We are experiencing a technological revolution that is inevitable.
So I'm Steve Shadders, and at nChain I'm the director of solutions in engineering; specifically for Bitcoin SV I'm the technical director of the project, which means that I'm a bit less hands-on than Daniel, but I handle a lot of the liaison with the miners on the project.

Daniel:

Hi, I'm Daniel. I'm the lead developer for Bitcoin SV. As the team has grown, that means I do less actual coding myself and more organizing the team and what we're working on.

Connor: 0:03:23.07,0:04:15.98
I mean, yes, we've been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year, and Daniel a few months later. So we communicate with all of those teams, and it's not been without its challenges. It's well known that there's a lot of disagreement around it, but what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled. If we get to that point, there's not going to be much reason for the different developer teams to disagree. They might disagree on non-consensus-related stuff, but that's not the end of the world because, you know, Bitcoin Unlimited is free to implement whatever they want in the back end of Bitcoin Unlimited, and Bitcoin SV is free to do whatever they want in their back end; if they interoperate on a non-consensus level, great. If they don't, it's not such a big problem, as there will obviously be bridges between the two. So, yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.

Cory: 0:06:00.59,0:06:19.59
Sure, yeah. Our release was concentrated on stability, right, with the first release of Bitcoin SV, and that involved a large amount of additional testing - not so much at the unit-test level but at the system-test level: setting up test networks, performing tests, and making sure the software behaved as we expected, confirming the changes we made and making sure there weren't any other side effects. Because, you know, it was quite a rush to release the first version, we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that, but we're not there yet.

Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes, and the reporting is something we can read easily on our internal systems, but we need to tidy it up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet: yeah, we've got four of them. We've got Testnet One and Testnet Two - a slightly different numbering scheme from the Testnet Three that everyone's probably used to; that's just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one's set to reset every couple of days. The other one [Testnet Two] was set to post-activation so that we can test all of the consensus changes. The third one was a performance test network, which I think most people have probably heard us refer to before as the Gigablock Testnet. I get my tongue tied every time I try to say that word, so I've started calling it the performance test network, and I think we're planning on having two of those: one that we can just do our own stuff with and experiment on without having to worry about external unknown factors - other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests - and the other one (which I think might still be a work in progress, so Daniel might be able to answer that) is one where basically everyone will be able to join and they can try and mess stuff up as bad as they want.

Daniel: 0:09:45.02,0:10:20.93
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.

Connor: 0:10:18.88,0:10:44.00

That's what the Gigablock test network is. It was first set up by Bitcoin Unlimited with nChain's help, and they did some great work on it, and we wanted to revive it - bring it back and do some large-scale testing on it. It's a flexible network: at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so there are now, I think, three. We have produced some large blocks there, and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work the team's been doing for the last month or two on the improvements we need for scalability.

Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to frame where our priorities have been, in two separate stages. As Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all; we put all our focus and energy into establishing the QA process and making sure that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team - we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is the performance-related work which, as Daniel mentioned, was guided by the results of our performance testing: they fed into the tasks we started working on. That work is still in progress - for some of the items we identified the code is done and going through the QA process, but it's not quite there yet. That's basically the two-stage process we've been through so far. We have a roadmap that goes further into the future and outlines more, but primarily it's been QA first, performance second. The performance enhancements are close and on the horizon, but some of that work should be ongoing for quite some time.

Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for performance are really quite large and get right down into the base level of the software. There are two main groups of them. One group is internal to the software - to Bitcoin SV itself - improving the way it works inside. The other group interfaces it with the outside world. For one of those in particular we're working closely with another group to make a compatible change - it's not consensus-changing or anything like that - but having the same interface on multiple different implementations will be very helpful, so we're working closely with them to make improvements for scalability.

Connor: 0:14:32.60,0:15:26.45
I'm often quoted on Twitter and Reddit - I've said before that the infinite block attack is bullshit. Now, that's a statement that's easy to take out of context, but I think the 128MB limit is something about which there are probably two schools of thought. Some people think you shouldn't increase the limit to 128MB until the software can handle it; others think it's fine to do it now, so that the limit is already raised by the time the software improves and you don't run into it. Obviously we're from the latter school of thought. As I said before, we've got a bunch of performance enhancements in the pipeline. If we wait till May to increase the block size limit to 128MB, those performance enhancements will go in, but we won't be able to actually demonstrate them on mainnet. As for the infinite block attack itself, there are a number of mitigations you can put in place. Firstly, going down into a bit of the tech detail: when you send a block message, or any peer-to-peer message, there's a header which states the size of the message. If someone says they're sending you a 30MB message and it gets to 33MB while you're receiving it, then obviously something's wrong and you can drop the connection. If someone sends you a message that's 129MB and you know the block size limit is 128MB, it's kind of pointless to download that message at all. These are just some of the mitigations you can put in place. When I say the attack is bullshit, I mean it's bullshit in the sense that it's really quite trivial to prevent it from happening. I think there's a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this.
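The two mitigations Steve describes can be sketched as simple connection-handling logic. The class and field names below are illustrative, not Bitcoin SV's actual wire format; only the 30/33MB and 129MB scenarios come from the transcript.

```python
# Sketch of the mitigations described above: a P2P message header announces
# its payload size, so a node can refuse oversized messages early. The class
# and field names are illustrative, not Bitcoin SV's actual wire format.

MAX_BLOCK_SIZE = 128 * 1024 * 1024    # the 128MB limit discussed

class PeerConnection:
    def __init__(self):
        self.dropped = False
        self.announced = 0
        self.received = 0

    def on_header(self, announced_size):
        """Drop immediately if the announced size exceeds the block limit."""
        self.announced = announced_size
        if announced_size > MAX_BLOCK_SIZE:
            self.dropped = True

    def on_chunk(self, nbytes):
        """Drop if the peer sends more bytes than it announced."""
        self.received += nbytes
        if self.received > self.announced:
            self.dropped = True

peer = PeerConnection()
peer.on_header(30 * 1024 * 1024)    # peer announces a 30MB message
peer.on_chunk(33 * 1024 * 1024)     # but sends 33MB: drop the connection
assert peer.dropped

peer2 = PeerConnection()
peer2.on_header(129 * 1024 * 1024)  # announced size already over the limit
assert peer2.dropped                # pointless to download it at all
```

Both checks run before any expensive validation work, which is why the attack is "trivial to prevent" in the sense Steve means.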
One other aspect of the infinite block attack - and let's not call it the infinite block attack, let's call it the large block attack - is that it takes a lot of time to validate, which we've gotten around by having parallel pipelines for blocks to come in. So you've got a block coming in that's stuck for two hours or whatever, downloading and validating; at some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline, the problem kind of goes away.

Cory: 0:18:26.55,0:18:48.27
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and Xthin exist, so a 32MB block does not send 32MB of data in most cases - almost all cases. The concern here that I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall, and that was one of their primary concerns. We had anecdotal reports of people having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that. We tested our own links across the firewall - or rather CoinGeek's links, as they've given us access to some of their servers so that we can play around - and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of block that can sustain is pretty large. So we're looking at a couple of options. It may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically have one kind of TX vacuum on either side of the firewall that collects transactions and sends them off every one or two seconds as a single big chunk, to eliminate some of that chattiness. The other is a multiplexer that will sit and send stuff up to the peer-to-peer network on one side, send it over splitters - over multiple links - and reassemble it on the other side, so we can transit the Great Firewall without too much trouble. But getting back to the core of your question: yes, there is a theoretical limit to block size from propagation time, and that's kind of where Moore's Law comes in.
Put in faster links and you kick that can further down the road, and you just keep on putting in faster links. I don't think 128MB blocks are going to be an issue, though, with the speed of the internet that we have nowadays.

Connor: 0:21:34.99,0:22:17.84
It's interesting, that decision - we were initially planning on removing that cap altogether, and the next cap that comes into play after that (the next effective cap) is a 10,000-byte limit on the size of the script. We took a more conservative route and decided to wind that back to 500. It's interesting that we got some criticism for that, when the primary criticism leveled against us was that it's dangerous to increase that limit to unlimited - we did that because we're being conservative. We did some research into these O(n²) bugs - sorry, attacks - that people have referred to. We identified a few of them and we had a hard think about it and thought: look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything behind it in the queue has to stop and wait. So what we want to do - and this is something we've got an engineer actively working on right now - is, once that script validation code path is properly parallelized (parts of it already are), to assign a few threads for well-known transaction templates and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other lanes of the highway that are exclusively reserved for well-known script templates, and they'll just keep on passing through.
Once you've got that in place, I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is that your non-standard script pipelines get clogged up but everything else will keep ticking along. There are other mitigations for this as well - you could always put a time limit on script execution if you wanted to, and that would be something up to individual miners. Bitcoin SV's job, I think, is to provide the tools for the miners, and the miners can then choose how to make use of them: if they want to set time limits on script execution, that's a choice for them.

Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that when a node receives a transaction through the peer-to-peer network, it doesn't have to accept it - it can reject it. If it looks suspicious, the node can just say, you know, we're not going to deal with that; or if it takes more than five minutes to execute, or even more than a minute, it can abort and discard that transaction, right. The only time we can't do that is when it's already in a block, but then the node could decide to reject the block as well. All of those possibilities are there in the software.

Steve: 0:26:13.08,0:26:20.64

Yeah, and if it's in a block already it means someone else was able to validate it, so…

Cory: 0:26:21.21,0:26:43.60
Well, I mean, one of the most significant things is that, other than two which are minor variants of DUP and MUL, they represent almost the complete set of original opcodes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why we're putting OP_MUL back in if we're planning on changing the arithmetic operations to big number operations instead of the 32-bit limit they're currently subject to. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We've got add, divide, subtract, modulo - it's odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine - or rather, not completing it, but putting it back the way it was meant to be.Connor 0:28:20.42,0:29:22.62
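For context on the 32-bit limit mentioned here: script arithmetic operates on numbers serialized in a compact little-endian, sign-magnitude form, with inputs currently capped at 4 bytes. A rough Python sketch of that well-known encoding (illustrative only, not consensus code):

```python
def encode_num(n: int) -> bytes:
    """Encode an integer the way Bitcoin script numbers work:
    little-endian, sign-magnitude, minimal length (sketch)."""
    if n == 0:
        return b""                  # zero is the empty byte string
    neg = n < 0
    mag = abs(n)
    out = bytearray()
    while mag:
        out.append(mag & 0xFF)      # low byte first: little-endian
        mag >>= 8
    if out[-1] & 0x80:              # top bit already used: add sign byte
        out.append(0x80 if neg else 0x00)
    elif neg:
        out[-1] |= 0x80             # fold the sign into the top bit
    return bytes(out)

def decode_num(b: bytes) -> int:
    """Inverse of encode_num."""
    if not b:
        return 0
    mag = int.from_bytes(b, "little")
    if b[-1] & 0x80:                # sign bit set: negative magnitude
        mag -= 0x80 << (8 * (len(b) - 1))
        return -mag
    return mag

assert encode_num(1) == b"\x01"
assert encode_num(-1) == b"\x81"
assert decode_num(encode_num(255)) == 255   # 255 encodes as b"\xff\x00"
```

Lifting the 32-bit cap would mean allowing longer encodings as arithmetic inputs, which is exactly why changing the behaviour of existing opcodes needs care.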
Yeah, there are two parts there - the big number one, and LSHIFT being a logical shift instead of an arithmetic one. When we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. So the new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic-based shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. That's because of the ordering of the bytes in the arithmetic values, the values that represent numbers. They're little-endian, which means the bytes are swapped around relative to what many other systems - what I'd consider normal - use, i.e. big-endian. And if you start shifting that properly as a number, then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement a bitwise shift with an arithmetic one. So we chose to make them bitwise operators - that's what we proposed.Steve: 0:31:10.57,0:31:51.51
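The asymmetry being described can be shown with a small sketch. The padding and length handling here are my own simplifications, not the exact consensus semantics, but they illustrate why an arithmetic shift of a little-endian number and a plain bitwise shift of the same bytes diverge:

```python
def lshift_bits(data: bytes, n: int) -> bytes:
    """Bitwise left shift over a byte string, treated as a plain bit
    sequence (simplified sketch of bitwise-LSHIFT semantics).
    Bits shifted off the front are discarded; length is preserved."""
    width = len(data) * 8
    value = (int.from_bytes(data, "big") << n) & ((1 << width) - 1)
    return value.to_bytes(len(data), "big")

def lshift_num_le(data: bytes, n: int) -> bytes:
    """Arithmetic-style shift of a little-endian number, built on top
    of the bitwise shift by reversing byte order first - the direction
    that works, per the discussion above. Illustrative only."""
    return lshift_bits(data[::-1], n)[::-1]

# Shifting the little-endian number 128 (b"\x80\x00") left by one bit
# as a *number* gives 256:
assert lshift_num_le(b"\x80\x00", 1) == b"\x00\x01"   # LE encoding of 256
# The plain bitwise shift of the same bytes loses that carry, because
# the low-order byte sits first in memory:
assert lshift_bits(b"\x80\x00", 1) == b"\x00\x00"
```

Going the other way - recovering the bitwise result from an arithmetic shift - has no such clean construction, which matches the point made here.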
That was essentially a decision that was made in May - or rather, a consequence of decisions that were made in May. In May we reintroduced OP_AND, OP_OR, and OP_XOR, and the decision to replace three different string operators with OP_SPLIT was also made then. So that was not a decision that we made unilaterally; it was a decision made collectively with all of the BCH developers - well, not all of them were actually in all of the meetings, but they were all invited.Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, I think - a single operator that multiplies the value by two, right - but it was pointed out that that can very easily be achieved by just pushing two and multiplying, instead of having a separate operator for it. So we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum, yeah.Steve: 0:32:17.59,0:33:47.20
There was an appetite for keeping the operators minimal. I mean, the idea to replace OP_SUBSTR, OP_LEFT and OP_RIGHT with an OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry - big-endian isn't really applicable, they're just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have been nonsensical and very difficult for anyone to work with. I mean, it's a bit like P2SH - it wasn't a part of the original Satoshi protocol, but once some things are done they're done, and if you want to make forward progress you've got to work within the framework that exists.Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones it gets really complicated, because then you can't change the behavior of the existing opcodes - and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number operations without seriously looking at what scripts might be out there and the impact of that change on those existing scripts, right. The other point is you don't know what scripts are out there because of P2SH - there could be scripts whose content you don't know, and you don't know what effect changing the behavior of these operators would have on them. The big number thing is tricky, so another option might be - yeah, I don't know what the options are; it needs some serious thought.Steve: 0:34:43.27,0:35:24.23
That's something we've reached out to the other implementation teams about - we'd actually really like their input on the best way to go about restoring big number operations. It has to be done extremely carefully, and I don't know if we'll get there by May next year or when, but we're certainly willing to put a lot of resources into it, and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done, and getting it done safely.Connor: 0:35:19.30,0:35:57.49
I'd actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some threads dedicated to well-known script templates - when you say the words "well-known script template", there's already a check in Bitcoin that kind of tells you whether something is well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So standard transactions as a concept is meaningful to an arbitrary degree, I suppose, but I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I've had with CoinGeek they're quite keen on making their miners accept, at least initially, a wider variety of transactions.Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline the base use case of cash payments by recognizing and prioritizing them. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.Cory: 0:38:06.24,0:38:35.46
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork change - I think on August 16th or 17th, something like that - and their client as well, and it included CTOR and it included DSV. Now, for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed - I mean, CTOR you could maybe reverse at a later date, but DSV, once someone's put a P2SH transaction - or even a non-P2SH transaction - into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but what we're doing - I mean, Bitcoin ABC already published their spec for May and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well, the fact is that we can't. Come November, you know, it's a bit like Segwit - once Segwit was in, yes, you could arguably get it out by spending everyone's anyone-can-spend transactions, but in reality it was never going to be that easy, and it would cause a lot of economic disruption. So yeah, that's it - we're putting our changes in because it's not going to make a difference either way in terms of whether there's going to be a divergence of consensus rules. There's going to be a divergence whatever our changes are. Our changes are not controversial at all.Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with a no-change release, right, but the November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.Connor: 0:41:01.55,0:41:35.61
Can I say one or two things about this - there are different ways to look at it, right. I'm an engineer - my specialization is software, so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise I kind of agree with the ones who say it's a subsidy - it looks very much like one to me - but yeah, that's not my area. What I can talk about is the software. Adding DSV adds really quite a lot of complexity to the code, right - it's a big change to add that. And what are we going to do - every time someone comes up with an idea, add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like, how big is this client going to become - how big is this node? Is it going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. With DSV, my main consideration from the beginning was that if you can implement it in script, you should, because that way it keeps the node software simple, it keeps it stable, and it's easier to test that it works properly and correctly. It's almost like adding (?) code to a microprocessor - why would you do that if you can already implement it in the script that is there.Steve: 0:43:36.16,0:46:09.71
It's actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - it's achieved with a combination of SPLIT, SWAP and DROP opcodes. So at the really primitive script level we've got this philosophy of let's keep it minimal, and at this other (?) level the philosophy is let's just add a new opcode for every primitive function, and Daniel's right - it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument of why have a scripting language at all? Why not just hard-code all of these functions in, one at a time? You know, pay-to-public-key-hash is a well-known construct (?) - you needn't bother executing a script at all - but once we've done that we take away all of the flexibility for people to innovate. So it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people feel they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction, so a gigabyte script, then you could do any kind of crypto you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller, so a Rabin signature script shrinks from 100MB to a couple of hundred bytes.Daniel: 0:46:06.77,0:47:36.65
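The OP_SUBSTR composition mentioned above can be sketched with a toy stack machine. The opcode names mirror script, but this is my own illustration of the idea, not consensus code:

```python
# Minimal stack primitives, modelled on script semantics (sketch).

def op_split(stack):
    n = stack.pop()                 # split position (top of stack)
    data = stack.pop()
    stack += [data[:n], data[n:]]   # leaves the two halves on the stack

def op_swap(stack):
    stack[-1], stack[-2] = stack[-2], stack[-1]

def op_drop(stack):
    stack.pop()

def substr(data: bytes, begin: int, size: int) -> bytes:
    """SUBSTR(data, begin, size) composed purely from the primitives,
    per the SPLIT/SWAP/DROP combination described above."""
    stack = [data, begin]
    op_split(stack)                 # -> [head, tail]
    op_swap(stack)                  # -> [tail, head]
    op_drop(stack)                  # discard the head
    stack.append(size)
    op_split(stack)                 # -> [middle, rest]
    op_drop(stack)                  # discard the rest
    return stack.pop()

assert substr(b"bitcoin-script", 8, 6) == b"script"
```

OP_LEFT and OP_RIGHT fall out even more simply: one SPLIT followed by dropping the unwanted half.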
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it's really pretty impressive how much you can achieve within script. Bitcoin was designed - was released originally - with script. I mean, it didn't have to be: instead of having a transaction with script you could have accounts, and you could say transfer, you know, so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly, if you start really digging into what it can do. Yeah, it's really amazing what you can do with script. I'm really looking forward to seeing some very interesting applications built on that. I mean, Awemany's zero-conf script was really interesting, right. It relies on DSV, which is a problem (and there are some other things I don't like about it), but him diving in and using script to solve this problem was really cool - it was really good to see that.Steve: 0:47:32.78,0:48:16.44
I asked a couple of people in our research team who have been working on the Rabin signature stuff a question this morning, actually - I wasn't sure where they were up to with it, but they're working on a proof of concept (which I believe is pretty close to done): a Rabin signature script. It will use smaller signatures so that it can fit within the current limits, but it will be effectively the same algorithm (as DSV). I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini Rabin signature).Cory: 0:48:13.61,0:48:57.63
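For readers unfamiliar with Rabin signatures: verification needs only a modular multiplication, which is why they map onto script arithmetic at all. A toy sketch with insecure demo-sized primes follows - this is my own illustration of the textbook scheme, not the team's proof of concept (requires Python 3.8+ for three-argument modular inverse via `pow`):

```python
import hashlib

# Toy Rabin signature (textbook scheme, demo-sized and NOT secure):
# n = p*q with p, q ≡ 3 (mod 4). Signing searches for a padding U so
# that H(message || U) is a quadratic residue mod n; verification is
# just: s*s mod n == H(message || U) mod n.

p, q = 7027, 7043            # small demo primes, both ≡ 3 (mod 4)
n = p * q

def _hash(message: bytes, pad: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message + pad).digest(), "big") % n

def sign(message: bytes):
    for i in range(10000):             # search for a signable padding
        pad = i.to_bytes(4, "big")
        h = _hash(message, pad)
        # candidate square roots mod p and mod q (valid only if h is
        # a quadratic residue mod each prime - hence the check below)
        sp = pow(h, (p + 1) // 4, p)
        sq = pow(h, (q + 1) // 4, q)
        if sp * sp % p == h % p and sq * sq % q == h % q:
            # combine the two roots into a root mod n via the CRT
            s = (sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % n
            return s, pad
    raise RuntimeError("no usable padding found")

def verify(message: bytes, s: int, pad: bytes) -> bool:
    return s * s % n == _hash(message, pad)

sig, pad = sign(b"hello")
assert verify(b"hello", sig, pad)
assert not verify(b"hell0", sig, pad)
```

The asymmetry is the attraction: signing needs the factorization of n, but verifying is one multiply, one mod and one hash - exactly the primitives a script engine with multiplication can express.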
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place, and in my opinion it's the cryptographic primitive functions. For example, CHECKSIG uses ECDSA with a specific elliptic curve, and HASH256 uses SHA-256. At some point in the future those are going to no longer be as secure as we would like them to be, and we'll replace them with different hash functions and verification functions, but I think that's a long way down the track.Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is that, with the full scripting language, some solution is implemented and we discover that it's really useful, and over a period measured in years, not days, we find a lot of transactions are using this feature. Then maybe we should look at introducing an opcode to optimize it - but optimizing before we even know if it's going to be useful, yeah, that's the wrong approach.Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner's point of view, it's whether it makes more sense for them to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? So ultimately these are going to be miners' decisions, not developer decisions. Developers can of course offer their input - I wouldn't expect every miner to be an expert on script - but as we're already seeing, miners are actually starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have some really bright people on their staff who question and challenge all of the changes, study them and produce their own reports. We've been lucky to be able to talk to some of those people and have some really fascinating technical discussions with them.