Posted to the HUSH slack, as a prelude to the following.

Here's an idea for a cryptocoin that builds on the timestamp idea I posted a few days ago (again, one that does not necessarily use the stars).

People get more coin by having more "friends" (actually, people you know to be distinct individuals). It might be a slightly exponential function, to discourage multiple identities. Your individual coin value is worth more to your "local" friends than to "distant" friends. The distance is shorter if you have a larger number of parallel connections through unique routes. A coin between A and D, when they are connected through friends like A->B->C->D and A->E->F->D, is worth more than if the E in the 2nd route were B or C. But if E is not there (A->F->D), then the distance is shorter. More coin is generated as the network grows.

Each transaction is recorded, stored, timestamped, and signed by you and your friends, and maybe your friends' friends. Maybe they are the only ones who can see it unencrypted, or you get the choice of a privacy level. A higher privacy requirement means people who do not actually know you will trust your coin less. Maybe password recovery and "2-factor" security can be implemented by your closest friends.

Each transaction has a description of the item bought/sold so that the network can be searched for products. There is also a review and rating field for both buyer and seller. For every positive review, you must have 1 negative review: you can't give everyone 5 stars like on eBay and like high-ranking reviewers on Amazon (positive reviewers get better ranking because people like them more, not because the reviews are honest).

This is a P2P trust system, but there must be a way to do it so that it is not easily tricked, which is the usual complaint, and there is a privacy issue. But look at the benefits: truly P2P. Since it does not use a single blockchain, it is vastly faster and more secure than the bitcoin blockchain. I know nothing about programming a blockchain, let alone understanding one if I created a clone. But I could program this.
And if I can program it, then it is definitive enough to be hard-coded by someone more clever, needing changes only as often as the underlying crypto standards change (about once every 2 decades?).
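To make the "distance" idea above concrete, here is a minimal sketch (all names are mine, and the greedy path removal is only an approximation; exact counting of vertex-disjoint routes would need max-flow) that counts parallel unique routes between two people and the hop length of the shortest one:

```python
from collections import deque

def disjoint_routes(graph, src, dst):
    """Greedily count vertex-disjoint paths from src to dst and report
    the hop count of the shortest route found.  A sketch: each BFS finds
    a shortest remaining path, then its interior people are removed so
    the next route must be genuinely parallel."""
    graph = {k: set(v) for k, v in graph.items()}  # work on a copy
    count, shortest = 0, None
    while True:
        prev = {src: None}
        q = deque([src])
        while q:                       # BFS for one remaining path
            u = q.popleft()
            if u == dst:
                break
            for v in graph.get(u, ()):
                if v not in prev:
                    prev[v] = u
                    q.append(v)
        if dst not in prev:            # no route left
            return count, shortest
        path = []                      # recover the path dst -> src
        node = dst
        while node is not None:
            path.append(node)
            node = prev[node]
        path.reverse()
        count += 1
        if shortest is None:
            shortest = len(path) - 1   # hops; BFS finds shortest first
        for interior in path[1:-1]:    # remove interior friends only
            graph.pop(interior, None)
            for nbrs in graph.values():
                nbrs.discard(interior)

# The two routes from the post: A->B->C->D and A->E->F->D
g = {'A': {'B', 'E'}, 'B': {'C'}, 'C': {'D'}, 'E': {'F'}, 'F': {'D'}}
routes, hops = disjoint_routes(g, 'A', 'D')  # 2 unique routes, 3 hops
```

Dropping E so that the second route is A->F->D keeps two routes but shortens the distance to 2 hops, matching the post's example.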

Obviously the intent is to replace fiat, Amazon, and eBay, but it should also replace FB. A transaction could be a payment you make to friends if you want them to look at a photo; the photo would be part of the transaction data. Since only you and your friends store the data, there are no transaction fees other than the cost of your computing devices. Your friends have to like it in order for you to get your money back. LOL, right? But it's definitely needed. We need to step back and be able to generalize the concepts of reviews, likes, votes, and products into the concept of a coin. You have a limited amount dictated by the size of the network. The network of friends decides how much you get. They decide if you should get more or less relative power than other friends.

It would not require trust in the way you're thinking. Your reputation, via the history of your transactions, would enable people to trust you. It's like a brand name, which is another reason for having only 1 identity. Encouraging 1 identity is key to preventing people from creating false identities with a bot in order to get more coin. The trick and difficulty is preventing false identities that would scam the community.

Everyone should have a motivation to link only to real, known friends. That's the trick and the difficulty. I'm using "friend" very loosely; it just needs to be a known person. For example, you and I could link to David Mercer and Zooko, but we can't vouch for each other. That's because David and Zooko have built up more real social credibility through many years of good work. They have sacrificed some privacy in order to get it. Satoshi could get enormous credibility through various provable verifications without even giving up privacy, so it's not a given that privacy must be sacrificed. If possible, the system should not give people an advantage merely because they are taking a risk with their personal safety.

The system should enable individuals to be safer, stronger, etc., while at the same time advancing those who advance the system, so that those who help others the most are helped by others the most: "virtuous feedback". This is evolution, except it should not be forgotten that "help others the most" can mean "help 2 others who have 4 times the wealth to pay you instead of 4 others with nominal wealth". So it's not necessarily charitably socialistic, as people often want for potentially very good reasons, but potentially brutally capitalistic, like evolution.

It does not have to be a social network, but it does seem likeable, social people would immediately get more wealth. It's a transaction + reputation + existence network. Your coin quantity is based on the reviews others give you for past transactions (social or financial), plus the mere fact that you were able to engage in economic or social activity with others (a measure of the probability of your existence). There have been coins based on trust networks, but I have not looked into them. This is just the only way I can think of to solve the big issues. If the algorithm can be done in a simple way, that is evidence to me that it is the correct way to go.

Coins give legal control of other people's time and assets. If you and I are not popular in at least a business sense, where people give real money instead of "smiles" and "likes" like your brother, why should society relinquish coin (control) to us? The "smiles" might be in a different category than the coin; I mean you may not be able to buy and sell likes like coin. Likes might need to be like "votes": you would get so many "likes" per day to "vote" on your friends, rather than my previous description of people needing to be "liked" in order to give likes, which is just a constant-quantity coin. Or maybe likes and coin could each be both: everyone gets so many likes and coins per day, but they are also able to buy/sell/accumulate them. I have not searched for or thought through a theoretical foundation for determining which of these options is best.

Another idea is that everyone would issue their own coin via promises. This is how most money is created. "Coin" implies a tangible asset with inherent value, but paper currency is usually a debt instrument: "I will buy X from you with a promise to pay you back with Y," where Y is a standard measure of value like 1 hour of a laborer's time plus a basket of commodities.
Government issues fiat with the promise that it buys you the time and effort of its taxpayers, because the government demands that taxes be paid in that fiat. This is the core claim of modern monetary theory.

So China sells us stuff for dollars, and those dollars give China control of U.S. taxpayers, provided our government keeps its implicit promise not to inflate the fiat to an unexpectedly low value too quickly, which would be a default on its debt. So your "financially popular" existence, proven by past transactions of fulfilling your debt promises, gives you the ability to make larger and larger debt promises. How or whether social likes/votes should interact with that, I do not yet know. But I believe it should be like democratic capitalism. The sole purpose of votes is to prevent the concentration of wealth, distributing power more evenly. This made commodity prices lower and gave more mouths to feed, which enabled big armies, so it overthrew kings, lords, and religions. Then machines enabled a small educated European and then U.S. population to gain control of the world.

## Wednesday, July 19, 2017

## Saturday, July 15, 2017

### Best difficulty algorithm: Zawy v1b

```
# Zawy v1b difficulty algorithm
# Based on: next_diff = average(prev N diff) * TargetInterval / average(prev N solvetimes)
# Thanks to Karbowanec and Sumokoin for supporting, refining, testing, discussing, and using.
# Dinastycoin may be the 3rd coin to use it, seeking protection that the Cryptonote algo was
# not providing. The original impetus and discussion was at Zcash's modification of
# Digishield v3. The median method Zcash uses should be less accurate and should not be
# needed for timestamp error protection. A wider allowable limit on difficulty change per
# block provides more protection for small coins. Miners should be encouraged to keep
# accurate timestamps to help negate the effect of attacks. Large timestamp limits allow a
# quick return after a hash attack and are needed to prevent timestamp manipulation.
# The (1+0.693/N) factor keeps the avg solve time at TargetInterval.
# Low N has better response to short attacks, but wider variation in solvetimes.
# Sudden large 5x on-off hashrate changes with N=11 sometimes have 30x delays versus
# 20x delays with N=17. But N=11 may lose only 20 blocks in 5 attacks versus 30 with N=17.
# For more info:
# https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a
#
# D = difficulty, T = TargetInterval, TS = timestamp, TSL = timestamp limit

N=17;   # can possibly range from N=4 to N>30; N=17 seems to be a good idea
TSL=10 if N>10 else TSL=N;   # stops a miner w/ 50% from lowering D >25% w/ forward TSs
current_TS = previous_TS + TSL*T if current_TS > previous_TS + TSL*T;
current_TS = previous_TS - (TSL-1)*T if current_TS < previous_TS - (TSL-1)*T;
next_D = sum(last N Ds) * T / [max(last N TSs) - min(last N TSs)] / (1+0.693/N);
next_D = previous_D*1.2 if next_D < 0;
next_D = 2*previous_D   if next_D/previous_D > 2;
next_D = 0.5*previous_D if next_D/previous_D < 0.5;
```
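For reference, here is a direct Python transcription of the pseudocode above. The function names, the guard against a zero timestamp span, and the example numbers are my additions:

```python
def clamp_timestamp(current_TS, previous_TS, T, N=17):
    """Apply the TSL clamp from the pseudocode before storing a timestamp."""
    TSL = 10 if N > 10 else N
    hi = previous_TS + TSL * T          # forward-timestamp limit
    lo = previous_TS - (TSL - 1) * T    # backward-timestamp limit
    return max(lo, min(hi, current_TS))

def zawy_v1b(prev_Ds, prev_TSs, T, N=17):
    """Next difficulty per the Zawy v1b pseudocode above.
    prev_Ds:  last N difficulties (oldest first)
    prev_TSs: last N block timestamps, already clamped per the TSL rule
    """
    assert len(prev_Ds) == N and len(prev_TSs) == N
    span = max(prev_TSs) - min(prev_TSs)
    span = max(span, 1)                  # my guard: avoid dividing by zero
    next_D = sum(prev_Ds) * T / span / (1 + 0.693 / N)
    prev_D = prev_Ds[-1]
    if next_D < 0:                       # kept from the pseudocode for safety
        next_D = prev_D * 1.2
    if next_D / prev_D > 2:              # per-block rise limited to 2x
        next_D = 2 * prev_D
    if next_D / prev_D < 0.5:            # per-block drop limited to 0.5x
        next_D = 0.5 * prev_D
    return next_D

# Steady state: 17 blocks at difficulty 1000, solved exactly on target
T = 600
d = zawy_v1b([1000.0] * 17, [i * T for i in range(17)], T)
```

At a perfectly on-target cadence the formula returns a difficulty slightly above the previous one (the 1/(1+0.693/N) factor compensates for the skew of exponential solvetimes).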

## Monday, July 10, 2017

### Doing better than the simple average in cryptocoin difficulty algorithms

I am still trying to find a better method than the simple average, but I have not found one yet. I am pretty sure there is one, because an estimate of hashrate based on avg(D1/T1 + D2/T2 + ...) should be better than avg(D)/avg(T) if there is any change in the hashrate during the averaging period. This is because avg(D)/avg(T) throws out details that exist in the data measuring hashrate. We are not exactly interested in avg(D) or avg(T); we are interested in avg(D/T). The avg(D/T) method does not throw out details. Statistical measures throw out details, and you don't want to lose the details until the variable of interest has been directly measured. I learned this the hard way on an engineering project.

But avg(D/T) hardly works at all in this case. The problem is that the probability distribution of each data point D/T needs to be symmetrical on each side of the mean (above and below it). I'm trying to "map" the measured D/T values based on their probability of occurrence so that they become symmetrical, then take the average, then un-map the average to get the correct avg(D/T). I've had some success, but it's not as good as the plain average, because I can't seem to map it correctly. If I could do it, then another improvement becomes possible: the least-squares method of linear curve fitting could be used on the mapped D/T values to predict where the next data point should be. All this might result in a 20% improvement over the basic average.

Going further, sudden on-and-off hashing will not be detected very well by least squares. Least squares could be the default method, but it could switch to a step-function curve fit if a step change is detected. I just wanted to say where I'm at and give an idea to those who might be able to go further than I have.
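A quick simulation illustrates why the naive avg(D/T) fails as described: with constant hashrate, solvetimes are exponentially distributed, so each D/T sample is heavily skewed above the mean (the expectation of 1/T does not even converge), and the sample average of D/T always overshoots avg(D)/avg(T). The setup and all numbers here are my own illustration:

```python
import random

random.seed(42)
H = 100.0            # "true" hashrate, arbitrary illustrative units
T_target = 60.0      # target solvetime in seconds (my choice)
D = H * T_target     # difficulty that yields the target on average
n = 100_000

# Solvetimes for constant hashrate H and difficulty D are
# exponentially distributed with mean D/H.
solvetimes = [random.expovariate(H / D) for _ in range(n)]

est_ratio_of_avgs = (n * D) / sum(solvetimes)           # avg(D)/avg(T)
est_avg_of_ratios = sum(D / t for t in solvetimes) / n  # avg(D/T)
```

The ratio-of-averages estimate lands close to the true hashrate, while the average-of-ratios is pulled far above it by the tiny solvetimes; by the AM-HM inequality it is always at least as large, which is why the skewed samples must be "mapped" to something symmetric before averaging.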

### Numenta's CLA needs 6 layers to model objects

posted to numenta forum

====

Back when there were only 2 white papers and a few videos, I became interested in the HTM, saw a video of a 2D helicopter being detected, and wondered about the relation between the layers they used and the ability to recognize objects. I remembered that 6 equations with 6 unknowns (the degrees of freedom) are required to solve the dynamics of 3D rotation and translation. The layers of the helicopter HTM matched what it was able to detect if they were unknowingly being used in a subtle 2-equations-and-2-unknowns methodology. Of course this begs the question: "Are the 6 layers in the cortex required to see the 3D world?" Numenta's view of the cortical column implies that the 6 layers have nothing to do with this, but I would like to question that view. Jeff has also warned against pursuing the reverse black-hole question no one has ever escaped: "Is the 3D world the result of a 6-layered brain?" But an understanding of the relation between mass and space-time prevents me from abandoning the reverse question.

More importantly, physics has an elephant in the room that is rarely acknowledged and questioned: the only integers that appear in physics are the result of 3D spacetime, and Feynman states no fundamental aspect of QED requires an extension beyond 1D. QED is sort of the core of all physics except for gravity and nuclear stuff. An expert in the area informed me that spin is what creates 3D space, so my line of questioning is suspect. But my view is that we may have invented spin to maintain the view that objects are independent of our perceptions. I admit I am immediately deep in a recursive black hole: the 6 layers are a mass of neurons that I'm proposing we can see only because we have the 6 layers.

BTW, if we had 10 layers to support the perception of 4D objects in 4D space, then I believe all velocities would be static positions and all accelerations would be velocities. Instead of E + mc^2 = 0 we would have E + mc^3 = 0. (Now really getting side-tracked on the physics: by keeping relativity units correct, there is a missing negative in some equations. Another example is F + ma = 0, where the "F" is more correctly defined as the reactive force of the object, which is in the opposite direction of the "a". This comes from meters = i*c*seconds, which comes from appendix 2 of Einstein's "Relativity", which he stated allows use of Euclidean instead of Minkowski space-time, in keeping with the Occam's razor requirement.)

What I'm suggesting is falsifiable. Others posting here will know if it takes 6 layers to fully recognize objects in 4D space-time. The number of degrees of freedom is N translational plus N(N-1)/2 rotational. I tried testing the theory via observation and thought of ants. It seems to be supported there: their eyes, which need to detect only 2D "shadows and light" without rotation, have roughly two layers. And yet their feelers and front legs, which have to deal with 3D objects in 3D space, have 6 layers. There's a great extension to this observation: wasps are the closest cousins of the ants, and they have 6 layers for their eyes.
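The degrees-of-freedom count above is easy to tabulate. A minimal sketch (the function name is mine):

```python
def dof(n):
    """Rigid-body degrees of freedom in n spatial dimensions:
    n translational plus n*(n-1)//2 rotational."""
    return n + n * (n - 1) // 2

# 2D -> 3, 3D -> 6, 4D -> 10, 5D -> 15, matching the layer counts
# discussed in these posts
counts = [dof(n) for n in (2, 3, 4, 5)]
```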

I posted this question nearly a decade ago in the old forum, but I'll ask again: is a 6-layer HTM required for fully characterizing 3D objects in 4D space-time?

=====

I think a single layer would require a lot more new training on every object. For example, it sees a circle moving about and learns its behavior. Then the object turns sideways and turns out to be a cylinder, and then it starts rotating, so training has to start over. I don't think a single layer could conceive very well that "this is the same object" and/or generalize the lessons learned on past objects to future objects. It just seems like it would have difficulty understanding objects like we do. I believe 6 layers would be able to perceive the laws of dynamics, but 1 layer would not. These six layers are not an HTM but the foundation of a single cortical column. Each CLA layer of the HTM would require the 6 layers. So the CLA would need to be redone if you want it to think like mammals and see like wasps. The motor-control layer (the 5th layer of the cortex) may also serve part of this "inherent object modelling", not just motor control. The motor-control part might be crucial to developing the concept of inertia (mass). Mass is another variable ("dimension"), which implies 7 layers should be present. To get out of that mathematical corner, I have to conjecture that mass is something special in the modelling, like "the higher dimensions that 6 layers can't model and that have permanence".

I do not mean to say that 6 layers are necessarily needed for an A.I. to be superior to humans, even in the realm of understanding physics, but that they are needed to think more directly like animals. But if 6 layers per HTM layer are actually needed for a higher intelligence, then 10 layers to do 4D space should be even more powerful, and 15 layers would be needed for 5D. I do not accept the conjecture that objective reality, if there is one, depends on a specific integer of spatial dimensions like "3".

The visual cortex by itself with its 6 layers does not seem to have any concept of objects, but I think the 6 layers are still needed for encoding the information so that the concept of the objects is still extractable by the higher levels in the "HTM" of the brain (e.g. frontal lobes). But the concept of an object seems to be possible in the 6 layers just "behind" the eyes of flying insects: wasps certainly have a better concept of the object nature of people than ants, judging by the way they identify and attack. Ants are virtually blind to what people are, except for detecting skin and biting.

## Saturday, July 8, 2017

### Stars as cryptocoin oracles: posts to HUSH cryptocoin slack

Note: Ethereum time-syncs with pool.ntp.org:123. Nodes (mining or not) must have an accurate time to sync with the network. Miners need accurate time so that later blocks will build upon theirs. But there is no distinct rule on timestamps in ETH except that a timestamp must be after the previous one.

Pools with >51% can get all the coins they want from small alt coins in a few hours, dropping the difficulty at the rate of next_D = previous_avg_D x [1/(1+M/N)]^(2X-1), where X is the attacker's fraction of hash power, N is the number of blocks in the rolling average, and M is the coin's limit on how far the timestamp can be forwarded. If GPS isn't good enough, the only solution I can think of is to tie miners and/or nodes to the stars with a smartphone app that makes a periodic observation of the stars to calibrate their clock. But then it begs the question (via the BTC white paper) of why mining would still be needed.
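Plugging numbers into the formula above (the function name and the 100x example are my additions):

```python
import math

def attack_drop_factor(X, M, N):
    """Per-block difficulty multiplier from the formula in the post:
    next_D = previous_avg_D * (1 / (1 + M/N)) ** (2*X - 1)
    X: attacker's fraction of total hash power (0..1)
    M: timestamp-forward limit, in multiples of the target interval
    N: number of blocks in the rolling average
    """
    return (1.0 / (1.0 + M / N)) ** (2 * X - 1)

# A pool with all the hash power (X=1), M=10, N=17 shrinks difficulty
# by 17/27 per block; at X=0.5 the exponent is zero and difficulty
# holds steady.  Blocks needed to cut difficulty 100x at X=1:
blocks_for_100x = math.log(100) / -math.log(attack_drop_factor(1.0, 10, 17))
```

With these illustrative parameters the 100x drop takes on the order of ten blocks, which is why the post says a >51% pool can drain a small coin "in a few hours".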

===

I think the point of mining was to solve the double-spending problem without relying on a 3rd-party timestamp; Satoshi seems to say this explicitly in the whitepaper. It also finances the growth of the network in a way that supports transactions, but I do not understand why non-mining nodes seem to be necessary to keep miners in check, or why mining often has the feel of a necessary evil, if the entire point of financing mining was to build a working network.

With a valid clock on each peer, the double-spending problem seems solved without mining. That leaves the question of how to release the coins in a way that supports the network. But if the timestamp problem is solved by each peer using the stars as his clock, is there any need for a behemoth network using might-is-right to determine the time and thereby the coin emission rate? It might be that peers with valid clocks who only want a wallet and to conduct transactions could be all that is needed, reaching the ideal of having no centralized miners or developers and being absolutely evenly distributed among everyone. There might be a way to distribute the blockchain so that they do not all need the entire chain.

It would have a statistical chance of forking (fracturing, with all forks being valid but increasingly incompatible), which could be increased by hacking, but that would only result as the need for the network grew (via more marketplace transactions). So the fracturing might be beneficial by keeping the ideal of constant value. That is a requirement of all good currencies: constant quantity is the ideal for an asset, not for a currency. Constant quantity was always a disaster for every currency that has ever been used, because it's a bonanza for the 1% such as us, the early adopters seeking to profit without working for it, extracting wealth from late adopters. In any event, it would get rid of centralized developers and centralized mining.
It might be as simple as PGP, so that a requirement for a transaction to be valid is that the code never changes. Or maybe any code on any machine would be valid as long as other peers confirm your outputs are valid for your inputs, as specified by a non-changing protocol.

===

By "fracturing" I introduced vagueness to mean "something that is probably not unlike forking". I am speaking of big-picture ideas, as I have no knowledge of BTC details. I took a strong renewed interest in difficulty algorithms after two Cryptonote coins adopted my difficulty algorithm (block averaging instead of medians over 17 blocks, with appropriate timestamp limits) to gain protection against attacks. Cryptonote insanely is (or was) using 300 blocks as the averaging window, so Sumokoin and Karbowanec had to fork and start using mine. Zcash changed their Digishield v3 as a result of my pestering but did not follow me exactly like these other coins; I posted too much and made a big mistake. I'm side-tracked: an unavoidable problem in the difficulty algorithm led me back to the Satoshi white paper and the idea that scientific observation of the stars could be the beginning of "real" cryptocurrencies, as it was for physics. The stars would be the first valid, provable, non-3rd-party oracle in cryptocoins.

====

With only +/-2 degree accuracy, I figure 10-minute blocks are OK. 2 degrees is 4 minutes if you look at stars 90 degrees from the North Star. So local peers have to agree on the time to +/-4 minutes, with 1 minute to spare on each end. Russia also has a GPS-like system (GLONASS), but I don't think the combination of the two solves anything.

===

You are saying I'm missing the "might is right" aspect. But the idea is that it replaces "might is right" with an objective, verifiable truth that can be checked by any and all peers at all present and future times.

====

I think everyone could reject a transaction if it does not have the correct timestamp. The sender can lie about it, but it will be rejected. He can send the same coin twice in the same 8-minute window, but everyone is supposed to reject both sends. I previously mentioned that maybe all the peers do not need a full chain, but that's probably a pretty wrong-headed idea.

=====

Having 1 miner timestamp a block is a lot more important than having the correct time. But if a correct time is agreed upon, then every peer everywhere can receive and validate every transaction independently. Because of the inaccuracy of the timestamps, timestamps are rounded to the nearest minute ending in "0", and you have +/-2 minutes from the next minute ending in "5" to send a transaction. But I must be missing something. It seems like star gazing, GPS, and timestamp servers are all unnecessary: you would just need to make sure your peer's computing device has approximately the correct system time for global time.

===

I gave a solution that doesn't even need an app that calibrates with the stars: if everyone manually makes sure their clock is correct to +/-2 minutes, and if transactions can propagate to everyone in 2 minutes, then let's say the blockchain is updated every minute that ends in "0". The blockchain would be updated by EVERYONE. There are no nodes or miners needed or wanted in this design, especially since we want it nuclear-bomb proof, unlike the current bitcoin with its concentrated miners and devs.

Everyone would send out their transactions with their timestamp at minutes ending in "4", so with clock error they may be sending them out right after "2" up until "6". If there is a 0-to-2-minute propagation delay, everyone is going to receive each other's transactions between "2" and "8" by their own clock (between "4" and "6" by "star time", or by whatever clock each peer has decided by himself to trust; it must not be coded into the client as a default unless it is watching the stars). At minute "8", every client closes his ears to every transaction. So nothing is happening on any client anywhere between "8" and "2" except verifying and adding transactions to the chain, which should work even if a clock is in error by +/-2 minutes.

Clients with a -2 minute error clock and those with a +2 minute error clock should see the exact same set of transactions, or someone is giving a mixed message to clients, by accident or on purpose, by going outside his own allowed window. By accident would mean some transactions were missed on some clients. On purpose would be someone trying to spend on -2 minute clients the same coin he is also trying to spend on +2 minute clients. In both cases, it seems like clients could check each other and decide to throw both erring transactions out.

So that's my proposal. If it's possible to implement, then as far as I know it's only 1 of 3 known ways to solve double spending. The first is a traditional database that has a single reference location for its core data, so there are no "double conflicting updates" on the same record. (In the case of more than 1 core location plus backups, I believe they have advanced methods of checking for conflicts and then undoing "transactions" in order to correct the problem.) The 2nd is Satoshi's method.
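A sketch of the window bookkeeping described above, with my own naming and tuple layout; it buckets timestamps into 10-minute windows and throws out both sends of any same-window double spend:

```python
from collections import Counter

def window_for(ts_minutes):
    """Map a timestamp (in minutes) to its 10-minute ledger window:
    the chain is updated at minutes ending in '0'."""
    return ts_minutes // 10

def filter_double_spends(txs):
    """Throw out BOTH sends when a sender spends the same coin twice
    inside one window, as proposed above.
    txs: list of (sender, coin_id, ts_minutes) tuples."""
    seen = Counter((s, c, window_for(t)) for s, c, t in txs)
    return [tx for tx in txs
            if seen[(tx[0], tx[1], window_for(tx[2]))] == 1]

# alice tries to spend coin c1 twice inside the same window: both
# sends are rejected; her spend in the next window is unaffected.
txs = [("alice", "c1", 14), ("alice", "c1", 15),
       ("bob", "c2", 14), ("alice", "c1", 24)]
accepted = filter_double_spends(txs)
```

Because every client sees the same window boundaries (given the +/-2 minute clock bound), every client computes the same accepted set independently, which is the whole point of the scheme.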

pools with >51% can get all the coins they want from small alt coins in a few hours, dropping the difficulty at the rate of next D = previous avg D x [1/(1+M/N)]^(2X-1) where X is percent of hash power, N is the number of blocks in the rolling average, and M is the coin's limit on how far the timestamp can be forwarded. If GPS isn't good enough, the only solution I can think of is to tie miners and/or nodes to the stars with an app on their smartphone to get a periodic observation of the stars to calibrate their clock. But then it begs the question (via the BTC white paper) of why mining would still be needed.

===

I think the point of mining was to solve the double-spending problem without relying on a 3rd-party timestamp. Satoshi seems to say this explicitly in the whitepaper. It also finances the growth of the network in a way that supports transactions, but I do not understand why non-mining nodes seem to be necessary to keep miners in check, or why mining often has the feel of a necessary evil, if the entire point of financing mining was to build a working network.

With a valid clock on each peer, the double-spending problem seems solved without mining. That leaves the question of how to release the coins in a way that supports the network. But if the timestamp problem is solved by each peer using the stars as his clock, is there any need for a behemoth network using might-is-right to determine the time and thereby the coin emission rate? It might be that peers with valid clocks who only want a wallet and to conduct transactions could be all that is needed, reaching the ideal of no centralized miners or developers and a network absolutely evenly distributed among everyone.

There might be a way to distribute the blockchain so that peers do not all need the entire chain. It would have a statistical chance of forking (fracturing, with all forks being valid but increasingly incompatible), which could be increased by hacking, but that would only result as the need for the network grew (via more marketplace transactions). So the fracturing might be beneficial by keeping the ideal of constant value. Constant value is a requirement of all good currencies: constant quantity is the ideal for an asset, not a currency. Constant quantity was always a disaster for all currencies that have ever been used because it's a bonanza for the 1% such as us, the early adopters seeking to profit without working for it, extracting wealth from late-adopters. In any event it would get rid of centralized developers and centralized mining.

It might be as simple as PGP, so that a requirement for a transaction to be valid is that the code never changes. Or maybe any code on any machine would be valid as long as other peers confirm your outputs are valid for your inputs as specified by a non-changing protocol.

===

By "fracturing" I introduced vagueness to mean "something that is probably not unlike forking". I am speaking of big-picture ideas, as I have no knowledge of BTC details. I took a strong renewed interest in difficulty algorithms after two Cryptonote coins adopted my difficulty algorithm (block averaging instead of medians for 17 blocks, with appropriate timestamp limits) to gain protection against attacks. Cryptonote insanely is (or was) using 300 blocks as the averaging window, so Sumokoin and Karbowanec had to fork and start using mine. Zcash changed their Digishield v3 as a result of my pestering, but did not follow me as exactly as these other coins. I posted too much and made a big mistake. I'm side-tracked: an unavoidable problem in the difficulty algorithm led me back to the Satoshi white paper and the idea that scientific observation of the stars could be the beginning of "real" cryptocurrencies, as it was for physics. The stars would be the first valid, provable, non-3rd-party oracle in cryptocoins.

====

With only +/- 2 degree accuracy, I figure 10 minute blocks are OK. 2 degrees is 4 minutes if you look at stars 90 degrees from the north star. So local peers have to agree on the time +/- 4 minutes, with 1 minute to spare on each end. Russia also has a GPS system, but I don't think the combination of the two solves anything.

===

You are saying I'm missing the "might is right" aspect. But the idea is that it replaces "might is right" with an objective, verifiable truth that can be checked by any and all peers at all present and future times.

====

I think everyone could reject the transaction if it does not have the correct timestamp. He can lie about it, but it will be rejected. He can send the same coin twice in the same 8 minute window, but everyone is supposed to reject both sends. I previously mentioned maybe all the peers do not need a full chain, but that's probably a pretty wrong-headed idea.

=====

Having 1 miner timestamp a block is a lot more important than having the correct time. But if a correct time is agreed upon, then every peer everywhere receives and validates every transaction independently. Because of the inaccuracy of the timestamps, they are rounded to the nearest minute ending in "0", and you have +/- 2 minutes from the next minute ending in "5" to send a transaction. But I must be missing something. It seems like star gazing, GPS, or timestamp servers are not necessary: you would just need to make sure your peer's computing device has approximately the correct global time.

===

I gave a solution that doesn't even need an app that calibrates with the stars: if everyone manually makes sure their clock is correct to +/- 2 minutes, and if transactions can propagate to everyone in 2 minutes, then let's say the blockchain is updated every minute that ends in "0". The blockchain would be updated by EVERYONE. There are no nodes or miners needed or wanted in this design, especially since we want it nuclear-bomb proof, unlike the current bitcoin with concentrated miners and devs.

Everyone would send out their transactions with their timestamp at minutes ending in "4", so with error, they may be sending them out right after "2" up until "6". If there is a 0 to 2 minute propagation delay, everyone's going to receive each other's transactions between "2" and "8" by their own clock (between "4" and "6" by "star time", or by whatever clock each peer has decided by himself to trust... it must not be coded into the client as a default unless it is watching the stars). On minute "8", every client closes his ears to every transaction. So nothing is happening on any client anywhere between "8" and "2" except verifying and adding transactions to the chain, which should work even if a clock is in error by +/- 2 minutes.

Clients with a -2 minute error clock and those with a +2 minute error clock should see the exact same set of transactions, or someone is giving a mixed message to clients, by accident or on purpose, by going outside his own allowed window. By accident would mean some transactions were missed on some clients. On purpose would be someone trying to spend on -2 minute clients the same coin he is also trying to spend on a +2 minute client. In both cases, it seems like clients could check each other and decide to throw both erring transactions out.

So that's my proposal. If it's possible to implement, then as far as I know it's one of only 3 known ways. The first is a traditional database that has a single reference location for its core data, so there are no "double conflicting updates" on the same record. (In the case of more than 1 core location plus backups, I believe they have advanced methods of checking for conflicts and then undoing "transactions" in order to correct the problem.) The second is Satoshi's method, and this proposal would be the third.
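The clock-window rules above can be sketched as simple minute arithmetic. This is a toy sketch of my own; the function names and minute constants just illustrate the "0"/"4"/"2-to-8" scheme described, not any real client:

```python
# Toy sketch (mine) of the clock-window rules described above.
SEND_MINUTE = 4     # everyone broadcasts at minutes ending in "4"

def listening(local_minute):
    """A peer accepts transactions only between minutes "2" and "8" of each
    10-minute cycle, by its own (possibly +/- 2 minute wrong) clock."""
    return 2 <= local_minute % 10 <= 8

def accept_tx(tx_stamp_minute, local_minute):
    """Accept only transactions stamped at the agreed send minute that
    arrive while this peer is still listening."""
    return tx_stamp_minute % 10 == SEND_MINUTE and listening(local_minute)
```

A transaction stamped at minute "4" and heard at local minute "6" passes; one heard at minute "9", after the peer has closed its ears, is rejected.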

## Wednesday, June 28, 2017

### Zawy v2 difficulty algorithm

```perl
#!usr/bin/pseudo_perl
#
# Zawy v1b and v2 difficulty algorithm
# Simple averaging window with option to use a dynamic window size.
# Cite as "Zawy v1b N=8" if N=8 is chosen and "Zawy v2 N>=8" if the dynamic option is chosen.
# Credit karbowanec and sumokoin for using modifications of Zawy v1 after their hard forks
# to protect against attacks that were the result of Cryptonote's default difficulty algorithm,
# and for motivating me to do more work where Zcash left off.
#
# Core code with fluff and dynamic option removed (TS = timestamp):
#   TSL = 10 if N > 11 else TSL = N-2;  # stops miner w/ 50% forward stamps from lowering D > 20%.
#   current_TS = previous_TS + TSL*T if current_TS > previous_TS + TSL*T;
#   current_TS = previous_TS - (TSL-1)*T if current_TS < previous_TS - (TSL-1)*T;
#   next_D = sum(last N Ds) * T / [max(last N TSs) - min(last N TSs)] / (1+ln(2)/N);
#   next_D = 2*previous_D if next_D/previous_D > 2;
#   next_D = 0.5*previous_D if next_D/previous_D < 0.5;
#
# Changes:
# - A preference for low N, letting difficulty change rapidly for hash-attack protection.
# - Included option for a dynamic averaging window (probably inferior to simple low N).
# - Includes timestamp limit for timestamp manipulation/error protection.
# - Added an adjustment factor to next_D that is important at low N: 1/(1+ln(2)/N).
#   This is due to the median of a Poisson being ln(2) of the mean.
# - A rejection of medians, which do not help and cause error via lack of resolution,
#   including the "bad timestamp" excuse for using them.
# - Rejected dynamic modification to maxInc, maxDec, and TS limits based on recent history
#   of D (as a way to let D drop after an attack). It either leaves a security hole or does
#   not have an effect. Avg(solvetime) is still > TargetInterval if there is a hash attack,
#   but I can't find a solution that does not have equally bad side effects.
#
# Miners/pools should be asked to keep their timestamps accurate or it will help
# attackers and block release will be slower.
# See verbose text at link below for explanations (if this is not verbose enough):
# https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a#commitcomment-22615466

# Begin setting constants for this coin
T = 240;                                    # Target interval
MinimumHashAttackDuration = 8;              # Sets N. See text below.
timestamps_are_provably_correct = "no";     # "no" if miners or pools are assigning timestamps
employ_variable_averaging_window = "yes";   # see paragraph below
# End setting constants for this coin

# Modifying the logic below this point is not recommended.
# Trying to fix something usually breaks something else.

# Set averaging window based on MinimumHashAttackDuration.
# N=17 is working in several coins, but it still allows some large on-off mining to rapidly
# "steal" blocks at low difficulty for N/2, leaving constant miners with higher D for N,
# delaying subsequent blocks. N=12 is low but not unwisely low. May cause 3x delays
# post-attack versus 15x for N=17.
# N=8 might be best for small coins still having "hash attack" trouble at N=12.
# N=8 has only 47% more >4*T solvetimes than N=30.
# Even 4 can work, even with timestamp errors, if the rise & fall in D is
# symmetrically limited to 2x & 0.5x per block.
# There is a desire for low N because for hash attacks with
# off time >= N and on time P <= N, I have:
#   blocks stolen at low D = P x [1 - (1-X)/2 - P/2N]
# Notice how low N is the only way to reduce attack profit. Stating attack length as
# fraction F of N:
#   blocks stolen at low D = NF x [1 - (1-X)/2 - F/2]
N1 = int(MinimumHashAttackDuration);
if ( N1 < 6 ) { N1 = 6; }   # due to the way I have TSL, there's more risk to < 6.

# Variable averaging window option:
# It will be smoother most of the time, but takes ~N/4 blocks longer to respond and recover
# from sudden hash changes. Choosing the smallest N instead of this option is probably best
# unless you have a strong desire for a smoother difficulty when HR is stable.
# Precise technical description:
# Trigger to a lower N if a likely change in HR has occurred. Checks all possible windows
# from N1 to 2*N1, linearly decreasing the likeliness from 95% at N1 to 68% at 2*N1, and
# resets window to the lowest N that triggers. After keeping that N window for N blocks
# to let strange events flush out, it raises N by 1 each block until another trigger occurs.
# It can easily reach 4N as the window if HR is stable.
# This option seems to increase solvetimes by another (1+ln(2)/N) factor, which is not
# in this pseudocode for simplicity.
Smax = 2; Smin = 1;   # STDevs range away from mean that will cause a trigger to lower N.

# TS limit to protect against timestamp manipulation & errors.
if (timestamps_are_provably_correct == "yes") { TSL = 10; }   # similar to bitcoin
# The next line stops a miner w/ 50% forward stamps from lowering D > 20% if N1 is low.
# Steady-state D from a miner with X <= 50% of network hash who is always
# forward-timestamping to the max is:
#   SS D = correct D x [1 - (1 - 1/(1+TSL/N1)) * X]
# A miner with X > 50% can drop D to zero w/ forward timestamps at the following rate:
#   next D = previous D x [1/(1+TSL/N1)]^(2X-1)
else { TSL = 10 if N1 > 12 else TSL = N1-1; }

# The following are fail-safes for low N when timestamps have errors.
# Bringing them closer to 1.0 as in other diff algos reduces hash-attack protection.
# Not letting them be symmetrical may cause problems. Example:
# Using maxDec < 1/maxInc allows +6xT timestamp manipulation to drop D faster than -5xT
# subsequent corrections from honest timestamps can bring it back up.
# Bringing them closer to 1 is similar to increasing N and narrowing the timestamp limits,
# but these values should be far enough away from 1 to let low N & TS limits do their job.
maxInc = 2;           # max Diff increase per block
maxDec = 1/maxInc;    # retains symmetry to prevent hacks and keep correct avg solvetime
# End setting of constants by the algorithm.

# Begin actual algorithm

# Protect against TS errors by limiting how much the current TS can differ from the
# previous TS. This potentially slows how fast D can lower after a hash attack.
# The -1 is for complex reasons.
current_TS = previous_TS + TSL*T if current_TS > previous_TS + TSL*T;
current_TS = previous_TS - (TSL-1)*T if current_TS < previous_TS - (TSL-1)*T;

if (employ_variable_averaging_window == "yes") {
   for (I=N1 to N) {   # check every window that is smaller than the current window.
      # create linear function: STDevs decreases as I (aka N) increases to N
      STDevs = Smax - (Smax-Smin)/(2*N1 - N1)*(I-N1);
      NE = (max(last I timestamps) - min(last I timestamps)) / T;  # expected N for this range
      if ( abs(I - NE) > STDevs*NE**0.5 ) { N=I; wait=I; }  # This is the core statistics.
   }
}
else { N = N1; }

next_D = sum(last N Ds) * T / [max(last N TSs) - min(last N TSs)] / (1+ln(2)/N);
# The above is the same as the following. Credit karbowanec coin for the
# sum & max-min simplification:
#   next_D = avg(last N Ds) * T / avg(last N solvetimes) / (1+ln(2)/N)
next_D = 1.2*avg(last N Ds) if next_D <= 0;   # do not let it go negative.

# Increase the N averaging window by 1 per block if it has been >= N blocks
# since the last trigger to N. This flushes out statistical accidents and < N attacks.
if (employ_variable_averaging_window == "yes") {
   if (wait > 0) { wait = wait-1; }   # do not increase N yet
   else { N = N+1; }                  # resume increasing N every block
}

# Do not let D rise and fall too much as a security precaution
next_D = maxInc*previous_D if next_D/previous_D > maxInc;
next_D = maxDec*previous_D if next_D/previous_D < maxDec;
```

### Argument that low N is best in difficulty algorithms and why dynamic averaging window is not a benefit

I can't recommend a switch from v1 to v2 (static N to dynamic N). The smoothness gained by the higher N is not much: surprisingly, the std dev of solve times increases only 5% from N=30 to N=8. The std dev of D goes from about 0.18xD for N=30 to about 0.45xD for N=8. For N=8 this means 97.5% of D values are less than 1 + 1.96x0.45 ≈ 2 times what they should be. Long story short (due to the Poisson median being 0.693 of the average): going from N=30 to N=8 means only a 47% increase in >4xT solvetimes. The dynamic window does not capture this benefit: those >4xT solvetimes are exactly the statistically unlikely times that will trigger the dynamic window back to a lower N, canceling the primary benefit of it rising back up to a large N. It looks a lot smoother and nicer _**most**_ of the time when hash rate is constant, but the painful small-N events are not reduced.
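These spreads can be checked with a rough Monte Carlo simulation. This sketch is mine, not from the post; it assumes constant hashrate, exponential solvetimes with mean T, and next_D proportional to T / avg(last N solvetimes):

```python
# Rough Monte Carlo check (mine) of the claimed std devs of D for N=8 vs N=30.
import random
import statistics

def rel_std_of_D(N, trials=100_000, T=1.0):
    """Relative std dev of next_D when true hashrate is constant."""
    samples = []
    for _ in range(trials):
        solvetimes = [random.expovariate(1.0 / T) for _ in range(N)]
        samples.append(T * N / sum(solvetimes))  # next_D relative to true D
    return statistics.stdev(samples) / statistics.mean(samples)
```

In theory the relative std dev is about 1/sqrt(N-2): roughly 0.41 for N=8 and 0.19 for N=30, in line with the ~0.45 and ~0.18 figures above.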

## Tuesday, June 27, 2017

### Cryptocurrency difficulty algorithm postulates

Here are my "difficulty algorithm postulates" I want people to consider before creating or changing a difficulty algorithm.

**For a given hashrate with gentle variation, the simple average below is the best algorithm:**

- Next D = avg(past N Ds) x TargetInterval / avg(past N solve times)
- For whatever reason, it needs an adjustment factor for low N to keep solve time on track and make D more accurate: Next D x 1/(1+0.7/N).
- The N used for averaging past D must be the same N used for past solve times.
- Using median is not nearly as good as using average, and there is no benefit to using median.

**A faster response to hashrate changes will come at a cost in solve time stability.** This is not a bad thing. Use the lowest N you can tolerate to get the fastest response. Low N causes large non-attack solve time variation. Consider going down to N=8 if hash-attacks are a problem.

**Limiting the rise and fall in the difficulty per block is similar to increasing N, but is much less accurate.**

- I place limits on the rise and fall equal to what I think is possible, only as a security measure.
- Placing limits on the rise and fall to block an event you do not **want** is denying the truth of the observation that you have asked the average to report.

**Enforcing asymmetric limits on difficulty and timestamp changes is risky.**

- There is a temptation to allow _faster decreases than increases in the difficulty_ per block (than results from the average above) in order to get back to normal after an attack. This may help keep block emission rate on schedule and reduce normal miners' losses. But it also enables attacks to resume more quickly, which might exactly negate the two benefits. Avoid this more seriously if the attacker is intelligent. If timestamps are assigned by miners, forward-stamping (combined with this asymmetry) will make D begin artificially lower in the next attack, amplifying the original problem instead of helping it. But if the allowed increase and decrease in D are symmetrical, then a subsequent accurate timestamp that negates the previous bad timestamp will be able to bring D back to its proper value.
- Conversely, there is a temptation to allow _faster increases than decreases in difficulty_ per block in order to dissuade on-off hash attacks. This directly slows the block emission rate. It potentially increases normal miners' losses if it does not actually dissuade attacks. Avoid this more seriously if the attacker is dumb. It better enables a malicious attacker who is not interested in profit to drive D up, or to drive it up for the purpose of causing future oscillations if the diff algo is unwisely advanced and complex.
- _Limiting the amount the timestamp can be ahead of real time_ more than it can be negative is like allowing D to increase faster than decrease, with the same type of side effects. _Limiting the amount the timestamp can be negative_ is like allowing D to decrease faster than it can increase, with the same type of side effects.
- Symmetry in the above is not exactly linear because the median of a Poisson with mean TargetInterval (T) is ln(2) x T ≈ 0.693 T, but I have not addressed this.
- Timestamp limits: I believe the forward timestamp should be limited to +6x and -5x the TargetInterval from the previous timestamp, instead of my previous statements of +6x and -6x, because the "expected" timestamp is +1x, so +6x and -4x is what symmetry mathematically requires. But I want a -5x limit, in violation of perfect symmetry, out of more fear of greedy +6x stamps occurring than of -4x accidents or maliciousness. Reminder: two -5x in a row could cause a negative difficulty if N=8, so there needs to be protection against a negative difficulty.
- Despite 3 and 4, there may be a way to use them to enable D to return to normal more quickly post-attack. This is really needed because the avg solve time increases (delaying coin release) when there are a lot of large on-off instances, because even with low N, D gets stuck high post-attack.

**There is no way to stop >50% miners from using the timestamp to make difficulty = 0.** This assumes there is not a trusted third party enforcing a clock (like ETH), which would be in violation of the Szabo and Satoshi mandates.

- Might is right. 51% power is truth.
- 51% (or the trusted 3rd party) controls the clock, which means they control the coin emission rate.
- Bitcoin uses >50% consensus to certify not only single-spend transactions but also the time.
- Fear of a hard fork may be what prevents miners from doing this overtly.

**Difficulty algorithms should not have momentum.** Predictive algorithms that look at the slope of recent changes in D to estimate a future D are vulnerable to large on-off miners (and possibly even an accidental and unconscious consortium of miners in search of personal profit) who can force the algorithm into oscillations, turning on when D is low and starting to rise, and off before it reaches a peak. This is the Derivative part of PID controllers. PI controllers such as the average of the past are safer.

**Algorithms that try to protect against specific attack behavior are inherently vulnerable.**

- It should be assumed that protection against specific attacks automatically leaves an unexpected hole.
- If an opponent can see the strategy you've employed that assumes something beyond your scientific observations, he can change his plan of attack, but you can't change your defense.
- For example, if you choose a fixed N based on how long you expect attacks to last, the attacker may make the attacks shorter but more frequent.

**Miners acting in their own best interests weed out weak coins, are the mothers of invention, and/or encourage adoption of a single coin. Each of these might be "good" instead of an "attack".**

- Item 6 may mean all coins that are not the largest for a specific type of hardware are destined for a limitation on their size (if not outright failure) that is more brutal than Zipf's law. We currently see something like Zipf's law in cryptocurrency market caps, but if item 5 is correct, it might become 1/Rank^2 or worse instead of 1/Rank. This enforces Satoshi's original vision that the largest coin's "might is right" makes it less subject to attack than its clones.

## Sunday, June 25, 2017

### Advantage of a simple difficulty algorithm with low N, but a tweak is needed

The following looks like a good zawy v1 to consider. If this is acceptable even when the hashrate is constant, then the switching from high N to low N (zawy v2) is not needed.

- N=6

- The low N causes avg solvetime to be 15% longer (i.e. D is 15% too high), so increase the solvetime average by 15% with 1.15 x avg(6 solvetimes). I believe this is caused by the combination of a low N and the median of a Poisson being 0.75 of the average.

- +/- 5x timestamp limit from previous timestamp.

- D per block change limited to 0.5x down and 2x up.

- The above two should remain symmetrical unless testing supports something different. See last paragraph.

- There should not be any attempt to address bad timestamps other than the above TS and D limits. If a bad time comes in that lowers D to the 0.5 limit then the next correct timestamp will bring it back up. If you're getting two bad timestamps in a row in the same direction very often, the problem is not fixable. Larger N might help.

- Always averages, no medians.

The timestamp manipulation will only help pools with >50% hashrate. Other miners should keep their clocks accurate to keep difficulty more stable when N is low. Otherwise, every miner suffers from an unstable D: an unstable D causes blocks to be released more slowly, because the avg of a D ranging from 0.5 to 2 is always more than the avg of a D that stays at 1. I have not addressed this averaging effect and it leaves open a possibility of improvement:

When there are frequent on-off high hash attacks, the above N=6 (and any other algo that provides good protection) will slow the avg solve time. I see a >30% slowing of block release with frequent on-off attacks. Slow increases and decreases are not a problem: the sudden on-off nature of "attacks" slow block release even as they get blocks quickly at low difficulty. So it's a double-whammy to honest miners. This leaves open the ability to reduce the post-attack delays by accelerating the decline back to normal if it thinks an attack was present and is now ending. Off-hand, I would keep a rolling avg D for something like N=100 and if the past N=12 were >2 avg(100 D) then

if Target/(1.15\*avg(6 ST)) < 0.95

change

next D = avg(6 D) x Target/(1.15\*avg(6 ST))

to

next D = avg(6 D) x Target/(1.15\*avg(6 ST)) x 3/4

The above is kind of a weak fix. I am trying to find a better method, such as more rapidly being able to detect a post-attack event in a continuous fashion. Discrete if-then statements expose their weakest point to an attacker. But the idea is to **show higher skepticism toward allowing D to increase as D gets higher and higher above a longer-term average, but less skepticism towards letting it come back down.** I think this could be an improvement that is more important than the dynamic avg window.
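The N=6 rules plus the post-attack 3/4 tweak can be sketched as follows. This is my own untested sketch of the rules above; the function name and list-based interface are assumptions for illustration:

```python
# Untested sketch (mine) of Zawy v1 N=6 with the post-attack 3/4 tweak.
def next_difficulty(D, ST, T):
    """D: past difficulties, ST: past solvetimes (most recent last), T: target."""
    avg6_D = sum(D[-6:]) / 6
    avg6_ST = sum(ST[-6:]) / 6
    adj = T / (1.15 * avg6_ST)        # 1.15 corrects the low-N (N=6) bias
    next_D = avg6_D * adj
    # Post-attack tweak: if recent D is far above the long-term average and the
    # adjustment is already pulling D down, accelerate the decline by 3/4.
    if len(D) >= 100:
        avg100_D = sum(D[-100:]) / 100
        avg12_D = sum(D[-12:]) / 12
        if avg12_D > 2 * avg100_D and adj < 0.95:
            next_D *= 0.75
    # Symmetric per-block limits: 0.5x down, 2x up.
    next_D = min(max(next_D, 0.5 * D[-1]), 2 * D[-1])
    return next_D
```

A post-attack state (recent D well above the 100-block average, with the 6-block adjustment below 0.95) is the only case that triggers the extra 3/4 factor.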

In the crypto-anarchist worldview, miners taking advantage of the lack of a timestamp and of difficulty algorithms not adjusting fast enough are really just capitalists investing in ventures that persuade others to employ fixes.


## Saturday, June 24, 2017

### Equations cryptocoin hash and timestamp attacks

I remain confused that alt coins are not having a huge problem with >50% miners constantly forward-stamping the timestamps, causing difficulty to drop to zero. I have the following equations to describe the problems:

Let X = % hashrate a big miner has.

The unavoidable, unfixable problem I see is that the difficulty will drop each block for any X > 50%:

next D = previous avg D x [1/(1+M/N)]^(2X-1)

M=the "M x TargetInterval" limit on the allowable timestamp, after the previous timestamp.

N=the averaging or median window.

For X=50% note that next D=previous D. See next equation for X<=50%.

For any X>50%, the difficulty will slowly decrease. Small M and large N help, but are not a fix. For example, M=3 and N=100 (the most protective numbers I can imagine), and X=66%, the difficulty will drop 1% per block for as long as the miner is present with 66%. In 450 blocks new D = 1/100th of old D instead of 3xD.
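Plugging the example numbers into the formula confirms the claimed rate (a quick arithmetic check of mine):

```python
# Quick arithmetic check (mine) of the example above: M=3, N=100, X=0.66.
M, N, X = 3, 100, 0.66
per_block = (1 / (1 + M / N)) ** (2 * X - 1)   # drop factor per block
after_450 = per_block ** 450                   # cumulative drop over 450 blocks
# per_block ≈ 0.99 (about a 1% drop per block); after_450 ≈ 0.014,
# close to the ~1/100th of the old D stated above.
```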

Median makes it more complicated, but I think it is almost exactly the same, at best (if the correct median is used which is 0.75 of the average).

For X<=50%, the difficulty does not drop forever, but has a steady state value that is tolerable:

steady state D = correct D x [1 -(1 - 1/(1+M/N) ) * X]

for example, M=6x, N=30, and X=50% gives 92% of the correct D.

another example: M=6x, N=15, and X=0.33 (33% of hashrate) gives 91%.

I needed this because I want to go to N=8 in the dynamic averaging window:

M=8x and N=8, X=50% gives 75% which is tolerable. This is a basis for TSL setting in Zawy v2 algorithm
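The steady-state formula can be checked directly (a quick check of mine; note the 91% example corresponds to X ≈ 1/3 of the hashrate):

```python
# Quick check (mine) of the steady-state formula above.
def steady_state_D(M, N, X):
    """Fraction of the correct D reached at steady state."""
    return 1 - (1 - 1 / (1 + M / N)) * X

# steady_state_D(6, 30, 0.50) ≈ 0.92
# steady_state_D(6, 15, 0.33) ≈ 0.91
# steady_state_D(8, 8, 0.50) == 0.75
```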

For high on-off hash attacks where the off time >= N and P=on time <=N, I have

blocks stolen at low D = P x [1 -(1-X)/2 - P/2N ]

There is no solution except to make N small. The std dev of solvetimes starts being a noticeable drawback if N goes below 12.
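As a quick sketch (mine) of the blocks-stolen estimate, for an on-off miner with hashrate fraction X, on-time P blocks, and averaging window N:

```python
# Sketch (mine) of the blocks-stolen estimate above.
def blocks_stolen(P, N, X):
    """Blocks gained at artificially low D by an on-off miner."""
    return P * (1 - (1 - X) / 2 - P / (2 * N))

# e.g. P=N=8, X=0.9 gives 3.6 blocks gained at low difficulty; shrinking N
# (with P capped at N) is the only lever that reduces this.
```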

This is why I'm working on a variable averaging window difficulty algorithm. N must be as small as possible when there is an attack, but larger at other times because small N causes a lot of accidental variation.

## Friday, June 23, 2017

### Protecting alt cryptocoins from bad timestamps and high hash rates

This is a possible solution to protect small coins against 51% miners who forward-stamp the time.

It appears any miner with 51% can cause difficulty to lower forever. Median and average do not affect this. 1/2 the time he will acquire the median, so half the time he will lower it to 1/6 D, averaging 1/3 D every 2N. So D=0 in 1 day. If you cut the high 6 out of 42, as well as the low 6, the result is the same. Cutting only the high 6 will cause difficulty to rise forever if miners are honest. Again, 51% = truth. 51% = owner of clock. There is no solution except for the coin to self-destruct until miners are honest.

If I am correct, big miners must already know this, so they must be doing it only some of the time because they do not want it to cause too much trouble and then cause a hard fork.

Timestamp fix idea: penalize all miners if some miners have a bad timestamp.

This will protect against a 20x attacker who is trying to control the clock. These numbers are approximate. It depends on how fast the rest of the difficulty algorithm brings difficulty back down. If a negative block timestamp appears, raise difficulty 10x and lower it 11% per block for 20 blocks, then resume the regular difficulty. If more than ~5% of the network has a bad timestamp (either too far backward, or too far forward so that it causes others to appear backward), difficulty would rise indefinitely. Those not aware will stop mining. Those aware of this will remain until the miners with bad timestamps leave. It forces honesty and a more stable difficulty. All miners are penalized if some miners keep a bad timestamp. 1% bad timestamps will cause a 22% increase every 100 blocks.

Past coin holders are protected against coin dilution.

If miners have an error of either exactly +20 or -20 seconds, then 25% of the time a forward stamp will be followed by a backward stamp. 1.24% of solves are within this 40 second window when T=240. 0.25 of 0.0125 = 0.3% of the time this will occur by accident. So everyone having a ~20 second error is 1/3 as bad as 1% of miners having a 240 second error. These numbers are approximate since the statistics are complicated, but this shows the things that have to be considered.

A problem with the above is that a malicious actor with only 1% hashrate can make your difficulty jump 10x once every 100 blocks.
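The 10x-then-decay penalty roughly cancels itself over the 20-block window, which can be checked with one line (my arithmetic, not from the post):

```python
# Arithmetic check (mine) of the penalty above: a 10x jump followed by an
# 11% reduction per block for 20 blocks lands close to where it started.
factor = 10 * (1 - 0.11) ** 20
# factor ≈ 0.97, so the 20-block penalty window roughly cancels the 10x jump.
```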

It appears any miner with 51% can cause difficulty to lower forever. Median and average do not affect this. 1/2 the time he will acquire the median, so half the time he will lower it to 1/6 D, averaging 1/3 D every 2N. So D=0 in 1 day. If you cut the high 6 out of 42, as well as the low 6, the result is the same. Cutting only the high 6 will cause difficulty to rise forever if miners are honest. Again, 51% = truth. 51% = owner of clock. There is no solution except for the coin to self-destruct until miners are honest.

If I am correct, big miners must already know this, so they must be doing it only some of the time because they do not want it to cause too much trouble and trigger a hard fork.


## Wednesday, June 21, 2017

### Zawy v2 difficulty algorithm (dynamic averaging window)

========

```
# Limit timestamps to within 6*TargetInterval of the previous timestamp.
if ( this_timestamp - last_timestamp > 6*TargetInterval ) {
    this_timestamp = last_timestamp + 6*TargetInterval;
}
if ( this_timestamp - last_timestamp < -6*TargetInterval ) {
    this_timestamp = last_timestamp - 6*TargetInterval;
}

# next check prevents an artificially low timestamp from affecting it
if ( current minus last timestamp is not negative AND N >= 19 ) {
    # if we just saw 19 blocks when we expected 12, go to N=19
    # will trigger on average once per 50 blocks by accident
    if ( average(last 19 solvetimes) < TargetInterval/1.66 ) { N=19; wait=N; i=0; }
}

# next check prevents the 8th solvetime from being artificially large
if ( none of past 7 solvetimes were negative AND N >= 6 ) {
    # if we just saw 6 blocks when we expected 12, go to N=6
    if ( average(last 6 solvetimes) > TargetInterval/0.50 ) { N=6; wait=N; i=0; }
}

# If we saw 5 blocks when we expected 1, go to N=5. This needs to be last.
# Will trigger about once per 250 blocks by accident. Detects >2x hashrate quickly.
# It is one-sided (for a rise but not a fall in hashrate) so it may slow coin release a little.
if ( none of past 5 timestamps are negative ) {
    if ( sum(last 5 solvetimes) / TargetInterval < 1 ) { N=5; wait=N; i=0; }
}

# Give outliers a chance to get out of the new avg range assigned above before
# letting N increase, but it did not seem to have a large effect. Debating it.
if ( wait > 0 ) { wait=wait-1; }
else { N=N+1; }

Next_D = avg(past N Ds) * TargetInterval / avg(past N solvetimes);

# Testing indicated that allowing increases above (1+2.5/Nstart) and decreases
# below (1-2.5/Nstart) did not help, where Nstart = minimum N. For Nstart=5
# these limits are 1.5 and 0.50.
if ( Next_D > (1+2.5/Nstart) * avg(past N Ds) ) { Next_D = (1+2.5/Nstart) * avg(past N Ds); }
if ( Next_D < (1-2.5/Nstart) * avg(past N Ds) ) { Next_D = (1-2.5/Nstart) * avg(past N Ds); }
```

==============
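The final Next_D step above can be sketched as a minimal runnable function (my own Python translation, not the coin's code), assuming the caller has already chosen N via the event checks:

```python
# Sketch of the core update: simple average with the (1 +/- 2.5/Nstart) clamps.
def next_difficulty(past_D, past_solvetimes, target_interval, Nstart=5):
    N = len(past_D)                      # window already selected by the checks above
    avg_D = sum(past_D) / N
    avg_st = sum(past_solvetimes) / N
    next_D = avg_D * target_interval / avg_st
    hi = (1 + 2.5 / Nstart) * avg_D      # 1.5 * avg_D when Nstart = 5
    lo = (1 - 2.5 / Nstart) * avg_D      # 0.5 * avg_D when Nstart = 5
    return min(max(next_D, lo), hi)
```

For example, if the last 5 blocks averaged half the target solvetime, the raw answer is 2x the average difficulty, which the clamp cuts to 1.5x.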

I have what I think could be an excellent algorithm for all coins that would be especially beneficial to small coins, but it's a little difficult to understand. I would like sumokoin to eventually employ it so that monero and cryptonote can believe in it. They are currently showing an interest [here](https://github.com/monero-project/research-lab/issues/3#issuecomment-309427606) and [here](https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a#commitcomment-22628026) but they will lose interest if we are not able to demonstrate something better on a live coin.

I want to demonstrate in a spreadsheet, today or tomorrow, that it is noticeably superior, at least when timestamps are accurate. Then I want to harden it against timestamp errors as in the previous pseudocode. The long post at cryptonote above describing "Zawy v1.0" is my recommendation until this continuous version is finished.

In general, I want it to check for sudden hashrate increases or decreases and switch to the correct N if it detects an unlikely event. I want it to do this continuously, minimizing any constants that people select. Actually, people should not select any constants in cryptocoins. For example, total coin and coin release rate should not be set by devs, but by the marketplace. Likewise, the following will let the "market" of network hashrate "bid" for the correct N averaging time.

```
# This demonstrates the idea. I tested it last year.
# This is to be substituted in place of the 3 conditionals in the previous pseudocode.
# I'll employ it in a spreadsheet to prove it's better than simple averaging.
# It will not work well until it is hardened against timestamp errors.
# The previous pseudocode is hardened against timestamp errors and
# shows generally how the following needs to be changed.

Nstart = 5    # minimum number of blocks we will check for a statistical event
Nend   = 36   # maximum number of blocks we will check for a statistical event

# Go to N averaging = Nstart = 5 only ~1% of the time by chance alone
STDEVstart = 4
# Go to N averaging = Nend = 36 about 32% of the time by chance alone
STDEVend = 1

# Now create a function from the above that determines the amount of statistical
# significance we require before we switch to an N averaging that is between
# Nstart and Nend. I'll substitute the assigned values for clarity.
function STDev(NA) = 4 - (4-1)/(36-5)*(NA-5)

N = current number of blocks used in averaging, determined in previous code

# Test all past-block ranges for a statistical event, from Nstart to N
for NA = Nstart to N {
    NE = N_Expected_blocks_in_NAs_time = sum(NA previous solvetimes) / TargetInterval
    S = STDev(NA)
    # NH: an NA above this should not have occurred in NE time within the STDev bound
    # NL: an NA below this should not have occurred in NE time within the STDev bound
    NH = NE + S*SQRT(NE)
    NL = NE - S*SQRT(NE) + 1   # the +1 was needed in testing to make it symmetrical
    if ( NA > NH or NA < NL ) {
        # Throw out the earliest blocks in case they were before the attack or were
        # outliers. The +2 prevents throwing out 2 out of 5. +3 might be better.
        N = int(2*NA/3 + 2)
        exit for loop, last NA;
    }
}
```
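To make the loop concrete, here is a small runnable sketch (my own Python, with assumed variable names) of the significance function and the NH/NL bounds:

```python
import math

NSTART, NEND = 5, 36
STDEV_START, STDEV_END = 4.0, 1.0

def stdev_required(NA: int) -> float:
    # Linear interpolation: 4 standard deviations at NA=5 down to 1 at NA=36.
    return STDEV_START - (STDEV_START - STDEV_END) / (NEND - NSTART) * (NA - NSTART)

def is_statistical_event(solvetimes, target_interval) -> bool:
    # solvetimes: the most recent NA solve times.
    NA = len(solvetimes)
    NE = sum(solvetimes) / target_interval      # blocks expected in that wall time
    S = stdev_required(NA)
    NH = NE + S * math.sqrt(NE)                 # too many blocks for the time passed
    NL = NE - S * math.sqrt(NE) + 1             # too few blocks for the time passed
    return NA > NH or NA < NL
```

With T=240, twelve 24-second solves is flagged (far more blocks than the elapsed time predicts), while twelve on-target solves is not.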

Here are the results for 10x hashrate attacks. "Blue" areas indicate blocks were obtained at a low difficulty. Black areas that are not on top of blue are blocks obtained at a costly difficulty. Everything is correct when black is on top of blue.

The "Zawy v2" algorithm behaves almost exactly like the theory-based pseudocode, which shows it is based on good theory. N=30 is not shown because the thin high-hashrate attacks are only 15 blocks wide and N=30 does not give good results on 15-block attacks. Before the attacks, you can see a gentle rise to 2x hashrate and a drop down to 1/2 hashrate. Default hashrate and difficulty = 1, so the scale is accurate.

**edit:**

- with zawy v1 N=18 with these hash attacks, 10% of blocks were gained at < 1/2 the appropriate difficulty (attacker) and 10% suffered >2x the appropriate difficulty (constant miners).

- with zawy v2 it was 4% and 7%

- since the v2 dynamic averaging period went as low as 5 and there were 6 attacks in 970 blocks, 6x5/970 = 3% makes sense, as does 6x18/970 = 11% for v1. The 7% is a little high because there is a 1440-second limit on the timestamp that prevents long solve times from affecting it (it's there to prevent timestamp manipulation from forcing difficulty low)

## Sunday, June 18, 2017

### Difficulty algorithm for everyone

- I'd like to summarize my thoughts on difficulty algorithms because people are interested in my voluminous comments at Zcash (and now Sumokoin). I have not programmed in C, but I think I have a really good understanding of the problems and the solutions. The following is what I'd like programmers to know before they work on a difficulty algorithm. Someone with >50% or at least >75% hashpower has the ability to control your timestamp (if you do not use a trusted third party like trusted nodes or a timestamp service). Being able to control the timestamp enables one to control how fast or slow the blocks are released, because control of time is control of difficulty.

These are not "timewarp" attacks as the term was first used (search for a bitcointalk post on timewarp). As with a 51% attack, a hard fork is the only solution. When there is no trusted third party, might is right. There are 6 goals we want from difficulty:

1. smooth difficulty if hashrate changes slowly. A tradeoff with 2). Large N solves this.
2. responsive to hashrate changes. A tradeoff with 1). Small N solves this.
3. protection against timestamp manipulation. This is fixed if 4) is <50%.
4. protection against high on-off-on-off hash rate changes. See 2). Even 10x can be dealt with if they are not 6).
5. blocks solved on schedule, at least on average. Same as 1) and 2) done correctly, if 6) does not occur.
6. protection against >75% hashrate miners who know how to manipulate the timestamp. There is no fix [edit: actually, see the bottom of this comment for what can help]. They can get blocks at low difficulty (breaking 3) and make blocks release too quickly (breaking 5).

A choice has to be made between smoothness and quick response, and between timestamp manipulation protection and hash attack protection. Any hash attack protection will sacrifice timestamp protection and vice versa. [edit: see the bottom of this comment for a way to get both 1) and 2), that is, a way to make difficulty very smooth but very responsive to a sudden hash rate increase.] The following is what I think is the best diff algo for basically everyone to achieve the above. All of it except the basic equation is about how to deal with not having a valid timestamp.

- next Diff = Avg past N Diff * TargetInterval / Avg past N solve times
- Limit timestamps to 6x TargetInterval of previous timestamp. This prevents timestamp manipulation from forcing a low diff. Maybe 5x would be OK and provides more protection.
- N seems reasonable from N=9 to N=100 but try to make N less than the time-length of flash-hash-attacks. They should do the attacks for N/2 and then go away for N to let others suffer the increase in difficulty and then come back to do it again. I would not be surprised if a big miner has a list of his 3 or 4 or 5 favorite small coins with similar averaging times and then just cycling through them for N/2 or less. There is no fix other than lowering N which should make it more problematic since there should be some time penalty to constantly having to start and stop mining. Any clever fix that goes by some fancy difficulty name will create a bigger problem. Smaller N is the only fix.
- If you really don't care about difficulty being either responsive or stable (you can't have both) and any old N is good then N=30 is a nod to statistics with about 1/SQRT(N) = +/-18% expected error in the average.
- Allow negative timestamps and solve times, but protect against negative or large increases in difficulty. Negative solve times in the average allow honest timestamps to counteract bad timestamps. This results in negative solve times in the "averaging". We want them, but we don't want the average to come out negative, which would make the next difficulty negative. If there are so many that the next Diff is negative, someone with very high hash power (like >60% of the total) is trying to break the algo with forward timestamps, or it is after an attack that forced a really low diff with forward timestamps and now honest miners are trying to get back on time. So limit the increase in D to something like 15%, with D=1.15*avg(D) if avg(solve times) < 0, because right before it goes negative it is trying to get a really high D by something like avg(D)/0.01. My suggestion for the limit on the per-block increase and decrease:
- max change up or down in diff/block = M^(1/N), where M is the max expected hashrate change up or down. The limit can be a lot higher or lower than this. It's really just meant to prevent a timestamp manipulation catastrophe as described below, not to improve performance.
- the classic "timewarp" attack as described in a bitcointalk post is not possible for coins with rolling averages.
- you may see discussions of allowing median instead of averaging or using a combination. I suggested it but now I think only average should be used. I previously thought median was needed to protect against "timewarp" attack, but now I do not think that is correct.
- Detailed explanation of why limits on D increase are needed: If timestamps are accurate and 50x hash power begins, the per-block increase in D is only 50^(1/N) = 1.14 = a 14% increase per block for N=30. So a limit on the increase of D of 50^(1/N) is a security precaution: if you have N=9 and someone with 10x hashpower comes in and wants to break the coin, he can get 9 blocks in a row and assign timestamps 1 second apart. With a TargetInterval of 600, D would go 600x higher after the 9th block. On his first block, before he begins the 9, he would assign a large forward time so that after his 8th block of adding only 1 second per block, difficulty has risen only something like 200%, so he will still be able to get the 9th block. Then after the 9th block it will take his 10x hashpower 600 x 2 / 10 = 120 target block times to get a 10th block. If he stops, it's 1200 block intervals (8 days) before someone gets the next one, and because of the limits on timestamps (or a limit on a D decrease), it could take almost 8 x N days to get the next N blocks. Or he could have added 0 seconds to each block and then 1 second to the 9th one so that a <= 0 line of code is not triggered, and this would cause a 1200 x 9 delay until the next block is found.
- I believe removing outliers in solve times is not good. The +/- 6x limit on assigning timestamps does this, but it is looking at the values rather than blindly removing the 6 highest and lowest.
- I have previously argued for median and no limits on the rise and fall of D. This supersedes those errors.
- Another possibility that works pretty well for N>~15 is to use a square: next D = avg D x (Target time / avg Solve time)^2. This makes D more responsive, but volatile. I think of it as "a hornet's nest", especially with low N. Like hornets, it can sacrifice itself, resulting in D that is 2x more (or 50% less) than if you had used larger N and no square. But the averages come out OK and anyone attacking it with high hash rate will get stung. It really may not be a lot different than just going to a lower N, like nearly as low as N/2.
- There is an oscillation in averaging methods which indicates there might be something better, but after a lot of work, I can't find anything.
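A quick check (my own arithmetic) of the M^(1/N) per-block limit from the list above, using the M=50, N=30 example:

```python
# Per-block difficulty change limit M^(1/N): a 50x expected hashrate swing
# spread over an N=30 averaging window is about a 14% change per block.
M, N = 50, 30
per_block_limit = M ** (1 / N)
print(per_block_limit)  # roughly 1.14
```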

Addendum: Zawy v2.0

All the above can be called Zawy v1.0. Adding the following can be called Zawy v2.0.

Achieving both goals 1 and 2

Achieving goal 2, protection against hash attacks, always reduces goals 3 and 6, protection against timestamp manipulation.

(edit: see later posts for an implementation of this)

To make diff smooth but very responsive (solving both goals 1 and 2), here's an advanced method: use N=60 or something, but keep checking the most recent N=12, and if it changes so much that the change had only about a 1% chance of occurring by accident (there were <=4 or >=20 blocks when 12 were expected), then switch to N=12 and keep N=12 for 12 blocks to let any outliers fall out of the average, then let N slowly rise back to N=60. This means that if 4 occur, diff drops from 100 to 33 in 1 block. If 20 occur, it rises to 100 x 20/12 = 166 in 1 block. This means the M^(1/N) limit needs to be changed to allow 1.66x on the rise and 0.33x on the fall. A drop to 1/3 in 1 block is frightening and it will occur by accident, but the statistics are good. Changing it to 1/1.66 = 0.60x seems reasonable and violates symmetry in the statistics for only 1 block.

Solving problem 6
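The window switch above can be sketched as follows (my own Python, with an assumed function name): stay at N=60 unless the last 12 blocks arrived improbably fast or slow.

```python
def choose_window(last12_solvetimes, T, n_default=60, n_fast=12):
    # "4 blocks when 12 were expected" means 12 blocks took >= 3x their expected
    # wall time; "20 blocks when 12 were expected" means they took <= 0.6x of it.
    wall = sum(last12_solvetimes)
    if wall >= 3 * 12 * T or wall <= 0.6 * 12 * T:
        return n_fast      # switch to N=12 (and hold it for 12 blocks)
    return n_default
```

The hold-for-12-blocks and slow-rise-back-to-60 parts are omitted here; this only shows the trigger.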

Addressing goals 3 and 6 always conflicts with goal 2. I will try to address them as well as I can in Zawy v2, but the priority is goals 1 and 2.

If they come online with 3x hashrate (75% of the new total) and always assign the 6x timestamp limit, the difficulty will keep dropping. At 50%, the difficulty will be correct, but anything above 50% will keep it dropping. If instead you allow +5x and -20x timestamps, then they will need over 4x hashrate to make the difficulty keep dropping. If they assign -20x all the time, they can make the difficulty keep rising to, I believe, 20x their hashrate, if they want to waste that much computing power to slow your coin release (they have to stick around for as much as 20xN). My calculation indicates 5x instead of 6x with a -20x multiplier means blocks will be issued 4% too fast (5x) instead of 1.7% too fast (6x), due to the lack of symmetry. Sometimes there should be >5x and < -5x blocks, but with the >5x block, the < -5x takes some advantage and makes the difficulty a little too low.

## Tuesday, June 13, 2017

### Difficulty algorithm and time warp attacks

I spent a lot of work on trying to figure out the best difficulty algorithm for Zcash and two other coins have run into my comments. This is a summary of what I found out. It was a comment on github. Frankly, I can't believe the amount of work people have spent on designing difficulty algorithms when they should have just been using a simple average, and I'm glad to see coins are starting to use the simple average, and would like to think it is due to my work.

=======

where Zcash uses N=17. N is not too critical. Larger N is less responsive to network changes but more stable.

I think Zcash retained some semblance to Digishield v3 by retaining a limit on the increase and/or decrease. This caused oscillation problems in Digishield V1, V2, and Zcash testnet. I was arguing for them to remove those limits completely. I think they made them high enough that they are mostly irrelevant. But it's better and more scientific to just stick to the observed average. If a big pool (10x hash rate increase) jumps on for an hour and then off, the penalty of everyone having to wait 10x longer for the next 8 blocks is the best option. If a limit is placed on the difficulty rise, they get blocks cheaper for longer and too many coins will be released per day (assuming you do not make other miners pay the price for the ill-gotten gains by making the fall equally slow). The core problem is the lack of a valid timestamp. Change the word "median" in the equation to "avg" if you're not like bitcoin and Zcash and are somehow getting a known timestamp.

If you're using 150 seconds between blocks, the maximum-allowed miner-reported time between blocks should be 1/4 of bitcoin's to minimize time-warp attacks. I think Zcash did not change the max time from bitcoin (3600 seconds?) even though their blocks are coming 4x faster. Time warps seem to be a danger only when there's a potential big miner (>15% network hash rate). A big miner would want to cycle through small coins so that he has no down time while getting coins with an artificially low difficulty. It's made a little easier if they're all using N=17. Otherwise it seems only beneficial if his primary expense is computing time or electricity.

Just sticking with Zcash is probably fine, but I hate seeing people refer to Digishield because it implies limits on the rise and fall when it's the simple "scientific" average that works. Limits on the rise and fall are a political and ideological influence, wanting to prevent users from suffering long transaction wait times. But it's a violation of the science that has a cost elsewhere (either the coin is released too fast, or small miners suffer). To be more "scientific" N=30 would be good. It would be more stable but less responsive. That's better than keeping a limit on the rise but has a similar effect.

================

Your equation:

Next = (avg past 17 Diff) x 240 / (A + 1/4x(A - M))

is same as

Next = (avg past 17 Diff) x 240 / (1.25xA - 0.25xM)

Avg of past 17 or 16 Diff does not make much difference but if you're dividing by M and A of 16 then I would definitely also use 16 for Diff.
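A quick numeric check (mine) that the two forms above are algebraically identical:

```python
# A + (1/4)(A - M) expands to 1.25*A - 0.25*M for any A (average) and M (median).
A, M = 250.0, 230.0
assert A + 0.25 * (A - M) == 1.25 * A - 0.25 * M
```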

I don't like it because of the heavy reliance on the Avg, for the reasons I gave in a previous post that describes the effect of using A instead of M. I like my previous equation better (at the bottom of this post I'll explain a problem with this and give another idea):

**Next Diff = (avg last N Diff) x T / (Median of past N solve times)**

with limits:

Max timestamp = Median + (N/2+0.5+6) x T

and

Min timestamp = Median - (N/2+0.5+6) x T

The 0.5 is because the Median is used. It's negative if N is odd.

T= target time = 240 in this case.

From deep thought over many days, I came to the conclusion that the best equation when there is no timewarp attack is

Next Diff = Avg N Diff x Target / (Avg N Solve Times)

Timewarp (timestamp error) necessitates median and limits. The median is ugly, but it is as close to the avg as you can get, so I strongly favor the median. I saw how "insane" Digishield v1 and v2 were in using rise and fall limits on Diff, so I'm trusting myself more than (1.25A-0.25M). My equation with the 3/4 is wrong, but I was thinking the code was using (0.75xM + 0.25x(most recent)), which seems OK, but I still like the bold equation best.

The problem with my equation above is that it ignores a sudden increase in hashrate for 8x240 seconds and then suddenly responds. So a smart big miner will want to come in for 8 blocks, then leave for 8 or 16 blocks to go to another alt, then come back. I don't know if they actually do that. Maybe N=12 is good, but it will have a lot of accidental variation. Here's an alternative in order to be more responsive and cut timewarp profit in half,

where M and A are median and average of last N block solve times.

Bitcoin takes forever in responding to hashrate changes, so median having an 8 block delay is a lot better than bitcoin even if it's not responding fast.

So my desired T/A and your T/(1.25A-0.25M) are basically the same thing, but my timewarp argument is against these. Str4d at Zcash told me they used T/M instead of T/A because of timewarp, and I had to think about why, and came up with the explanation above. Since I can't read their code, maybe they actually use the same as you, because they have that 1/4, so maybe they really do use something close to T/A.

I think p and q = 0.5 is fine. Your current algo might be fine. But be sure to use 6xT as a limit, and allow the negatives.

An idea I have that also slows responsiveness but dulls the effect of bad timestamps is to use the same idea that "the avg of truth and lies is better than risking two lies in a row" when measuring block solve time. Instead of block solve time = (current timestamp) - (previous timestamp) an idea I am just "throwing out there" is:

block solve time = (current timestamp) - [ (Avg of past N/2 timestamps) + (N/4) x T ]

So this skeptically dulls any extreme high or low solve times. The 6x limit would be enforced on the output of this equation, not the input.
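This skeptical solve-time idea can be sketched in Python (my own code, hypothetical helper name), with the 6x limit enforced on the output as described:

```python
def skeptical_solvetime(timestamps, new_timestamp, T):
    # timestamps: the last N block timestamps, oldest first.
    N = len(timestamps)
    recent_half = timestamps[-(N // 2):]                 # most recent N/2 timestamps
    reference = sum(recent_half) / len(recent_half) + (N // 4) * T
    solvetime = new_timestamp - reference
    return max(min(solvetime, 6 * T), -6 * T)            # 6x limit on the output
```

One caveat worth noting: with perfectly spaced honest timestamps, this reference sits about half an interval past the latest block, so the result reads roughly T/2 rather than T; the constant would need tuning, but the point of the idea is the dulling of extreme highs and lows, which this preserves.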

======

After much consideration, I strongly recommend a simple average of the most recent solve times. By increasing N, you get stability at the expense of responsiveness. Digishield was using N= 17 or N=18 in a more complicated way. Last I checked, this is what Zcash was doing (N=17) but they might be doing something different. People still refer to them as using a "modified digishield v3", but it's just an average that I believe came as a result of my pestering them about it.

Next difficulty = (avg last N difficulties) x (target solve time) / (avg of past N solve times)

As a result of improper times being reported by miners, the "average" solve time is not used, but the median is. If you have accurate solve times, use the average.

I could not confirm the Zcash code was getting a good median (I could not trace the variables back to their origins), but it seemed to work.

The lack of a known time allows time-warp attacks especially when mining is concentrated. It was observed in the Zcash testnet. To help minimize this, you limit how far ahead or behind the previous time that the miner can report their time. I believe Zcash made the limit same as bitcoin, 3600 seconds, but the limit should have been 1/4 of bitcoin's since their blocks come 4x faster. 4x more blocks in that time limit means more opportunity for a time warp attack. The attack seems to apply only when mining is concentrated, having at least maybe 20% of network power. Also, it is used if electricity or computer time is their main expense. They crunch a lot to get maybe up to 1/2 the blocks in 1/2 of N, so that the calculated median is artificially old because they are reporting old times to trick the algorithm into lowering the difficulty. They continue for another 1/2 of N which is when the algo finally figures it out, thanks to the limit of how far back they can set times. Then they stop to let others suffer a high difficulty for 1 N, then they do it again. I think that's a good summary.

This is not to say a smaller N prevents or reduces a time warp attack. The median and the limit on the reported time help reduce the possibility of it occurring. I'm describing the attack in order to check for it by watching the times being reported and seeing if it makes the difficulty oscillate. You look for a series of old times being reported on the scale of 1/4 to 1 N, then stopping for a while, to be repeated later.

The problem with many difficulty algorithms is that they try to "think too much" instead of just looking at recent data, and going from there. If you try to look at a recent increase in hash rate and let the difficulty "jump ahead" of the actual recent average in a predictive manner, then it invites oscillations, both natural and intentional. A similar thing occurs if you try to limit the increase or decrease in the difficulty, like Digishield did, especially if the limits are not the same (symmetrical). Just looking at the avg of the most recent past is the most scientific method.

You may be fully aware of all the above, but I wanted to distill what I had found out in the past.

=======

The above does not explain this exactly. This post to bitcointalk is a little more clear on time warp attacks.


======

To clarify and correct another post to github:

I think N from 14 to 50 might be OK. I would select N=30 to make difficulty more predictable for everyone, and there is a statistical standard in letting 30 be a cutoff: a lot of research is immediately rejected if N < 30. Large N also makes it more obvious a big miner is present. It does not theoretically affect big-miner "cheating", because if N is small they just come back more often. They profit during N/2, then have to go away for N/2 to wait for the median to go back to normal. Since it's the avg of N difficulties, it is better for them to wait N. This is without a time-warp attack; it's just what they can do if they have a large percentage of hashrate. There is no fix. It's only profitable if they can easily switch which coin they are mining, or if computer time or electricity is their largest expense and it's OK to let their equipment go idle (unlikely).

I think 6x the target solve time, to copy bitcoin, should be the maximum difference in the timestamps: 6x4x60 = 1440 seconds. I think it is risky to adjust the median by combining it with the average, if I understand your comments.

I've said the N value is not critical, but it may interact with the 6x. I can't figure out if N=17 or N=30 is better if the limit is 6x the 4 minutes, so I'm not sure N=30 is better.

A timewarp attack can be seen if timestamps are set as far ahead in the future as possible to cause the difficulty to drop; then they mine at full power and set timestamps as low as possible. The difficulty will stay low until the median time-to-solve catches up to the forwarded time. This is why 6x loses half as many blocks to the attacker as 12x. You can see the actual hashrate of the attacker by looking at how fast block solves are coming into your personal node: if he has 2x the network hashrate, you'll see solves coming in about 2x faster than your 4 minutes.

I think other miners are paying the price when someone has a lot of hash power and cycles through several alt coins whenever he sees a low difficulty. The other miners have to suffer a higher difficulty when he leaves, and do not benefit when the difficulty is low. But it's hard to call it an attack. The big miner profits from the difficulty accidentally being low, and his actions cause the difficulty to rise and fall more, adding excess variability to solve times. If the big miner stayed around all the time, the other miners would get the same amount. So the other miners are paying the price in the sense that if they have 1% of average network hashrate, they get something like only 0.66% of the blocks if the big miner mines N/2 and then waits N to return. So he needs to find 3 small alt coins with N=17. Again, this is not related to a time warp attack.

I may have made an error in my previous post: the timewarp attack may only cause more coins to be issued than the protocol wants (not really harming other miners directly). It's an attack on the value of everyone's coins, not on other miners, unless they are combining it with cycling through other coins. It's an "attack" because it is lying about the time. It's an unsolvable deficiency in the protocol in not being able to know the real time itself; unsolvable without relying on a trusted third party, like a group of peers it trusts (a trusted consensus), which I believe is what ETH does to get an accurate timestamp.

The following is an actual timewarp attack on the Zcash testnet. The first chart is the rate at which blocks are being issued; the target is 24 per hour. You can see too many blocks were issued during the attack. The second chart shows a positive spike when a timestamp was set >2000 seconds into the future from the previous block, and a negative spike when blocks had timestamps less than 10 seconds apart. The density of the downward spikes shows they got a lot of blocks quickly. They stopped when the difficulty returned to normal.

================

Your equation:

Next = (avg past 17 Diff) x 240 / (A + 1/4x(A - M))

is same as

Next = (avg past 17 Diff) x 240 / (1.25xA - 0.25xM)

Avg of past 17 vs 16 Diff does not make much difference, but if you're dividing by an M and A taken over 16 solve times then I would definitely also average the Diff over 16.

I don't like it because of the heavy reliance on the Avg, for the reasons I gave in a previous post describing the effect of using A instead of M. I like my previous equation better (at the bottom of this post I'll explain a problem with it and give another idea):

**Next Diff = (avg last N Diff) x T / (Median of past N solve times)**

with limits:

Max timestamp = Median + (N/2 + 0.5 + 6) x T

and

Min timestamp = Median + (N/2 + 0.5 - 6) x T

The 0.5 is there because the Median is used; it's negative if N is odd.

T= target time = 240 in this case.
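A sketch of the bold rule with its median-based timestamp limits (the function names and window size are mine; I write the limits as the median plus an offset, with 6 x T of slack either way):

```python
import statistics

T = 240   # target block time (seconds)
N = 17    # window size (an assumption; the post discusses 16-17)

def next_difficulty(diffs, solve_times):
    # Next Diff = avg(last N Diff) * T / median(last N solve times)
    return statistics.mean(diffs[-N:]) * T / statistics.median(solve_times[-N:])

def timestamp_limits(timestamps):
    # The median of the last N timestamps lags the chain tip by about
    # (N/2 - 0.5) * T, so the expected next timestamp sits near
    # median + (N/2 + 0.5) * T, give or take the 6*T slack.
    med = statistics.median(timestamps[-N:])
    lo = med + (N / 2 + 0.5 - 6) * T
    hi = med + (N / 2 + 0.5 + 6) * T
    return lo, hi

stamps = [i * T for i in range(N)]            # a perfectly on-target chain
lo, hi = timestamp_limits(stamps)
nxt = next_difficulty([1000.0] * N, [T] * N)  # steady state: unchanged
```

On an on-target chain the expected next timestamp (tip + T) lands in the middle of the allowed range, and the difficulty holds steady.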

From deep thought over many days I came to the conclusion that, when there is no timewarp attack, the best is

Next Diff = Avg N Diff x Target / (Avg N SolvesTimes)

Timewarp (timestamp error) necessitates the median and the limits. The median is ugly, but it is as close to the avg as you can get, so I strongly favor the median. I saw how "insane" digishield v1 and v2 were in using rise and fall limits on Diff, so I'm trusting myself more than the (1.25A - 0.25M). My earlier equation with the 3/4 is wrong; I was thinking the code used (0.75 x M + 0.25 x (most recent)), which seems OK, but I still like the bold equation best.

The problem with my equation above is that it ignores a sudden increase in hashrate for 8 x 240 seconds and then suddenly responds. So a smart big miner will want to come in for 8 blocks, then leave for 8 or 16 blocks to go to another alt, then come back. I don't know if they actually do that. Maybe N=12 is good, but it will have a lot of accidental variation. Here's an alternative meant to be more responsive and to cut the timewarp profit in half:

**Next Diff = (avg of last N Diff) x T / (1/2 M + 1/2 A)**

where M and A are median and average of last N block solve times.
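The half-median, half-average blend can be sketched as follows (names and the outlier numbers are mine):

```python
import statistics

T = 240   # target block time (seconds)
N = 17    # window size

def next_diff_blend(diffs, solves):
    # Next Diff = avg(last N Diff) * T / (0.5*M + 0.5*A)
    m = statistics.median(solves[-N:])
    a = statistics.mean(solves[-N:])
    return statistics.mean(diffs[-N:]) * T / (0.5 * m + 0.5 * a)

# One wild outlier solve time: the blend reacts, but less sharply
# than a pure average would.
solves = [T] * (N - 1) + [10 * T]
blended = next_diff_blend([1000.0] * N, solves)
pure_avg = statistics.mean([1000.0] * N) * T / statistics.mean(solves)
```

One bad timestamp drags the pure average down hard, while the blend only takes half the hit, which is the point of mixing in the median.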

Bitcoin takes forever in responding to hashrate changes, so median having an 8 block delay is a lot better than bitcoin even if it's not responding fast.

So my desired T/A and your T / (1.25A-0.25M) are basically the same thing but my timewarp argument is against these. Str4d on Zcash told me they used T/M instead of T/A because of timewarp and I had to think about why and came up with the explanation above. Since I can't read their code, maybe they actually use the same as you because they have that 1/4, so maybe they really do use something close to T/A.

I think p and q = 0.5 is fine. Your current algo might be fine. But be sure to use 6xT as a limit, and allow the negatives.

An idea I have that also slows responsiveness but dulls the effect of bad timestamps is to apply the same "the avg of truth and lies is better than risking two lies in a row" idea to measuring block solve time. Instead of block solve time = (current timestamp) - (previous timestamp), an idea I am just "throwing out there" is:

block solve time = (current timestamp) - [ (Avg of past N/2 timestamps) + (N/4) x T ]

So this skeptically dulls any extreme high or low solve times. The 6x limit would be enforced on the output of this equation, not the input.
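A sketch of that idea (the function name and window size are mine; note the baseline is only roughly centered, so even a perfectly on-target chain yields about T/2 rather than T, which fits the deliberately skeptical dulling):

```python
import statistics

T = 240   # target block time (seconds)
N = 16    # assumed even window so N/2 and N/4 are whole

def skeptical_solve_time(prev_timestamps, new_timestamp):
    # solve time = (current timestamp) - [avg of past N/2 timestamps + (N/4)*T],
    # then the 6x limit is enforced on the *output*, not the input
    baseline = statistics.mean(prev_timestamps[-N // 2:]) + (N / 4) * T
    raw = new_timestamp - baseline
    return max(-6 * T, min(6 * T, raw))

# An honest, perfectly on-target chain:
prev = [i * T for i in range(N + 1)]          # tip at 16 * 240
st = skeptical_solve_time(prev, prev[-1] + T)
```

Because the new value is measured against an average of many past timestamps, a single lied timestamp can only move the result a fraction of the way.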

======

What I've described may not be a timewarp attack. Here is a post that says it requires ignoring the interval between two blocks, as bitcoin does, which means it depends on not using a rolling average. In their example, difficulty resets every 3 blocks. We are talking about a rolling average, so it's really different.

My description of the timewarp might be all wrong. Let's say an honest timestamp on block 1 is 0. The attacker takes ~240 seconds but assigns 1440 to block 2. Then a miner with an accurate clock takes ~240 and assigns ~480 to block 3, because it is two blocks after block 1. So the block solve times you have are 1440 and (480 - 1440) = -960, and the average of the two is 240. So maybe the M should not be used. I would just use A.

My "avg of a larger number of truth and lies is better than risking two lies in a row" is referring to a method of how to enforce the 1440 limit, or how to assign the block solve time. But because of how it works out nicely above, maybe that should not be tried either.
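The arithmetic above can be checked directly:

```python
# Honest block 1 at t=0, the attacker stamps block 2 at 1440, and an
# honest miner stamps block 3 near 480 (two real intervals after block 1).
timestamps = [0, 1440, 480]
solve_times = [b - a for a, b in zip(timestamps, timestamps[1:])]
avg = sum(solve_times) / len(solve_times)
# solve_times comes out [1440, -960], and their average recovers 240,
# but only if negative solve times are allowed into the average
```

This is why allowing negative solve times matters: clamping the -960 to zero would leave the average inflated.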

The problem I see with the median is that it does nothing for N/2 blocks and then suddenly reacts. The output I'm seeing from it is not really good. The 1.1 factor has to be 1.5 if it's 100% median. As far as I know, I was completely wrong in sending you guys down the median path. p=1.25 and q=-0.75 has to use a 0.9 correction.

time to 1 occurrence = -240*ln(rand()) is the random time of a single occurrence drawn from a Poisson process that has a mean of 240. Poisson is simply based on a standard uniform random variable, so RAND() is what should be used. The derivation is below.

To include the 1440 limit I had to create another column F with =IF(A1>1440, 1440, A1). Then the median and average columns point to column F instead of A, but I keep my measurements of the output and my charting on column A. For example, to know if the solve time is averaging 240, take an average over all of column A.
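The spreadsheet clamp can be mirrored in code (the sample solve times in column A are hypothetical):

```python
import statistics

LIMIT = 1440  # the 6*T cap

# Hypothetical column A: raw simulated solve times, one of them extreme.
col_a = [100, 300, 5000, 240, 200]

# Column F: =IF(A1 > 1440, 1440, A1)
col_f = [min(t, LIMIT) for t in col_a]

med_f = statistics.median(col_f)   # the algorithm's stats read column F
avg_a = sum(col_a) / len(col_a)    # health checks still read raw column A
```

Keeping the raw column for measurement while feeding the clamped column to the algorithm is what lets you see whether the output still averages 240.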

I'm still working on trying to get the oscillations out of these averaging methods. I'm hoping the reason they seem to oscillate with 2N intervals is that the D x T / A method assumes a linearity and symmetry by using the A, and that this is a false assumption under a Poisson distribution.

Derivation:

I was lucky to find -ln(rand())*240 on the internet and I did not see a derivation, so I'll give one here.

The Poisson distribution is based strictly on the same basics as Gaussian and binomial. Gaussian is a continuous version of binomial. Binomial is a coin toss. Poisson is Gaussian in a sort of backwards way and summed up. That is why a simple formula like this can apply. The Poisson is not esoteric, but a direct "consequence" of this simple formula,

time to 1 occurrence = -T*LN(RAND())

which is apparently all you need to know to derive all Poisson results. But I want to do it backwards from wikipedia.

From Wikipedia Poisson 1st equation

P of k occurrences, when on average there are L occurrences per some interval T: P = e^(-L) * L^k / k!

L = mean occurrences per interval = 1/240

k = observed occurrences in the interval

L can take decimal values but k must be an integer.

P is a random variable that is between 0 and 1.

This equation is also the "probability mass function".

We want random times as they vary around 240 with k=1. So the equation simplifies to

P = e^(-L) * L^1 / 1

Assume the mass-function nature of it allows me to substitute L -> L*t, and therefore

P(t) = e^(-Lt) * Lt

Now integrate P from t=0 to t=t.

With an integration table and log rules, the integral yields

Integral(P)/t = e^(-t/T)

Notice that the left side is an average P expected for a given t when k=1. Rearranging:

t= -T * ln( avg P per t)

Which is the equation I seek: RAND() appears to supply an average P per t. This makes sense but avg(P/t) is a weird kind of ratio.
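Whatever one thinks of the derivation, the formula itself is easy to check empirically (sample sizes and seed are mine):

```python
import math
import random

T = 240
random.seed(42)
samples = [-T * math.log(random.random()) for _ in range(200_000)]

mean = sum(samples) / len(samples)                         # should land near T = 240
frac_below_T = sum(t < T for t in samples) / len(samples)  # near 1 - 1/e, about 0.632
```

The mean of the samples converges on the 240-second target, and the fraction of solve times shorter than the target matches the 1 - 1/e that an exponential inter-arrival distribution predicts.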

================

The best I can find after trying many things is this:

next D = Avg of D x (T/A)^2

The cycles seem smaller and it responds to attacks a lot faster. It varies more, but it does not go as low. It is kind of a hornet I would think twice about before attacking. By not following other diff algos, attackers would not know as well what to do with it. It causes the difficulty to jump about 2x higher on occasion than without the ^2.

For constant hash rate, the frequency of seeing solve times that are 240/2 is 6x more than seeing solve times that are 240 x 2. The averages reduce this effect, but I believe this could be what causes the oscillations. I can't find a solution. I've done curve fitting to adjust each solve time before applying the average, and tried adjusting the average, but I can't do better than the simple average. I've tried about 10 other things, using adjustable constants in many different ways.
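One way to count that asymmetry empirically (my numbers, not the post's; the exact ratio depends on how the comparison is framed, e.g. counting whole tails versus comparing densities at the two points):

```python
import math
import random

T = 240
random.seed(7)
samples = [-T * math.log(random.random()) for _ in range(100_000)]

short = sum(t < T / 2 for t in samples)  # expect ~39% of blocks (1 - e^-0.5)
long = sum(t > 2 * T for t in samples)   # expect ~13.5% of blocks (e^-2)
ratio = short / long                     # roughly 3:1 by this way of counting
```

However it is counted, fast solves heavily outnumber slow ones, and that lopsidedness is what the simple average struggles to cancel out.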

N needs to be greater than the length (in blocks) of a flash attack. The attacker gets N/2 blocks for free.

Bitcoin timestamps:

Bitcoin wiki says:

A timestamp is accepted as valid if it is greater than the median timestamp of previous 11 blocks, and less than the network-adjusted time + 2 hours. "Network-adjusted time" is the median of the timestamps returned by all nodes connected to you. As a result, block timestamps are not exactly accurate, and they do not even need to be in order. Block times are accurate only to within an hour or two.

So they have something like a 6x to 12x timestamp limit. I believe the method I've described is better (and is probably more strict, so that if the same rule applies to cryptonote using BTC code, my limits will override BTC's). The reason I think it is better is that a miner manipulating the timestamp may also have better control of the nodes connected to him, and therefore of the timestamp variation they will allow. An honest miner with different node connections would not be able to reverse the bad timestamp as in my scheme.
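The wiki rule above can be sketched as a minimal check (names are mine, and the caller supplies the network-adjusted time):

```python
import statistics

TWO_HOURS = 2 * 60 * 60

def btc_timestamp_valid(new_ts, prev_timestamps, network_adjusted_time):
    # Accepted if greater than the median of the previous 11 timestamps
    # and no more than 2 hours past network-adjusted time.
    median_past = statistics.median(prev_timestamps[-11:])
    return median_past < new_ts <= network_adjusted_time + TWO_HOURS

prev = [i * 600 for i in range(11)]                     # median = 3000
ok = btc_timestamp_valid(6600, prev, 6600)              # on-time block passes
too_old = btc_timestamp_valid(2000, prev, 6600)         # below median: rejected
too_new = btc_timestamp_valid(6600 + 7201, prev, 6600)  # >2h ahead: rejected
```

Note the lower bound only has to beat the median of 11 old timestamps, which is why out-of-order timestamps still pass.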

Someone knowledgeable said "if you have an accurate timestamp, you do not need mining," and people like Szabo have referred to using block height as their clock. I think if every peer had an accurate clock, then updating a distributed database without update conflicts ("double payment") would be easy. For example, you could update the blockchain every minute, anyone wanting to make an update would have the first 15 seconds to submit changes, and if you tried a "double payment" on the same coin in that 15 seconds then both payments would be rejected. The only reason GPS and time servers are not used is that people didn't want to rely on them. But nodes could use star trackers to occasionally validate their GPS signal, revert to the star trackers to correct their CPU time if GPS time failed, and use a 10 minute block time if phone cameras and accelerometers are limited to 2.5% accuracy. Tracking the stars for economic reasons goes back as far as knowing when to plant.
