Sunday, June 18, 2017

Difficulty algorithm for everyone

  • I'd like to summarize my thoughts on difficulty algorithms because people are interested in my voluminous comments at Zcash (and now Sumokoin). I have not programmed in C, but I think I have a really good understanding of the problems and the solutions. The following is what I'd like programmers to know before they work on a difficulty algorithm.
    Someone with >50% (or certainly >75%) of the hashpower has the ability to control your timestamps (if you do not use a trusted third party like trusted nodes or a timestamp service). Being able to control the timestamps enables one to control how fast or slow blocks are released, because control of time is control of difficulty.
    These are not "timewarp" attacks as the term was first used (search for a bitcointalk post on timewarp). As with a 51% attack, a hard fork is the only solution. When there is no trusted third party, might is right.
    There are 6 goals we want from difficulty:
    1. smooth difficulty if hashrate changes slowly. A tradeoff with 2). Large N solves this.
    2. responsive to hashrate changes. A tradeoff with 1). Small N solves this.
    3. protection against timestamp manipulation. This is handled if the miner in 4) has <50% of the hashrate.
    4. protection against large on-off-on-off hashrate changes. See 2). Even 10x can be dealt with if the miner is not also the attacker in 6).
    5. blocks solved on schedule, at least on average. This follows from 1) and 2) done correctly, provided 6) does not occur.
    6. protection against >75% hashrate miners who know how to manipulate timestamps. There is no fix [edit: actually, see bottom of this comment for what can help]. They can get blocks at low difficulty (they break 3) and make blocks release too quickly (they break 5).
    A choice has to be made between smoothness and quick response, and between hash-attack protection and timestamp-manipulation protection. Any hash-attack protection sacrifices some timestamp protection, and vice versa.
    [edit: see bottom of this comment for a way to get both 1) and 2), that is, a way to make difficulty very smooth but very responsive to sudden hash rate increase. ]
    The following is what I think is the best diff algo for basically everyone to achieve the above. All of it except for the basic equation is about how to deal with not having a valid timestamp.
    • next Diff = Avg past N Diff * TargetInterval / Avg past N solve times
    • Limit each timestamp to within 6x TargetInterval of the previous timestamp. This prevents timestamp manipulation from forcing a low diff. Maybe 5x would be OK and would provide more protection.
    • N seems reasonable anywhere from N=9 to N=100, but try to make N less than the length of flash hash attacks. An attacker's best strategy is to mine for N/2 blocks, go away for N blocks to let others suffer the increased difficulty, and then come back and do it again. I would not be surprised if a big miner keeps a list of 3 to 5 favorite small coins with similar averaging windows and just cycles through them for N/2 blocks or less. There is no fix other than lowering N, which should make the attack more problematic since there is some time penalty to constantly starting and stopping mining. Any clever fix that goes by some fancy difficulty name will create a bigger problem. Smaller N is the only fix.
    • If you really don't care whether difficulty is responsive or stable (you can't have both) and any old N is good, then N=30 is a nod to statistics, with about 1/SQRT(N) = +/-18% expected error in the average.
    • Allow out-of-sequence timestamps and negative solve times, but protect against a negative or very large next difficulty. Negative solve times in the average allow honest timestamps to counteract bad timestamps. We want them, but we don't want the average to come out negative, which would make the next difficulty negative. If there are so many that the next Diff would be negative, either someone with very high hash power (like >60% of the total) is trying to break the algo with forward timestamps, or it is just after an attack that forced a really low diff with forward timestamps and honest miners are now trying to get back on schedule. So limit the increase in D to something like 15% with D = 1.15*avg(D) if avg(solve times) < 0, because right before the average goes negative the formula is trying to produce a very high D, something like avg(D) / 0.01. My suggestion for the limit on the per-block increase and decrease:
    • max change up or down in diff per block = M^(1/N), where M is the maximum expected hashrate change up or down. The limit can be a lot higher or lower than this. It's really just meant to prevent a timestamp-manipulation catastrophe as described below, not to improve performance. (A minimal sketch combining these rules appears after this list.)
    • the classic "timewarp" attack as described in a bitcointalk post is not possible for coins with rolling averages.
    • you may see discussions of using the median instead of the average, or a combination of the two. I suggested it but now I think only the average should be used. I previously thought the median was needed to protect against the "timewarp" attack, but now I do not think that is correct.
    • Detailed explanation of why limits on the D increase are needed: if timestamps are accurate and 50x hash power begins mining, the per-block increase in D is only 50^(1/N) = 1.14, i.e. a 14% increase per block for N=30, so a limit on the increase of 50^(1/N) is purely a security precaution. If you have N=9 and someone with 10x hashpower comes in and wants to break the coin, he can get 9 blocks in a row and assign timestamps 1 second apart. With a TargetInterval of 600 seconds, D would go about 600x higher after the 9th block. On his first block, before he begins the 9, he would assign a large forward time so that after his 8th block of adding only 1 second per block, difficulty has risen only something like 200%, so he will still be able to get the 9th block. Then after the 9th block it will take his 10x hashpower 600 x 2 / 10 = 120 target block times to get a 10th block. If he stops, it's 1200 block intervals (8 days) before someone gets the next one, and because of the limits on timestamps (or a limit on a D decrease), it could take almost 8 x N days to get the next N blocks. Or he could have added 0 seconds to each block and then 1 second to the 9th one so that a <= 0 line of code is not triggered, causing a 1200 x 9 delay until the next block is found. (These numbers are easy to check; see the short snippet after this list.)
    • I believe removing outliers in solve times is not good. The +/- 6x limit on assigning timestamps does this job, but it works by looking at the values rather than blindly removing the 6 highest and lowest.
    • I have previously argued for median and no limits on the rise and fall of D. This supersedes those errors.
    • Another possibility that works pretty well for N >~ 15 is to use a square: next D = avg D x (Target time / avg Solve time)^2. This makes D more responsive, but volatile. I think of it as "a hornet's nest," especially with low N. Like hornets, it can sacrifice itself, resulting in a D that is 2x more (or 50% less) than if you had used a larger N and no square. But the averages come out OK, and anyone attacking it with high hash rate will get stung. It really may not be a lot different from just going to a lower N, nearly as low as N/2.
    • There is an oscillation in averaging methods which indicates there might be something better, but after a lot of work, I can't find anything.
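    To make the rules above concrete, here is a minimal Python sketch of Zawy v1.0, combining the basic equation, the 6x timestamp limit, the negative-solve-time handling, and the M^(1/N) per-block limit, with the squared "hornet's nest" variant included as an optional flag. The names, the constants, and the choice to clamp against the previous block's difficulty are my own illustration of the description above, not a reference implementation.

```python
# Illustrative sketch of the Zawy v1.0 rules described above; not production code.
# Assumes `timestamps` holds the last N+1 block timestamps and `difficulties`
# the last N difficulties, most recent last.

TARGET_INTERVAL = 600                     # target seconds per block
N = 30                                    # averaging window
TIMESTAMP_LIMIT = 6 * TARGET_INTERVAL     # solve times limited to +/- 6x TargetInterval
M = 50                                    # max expected hashrate change up or down
MAX_STEP = M ** (1.0 / N)                 # per-block limit on the difficulty change

def next_difficulty(timestamps, difficulties, squared=False):
    """next D = avg(past N D) * TargetInterval / avg(past N solve times)."""
    # Solve times may be negative (out-of-sequence timestamps are allowed),
    # but each one is limited to +/- 6x TargetInterval.
    solve_times = []
    for i in range(1, len(timestamps)):
        st = timestamps[i] - timestamps[i - 1]
        solve_times.append(max(-TIMESTAMP_LIMIT, min(TIMESTAMP_LIMIT, st)))

    avg_d = sum(difficulties) / len(difficulties)
    avg_st = sum(solve_times) / len(solve_times)

    # If forward timestamps push the average solve time to <= 0, cap the rise
    # at ~15% instead of letting the formula produce a huge or negative D.
    if avg_st <= 0:
        return 1.15 * avg_d

    ratio = TARGET_INTERVAL / avg_st
    if squared:
        ratio *= ratio                    # the more responsive "hornet's nest" variant

    d = avg_d * ratio
    # Safety net: limit the change to M^(1/N) up or down per block, relative to
    # the previous block's difficulty (one possible reading of "per block").
    prev_d = difficulties[-1]
    return max(prev_d / MAX_STEP, min(prev_d * MAX_STEP, d))
```

    With squared=True the result behaves roughly like using an N about half as large, as noted in the bullet above.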
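    The arithmetic in the attack-scenario bullet is easy to verify; a couple of quick, illustrative checks:

```python
# Honest 50x hashrate jump with N=30: the per-block rise is only ~14%.
print(50 ** (1 / 30))      # ~1.139

# N=9 attack: 9 blocks with timestamps 1 second apart on a 600-second target
# give an average solve time of ~1 s, so the next difficulty jumps ~600x.
print(600 / 1)             # ~600x the previous average difficulty
```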
    Addendum: Zawy v2.0
    All of the above can be called Zawy v1.0. Adding the following can be called Zawy v2.0.
    Achieving both goals 1 and 2
    Achieving goal 2, protection against hash attacks, always reduces goals 3 and 6, protection against timestamp manipulation.
    (edit: see later posts for an implementation of this)
    To make diff smooth but very responsive (solving both goals 1 and 2), here's an advanced method: use N=60 or something, but keep checking the most recent window in which N=12 blocks were expected. If the block count in that window had only about a 1% chance of happening by luck (<=4 or >=20 blocks found where 12 were expected), switch to N=12 and keep N=12 for 12 blocks to let any outliers fall out of the average, then let N slowly rise back to N=60. This means that if 4 blocks occur, diff drops from 100 to 33 in 1 block. If 20 occur, it rises to 100 x 20/12 = 166 in 1 block. This means the M^(1/N) limit needs to be changed to allow a 1.66x rise and a 0.33x fall. A drop to 1/3 in 1 block is frightening and it will occur by accident, but the statistics are sound. Changing the fall limit to 1/1.66 = 0.60x seems reasonable and violates the symmetry in the statistics for only 1 block.
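    A minimal sketch of how that window switch could look; the names, constants, state handling, and exact recovery schedule are my own reading of the description, and the chosen N would then feed into the v1.0 formula above.

```python
# Illustrative sketch of the adaptive averaging window described above.
# `timestamps` holds block timestamps, most recent last. Start with
# current_n = 60 and blocks_since_drop = 12 so N stays at 60 until a trigger fires.

TARGET_INTERVAL = 600
N_LONG, N_SHORT = 60, 12
LOW, HIGH = 4, 20        # ~1% tails: <=4 or >=20 blocks found where 12 were expected
HOLD = 12                # keep N=12 for 12 blocks so outliers fall out of the average

def next_window(timestamps, current_n, blocks_since_drop):
    """Return (N to use for the next block, updated blocks_since_drop)."""
    # Count blocks found in the span of time where N_SHORT blocks were expected.
    window_start = timestamps[-1] - N_SHORT * TARGET_INTERVAL
    found = sum(1 for t in timestamps if t > window_start)

    if found <= LOW or found >= HIGH:
        return N_SHORT, 0                           # too unlikely by chance: drop to N=12
    if blocks_since_drop < HOLD:
        return N_SHORT, blocks_since_drop + 1       # hold N=12 for 12 blocks
    return min(current_n + 1, N_LONG), blocks_since_drop + 1   # drift back toward 60
```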
    Solving problem 6
    Addressing goals 3 and 6 always conflicts with goal 2. I will try to address them as well as I can in Zawy v2, but the priority is goals 1 and 2.
    If an attacker comes online with 3x the hashrate (75% of the new total) and always assigns timestamps at the +6x limit, the difficulty will keep dropping. At 50% of the hashrate the difficulty will be correct, but anything above 50% will keep dropping it. If instead you allow +5x and -20x timestamps, they will need over 4x the hashrate to make the difficulty keep dropping. If they assign -20x all the time they can make the difficulty keep rising, I believe up to 20x their hashrate, if they want to waste that much computing power to slow your coin release (they have to stick around for as much as 20xN blocks). My calculation indicates that +5x instead of +6x, combined with the -20x limit, means blocks will be issued 4% too fast (5x) instead of 1.7% too fast (6x), due to the lack of symmetry. Sometimes solve times should legitimately be >5x and < -5x, but with the >5x block clamped, the < -5x one takes some advantage and makes the difficulty a little bit too low.
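    A minimal sketch of the asymmetric limit discussed above; the constants and the function name are illustrative, and whether to clamp the timestamp itself or the resulting solve time is an implementation detail.

```python
TARGET_INTERVAL = 600
FORWARD_LIMIT = 5 * TARGET_INTERVAL     # a solve time may appear at most 5x too long...
BACKWARD_LIMIT = 20 * TARGET_INTERVAL   # ...but up to 20x "negative" (out of sequence)

def clamped_solve_time(prev_timestamp, timestamp):
    """Asymmetric clamp on the apparent solve time before it enters the average."""
    st = timestamp - prev_timestamp
    st = min(st, FORWARD_LIMIT)       # forward timestamps (which lower difficulty) are capped harder
    return max(st, -BACKWARD_LIMIT)   # backward timestamps may pull much harder the other way
```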
