Wednesday, June 21, 2017

Zawy v2 difficulty algorithm (dynamic averaging window)



========

# Limit how far any one timestamp can be from the previous timestamp.
if ( this_timestamp - last_timestamp > 6*TargetInterval ) {
   this_timestamp = last_timestamp + 6*TargetInterval;
}
if ( this_timestamp - last_timestamp < -6*TargetInterval ) {
   this_timestamp = last_timestamp - 6*TargetInterval;
}

# Next test prevents an artificially low timestamp from triggering it.
if ( current timestamp minus last timestamp is not negative AND N >= 19 ) {
   # If we just saw 19 blocks when we expected 12, go to N=19.
   # This will trigger about once per 50 blocks by accident.
   if ( average(last 19 solvetimes) < TargetInterval/1.66 ) { N=19; wait=N; i=0; }
}

# Next test prevents the 8th solvetime from being artificially large.
if ( none of past 7 solvetimes were negative AND N >= 6 ) {
   # If we just saw 6 blocks when we expected 12, go to N=6.
   if ( average(last 6 solvetimes) > TargetInterval/0.50 ) { N=6; wait=N; i=0; }
}

# If we saw 5 blocks when we expected 1, go to N=5. This needs to be last.
# It will trigger about once per 250 blocks by accident, and it detects a >2x
# hashrate quickly. Because it is one-sided (catching a rise but not a fall in
# hashrate), it may slow the coin release rate a little.
if ( none of past 5 solvetimes are negative ) {
   if ( sum(last 5 solvetimes) / TargetInterval < 1 ) { N=5; wait=N; i=0; }
}

# Give outliers a chance to move out of the new averaging range assigned above
# before letting N increase. It did not seem to have a large effect in testing;
# still debating it.
if ( wait > 0 ) { wait = wait - 1; }
else { N = N + 1; }







# E.g., if the past N blocks averaged twice the target solvetime, difficulty halves.
Next_D = avg(past N Ds) * TargetInterval / avg(past N solvetimes);




# Testing indicated that allowing increases beyond (1 + 2.5/Nstart) and
# decreases below (1 - 2.5/Nstart) did not help, where Nstart is the minimum N.
# For Nstart=5 these limits are 1.5 and 0.50.

if ( Next_D > (1+2.5/Nstart) * avg(past N Ds) ) { Next_D = (1+2.5/Nstart) * avg(past N Ds); }
if ( Next_D < (1-2.5/Nstart) * avg(past N Ds) ) { Next_D = (1-2.5/Nstart) * avg(past N Ds); }
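
To make the above concrete, here is a minimal runnable sketch in Python. It is only my reading of the pseudocode under stated assumptions: `solvetimes` and `difficulties` are most-recent-last lists with at least 19 entries, `target_interval` is the target block time, the names are illustrative rather than from any coin's codebase, and the `i=0` bookkeeping from the original is omitted.

```
# Sketch of the hardened Zawy v2 step described above (assumptions: the caller
# clamps each new timestamp first and persists N and wait between blocks).

NSTART = 5  # minimum averaging window

def clamp_timestamp(this_ts, last_ts, target_interval):
    """Keep any one timestamp within 6*TargetInterval of the previous one."""
    delta = this_ts - last_ts
    if delta > 6 * target_interval:
        return last_ts + 6 * target_interval
    if delta < -6 * target_interval:
        return last_ts - 6 * target_interval
    return this_ts

def zawy_v2_step(solvetimes, difficulties, target_interval, N, wait):
    """Shrink N on a statistical event, otherwise grow it by 1, then set Next_D."""
    avg = lambda xs: sum(xs) / len(xs)

    # The three event tests, in the same order as the pseudocode.
    if solvetimes[-1] >= 0 and N >= 19:
        # ~19 blocks where ~12 were expected: once per ~50 blocks by accident.
        if avg(solvetimes[-19:]) < target_interval / 1.66:
            N, wait = 19, 19
    if all(st >= 0 for st in solvetimes[-7:]) and N >= 6:
        # ~6 blocks where ~12 were expected.
        if avg(solvetimes[-6:]) > target_interval / 0.50:
            N, wait = 6, 6
    if all(st >= 0 for st in solvetimes[-5:]):
        # 5 blocks where ~1 was expected; must be tested last.
        if sum(solvetimes[-5:]) / target_interval < 1:
            N, wait = 5, 5

    # Let outliers leave the window before N starts growing again.
    if wait > 0:
        wait -= 1
    else:
        N += 1
    N = min(N, len(solvetimes))  # guard for young chains (my addition)

    st_avg = max(avg(solvetimes[-N:]), 1e-9)  # crude guard against a zero sum
    next_d = avg(difficulties[-N:]) * target_interval / st_avg

    # Per-step limits of (1 +/- 2.5/Nstart) times the average difficulty.
    hi = (1 + 2.5 / NSTART) * avg(difficulties[-N:])
    lo = (1 - 2.5 / NSTART) * avg(difficulties[-N:])
    return min(max(next_d, lo), hi), N, wait
```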




==============

I have what I think could be an excellent algorithm for all coins, one that would be especially beneficial to small coins, but it's a little difficult to understand. I would like sumokoin to eventually employ it so that monero and cryptonote can believe in it. They are currently showing an interest [here](https://github.com/monero-project/research-lab/issues/3#issuecomment-309427606) and [here](https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a#commitcomment-22628026), but they will lose interest if we are not able to demonstrate something better on a live coin.




I want to demonstrate in a spreadsheet today or tomorrow that it is noticeably superior, at least when timestamps are accurate. Then I want to harden it against timestamp errors as in the previous pseudocode. The long post at cryptonote above describing "Zawy v1.0" is my recommendation until this continuous version is finished.




In general, I want it to check for sudden hashrate increases or decreases and switch to the correct N when it detects an unlikely event. I want it to do this continuously, minimizing any constants that people select. Actually, people should not select any constants in cryptocoins. For example, total coin and coin release rate should not be set by devs, but by the marketplace. Likewise, the following will let the "market" of "network hashrate" "bid" for the correct averaging window N.




```

# This demonstrates the idea. I tested it last year.
# It is to be substituted in place of the 3 conditionals in the previous pseudocode.
# I'll employ it in a spreadsheet to prove it's better than simple averaging.
# It will not work well until it is hardened against timestamp errors.
# The previous pseudocode is hardened against timestamp errors and shows
# generally how the following needs to be changed.




Nstart = 5    # minimum number of blocks we will check for a statistical event
Nend = 36     # maximum number of blocks we will check for a statistical event




# Go to N averaging = Nstart = 5 only ~1% of the time by chance alone.
STDEVstart = 4;
# Go to N averaging = Nend = 36 about 32% of the time by chance alone.
STDEVend = 1;




# Now create a function using the above that determines the amount of
# statistical significance we require before we switch to an N averaging
# window between Nstart and Nend. It interpolates linearly from STDEVstart
# at Nstart down to STDEVend at Nend.
# I'll substitute the above assigned values for clarity:
function STDev(NA) = 4 - (4-1)/(36-5)*(NA-5)




# N = current number of blocks used in averaging, determined in the previous code




# Test all past-block ranges for a statistical event, from Nstart=5 up to N.
for NA = Nstart to N {
   NE = N_Expected_blocks_in_NAs_time = sum(last NA solvetimes) / TargetInterval
   S = STDev(NA)
   # NH = an NA above this should not have occurred in NE's time within S st devs
   # NL = an NA below this should not have occurred in NE's time within S st devs
   NH = NE + S*SQRT(NE)
   NL = NE - S*SQRT(NE) + 1   # the +1 was needed in testing to make it symmetrical
   if ( NA > NH or NA < NL ) {
      # Throw out the earliest blocks in case they came before the attack or were
      # outliers. The +2 prevents throwing out 2 out of 5; +3 might be better.
      N = int(2*NA/3 + 2)
      exit the for loop; # last NA
   }
}

```
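
Again as an illustration only, here is my Python reading of the continuous detector. Assumptions mirror the pseudocode: `solvetimes` is a most-recent-last list, and the floor on `NE` is my own guard, since this un-hardened version can see negative solvetimes.

```
# Sketch of the continuous detector above (my reading, not tested code from the post).
from math import sqrt

NSTART, NEND = 5, 36
STDEV_START, STDEV_END = 4.0, 1.0

def stdev_required(NA):
    """Linear interpolation: 4 st devs at NA=5 down to 1 st dev at NA=36."""
    return STDEV_START - (STDEV_START - STDEV_END) / (NEND - NSTART) * (NA - NSTART)

def detect_event(solvetimes, target_interval, N):
    """Return a reduced averaging window N if any recent range looks unlikely."""
    for NA in range(NSTART, N + 1):
        # Expected number of blocks in the time the last NA blocks actually took.
        NE = max(sum(solvetimes[-NA:]) / target_interval, 1e-9)  # guard: see above
        S = stdev_required(NA)
        NH = NE + S * sqrt(NE)      # more blocks than this is an "event"
        NL = NE - S * sqrt(NE) + 1  # fewer blocks than this is an "event"
        if NA > NH or NA < NL:
            # Throw out the earliest blocks in case they predate the attack.
            return int(2 * NA / 3 + 2)
    return N
```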

Here are the results for 10x hashrate attacks. "Blue" areas indicate blocks that were obtained at a low difficulty. Black areas that are not on top of "blue" are blocks obtained at a costly difficulty. Everything is correct when black is on top of blue.


The "Zawy v2" algorithm is almost exactly like the theory-based pseudo-code which shows it is based on good theory. N=30 is not shown because the thin high-hash rates are only 15 blocks wide and N=30 does not give good results on N=15 attacks. Before the attacks, you can see a gentle rise to 2x hashrate and a drop down to 1/2 hashrate. Default hashrate and difficulty = 1, so the scale is accurate.


**edit:**


- With Zawy v1 (N=18) under these hash attacks, 10% of blocks were gained at < 1/2 the appropriate difficulty (attacker) and 10% suffered > 2x the appropriate difficulty (constant miners).
- With Zawy v2 it was 4% and 7%.
- Since the v2 dynamic averaging period went as low as 5 and there were 6 attacks in 970 blocks, 6x5/970 = 3% makes sense, as does 6x18/970 = 11% for v1 (see the quick check below). The 7% is a little high because there is a 1440 limit on the timestamp that prevents long solve times from affecting it (it's there to prevent timestamp manipulation from forcing difficulty low).
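
As a quick sanity check of the arithmetic in the last bullet (the same back-of-envelope numbers, not output from the simulation):

```
attacks, blocks = 6, 970
print(f"v2 estimate: {attacks * 5 / blocks:.0%}")   # ~3% of blocks at low difficulty
print(f"v1 estimate: {attacks * 18 / blocks:.0%}")  # ~11%
```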




