Tuesday, November 21, 2017

Best difficulty algorithm

I'm changing this as I find improvements. Last change: 12/1/2017.

Newest information is here:
https://github.com/zawy12/difficulty-algorithms/issues/3


This first one appears to be the best all around:
# Jacob Eliosoff's EMA (exponential moving average)
# https://en.wikipedia.org/wiki/Moving_average#Application_to_measuring_computer_performance
# ST = solvetime, T=target solvetime
# height = most recently solved block
# N=70
# MTP should not be used

max_TS = timestamp[height - 10]   # check timestamps starting 10 blocks into the past
previous_max_TS = max_TS
for (i=height - 9 to height) {
   previous_max_TS = max_TS
   max_TS = timestamp[i] if timestamp[i] > max_TS
}
ST = max_TS - previous_max_TS
ST = max(T/100,min(T*10, ST))
next_D = previous_D * ( T/ST + e^(-ST/T/N) * (1-T/ST) )
A word of caution with the above EMA: when converting between the bits field (i.e., the target) and difficulty, it is important not to make a consistently wrong "rounding" error. For example, if the previous difficulty was 100,000, nothing in the code should make it consistently +50 higher or consistently -50 lower (a 0.05% error). That would cause the EMA at N=70 to have a 3.5% error in solvetime. At 0.5% error per block there is a 35% error in the solvetimes (difficulty is about 30% too high or low); the error that develops seems to compound roughly as 1.005^70 = 41%. If half the time the conversion is +1,000 too high and the other half -1,000 too low, then it's OK; just don't be consistently wrong in the same direction. Error in the value used for e = 2.7183 does not hurt it.

In a simple moving average (SMA), a multiplicative error like this causes only a proportional error in solvetime, not a compounded error.
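
To see the compounding in the numbers above, here is the arithmetic in Python (just a check of the 41% figure, nothing more):

# a consistent +0.5% multiplicative bias per block compounds over the N=70 window
bias = 1.005
print(round((bias ** 70 - 1) * 100, 1))   # ~41.8%, vs 0.5% * 70 = 35% if it did not compound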

The T/ST ratio appears in two places in the equation above, and it must be the same in both places. I don't know how it would be coded to give something different, but it's something to be aware of.
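
For anyone who wants to test the above outside of coin code, here is a rough Python sketch. The function name, the list-based inputs, and the use of math.exp are my own choices for illustration, not anyone's production code:

import math

def ema_next_difficulty(timestamps, previous_D, T=600, N=70):
    # timestamps: the last 11 block timestamps, oldest first (blocks height-10 .. height)
    # previous_D: difficulty of the most recently solved block
    prev_max = max(timestamps[:-1])           # highest timestamp among the 10 prior blocks
    cur_max = max(prev_max, timestamps[-1])   # highest timestamp including the newest block
    ST = cur_max - prev_max                   # solvetime, protected against out-of-order timestamps
    ST = min(10 * T, max(T / 100, ST))        # clamp ST to [T/100, 10*T]
    return previous_D * (T / ST + math.exp(-ST / T / N) * (1 - T / ST))

For example, ema_next_difficulty(last_11_timestamps, 100000) returns the next difficulty from a previous difficulty of 100,000.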

==========
The following attempts to make it smoother while retaining some ability to respond faster to big hashrate changes.
# Dynamic EMA difficulty algo (Jacob Eliosoff's EMA and Zawy's adjustable window). 
# Bitcoin cash dev(?) came up with the median of three to reduce timestamp errors.
# For EMA origins see 
# https://en.wikipedia.org/wiki/Moving_average#Application_to_measuring_computer_performance
# "Dynamic" means it triggers to a faster-responding value for N if a substantial change in hashrate
# is detected. N then increases from that event back to Nmax.

Nmax=100 # max EMA window
Nmin=25 # min EMA window
A=10, B=2, C=0.37  # alternatively, A,B,C = 20, 1.65, 0.45

# TS=timestamp, T=target solvetime, i.e. 600 seconds
# Find the most recent unusual 20-block event
unusual_event = Nmax   # default if no unusual event is found in the window
for (i=height-Nmax to height) {  # height=current block index
    if ( (median(TS[i],TS[i-1],TS[i-2]) - median(TS[i-20],TS[i-21],TS[i-22]))/T/A  > B 
           or  
         (median(TS[i],TS[i-1],TS[i-2]) - median(TS[i-20],TS[i-21],TS[i-22]))/T/A  < C  ) 
      {   unusual_event=height - i + Nmin   } 
}
N = min(Nmax, unusual_event)

# now do the EMA shown above with this N 
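
A rough Python sketch of the window selection above (the function name and parameters are my own choices; it assumes TS is a list of timestamps indexed by block height and the chain already has at least Nmax+22 blocks):

from statistics import median

def dynamic_ema_window(TS, height, T=600, Nmax=100, Nmin=25, A=10, B=2, C=0.37):
    # Scan the last Nmax blocks for the most recent "unusual" 20-block event.
    unusual_event = Nmax   # default: no unusual event found, use the full window
    for i in range(height - Nmax, height + 1):
        recent = median(TS[i-2:i+1])    # median of the 3 timestamps ending at block i
        older  = median(TS[i-22:i-19])  # median of the 3 timestamps 20 blocks earlier
        ratio = (recent - older) / T / A
        if ratio > B or ratio < C:
            unusual_event = height - i + Nmin
    return min(Nmax, unusual_event)

The returned N is then used in the EMA shown above.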

Here's another algorithm that seems to be about as good as the EMA:
#  Weighted-Weighted Harmonic Mean (WWHM) difficulty algorithm
# Original idea from Tom Harding (dgenr8), called "WT-144"
# No limits, filtering, or tempering should be attempted.
# MTP should not be used.

# set constants
# N=60 # See Masari coin for live data with N=60
# T=600 # coin's Target solvetime. If this changes, nothing else needs to be changed.
# adjust=0.99 # 0.98 for N=30, 0.99 for N=60
#  k = (N+1)/2 *adjust * T # there is not a missing N. 
# height = most recently solved block

# algorithm
d=0, t=0, j=0
previous_max=timestamp[height - N] 
for ( i = height-N+1; i < height+1; i++) {  # (N most recent blocks)
    max_timestamp=max(timestamp[i], previous_max)
    solvetime = max_timestamp - previous_max
    solvetime=1 if solvetime < 1
   # for N=60, 10*T solvetimes drives difficulty too far down, so:
    solvetime = 10*T if solvetime > 10*T 
    previous_max=max_timestamp
    j++
    t +=  solvetime * j 
    d += D[i] # sum the difficulties; this is how WWHM differs from Tom's WT.
}
t=T*N if t < T*N  # in case of startup weirdness, keep t reasonable
next_D = d * k / t 

# limits on next_D do not need to be used because of the above solvetime restrictions
# but if you still want limits on D's change per block & expect max 5x hash rate
# changes or want to replace the solvetime restrictions, use
# limit = 5^(3/N)
# next_D = previous_D*limit if next_D > previous_D*limit
# next_D = previous_D/limit if next_D < previous_D/limit
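
A rough Python sketch of the WWHM loop above (the function name and list inputs are my own choices for illustration; no limits, filtering, or tempering, per the notes above):

def wwhm_next_difficulty(timestamps, difficulties, T=600, N=60, adjust=0.99):
    # timestamps, difficulties: lists indexed by block height; the last entry is the newest block
    height = len(timestamps) - 1
    k = (N + 1) / 2 * adjust * T
    d = 0.0        # sum of the last N difficulties
    t = 0.0        # linearly weighted sum of the last N solvetimes (newest weighted most)
    previous_max = timestamps[height - N]
    j = 0
    for i in range(height - N + 1, height + 1):   # the N most recent blocks
        max_timestamp = max(timestamps[i], previous_max)   # handle out-of-order timestamps
        solvetime = max(1, min(10 * T, max_timestamp - previous_max))
        previous_max = max_timestamp
        j += 1
        t += solvetime * j
        d += difficulties[i]
    t = max(t, T * N)   # in case of startup weirdness, keep t reasonable
    return d * k / t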


Monday, November 13, 2017

Maximizing options as the basis of memory-less intelligence

There seems to be some big news in A.I. and cosmology. To give an example of how far-reaching this idea is, view walking upright and the ability to use our hands as something that maximizes our options rather than something that gives us more power in any other sense. This simple algorithm can successfully play games without any training at all, other than defining "maximize future options" for a given set of rules.

I am far from certain his general view is correct (or even expressed clearly enough to criticize). I still can't view it as any more than a method of problem solving, while his claims are much greater than that, but I can't exactly understand his claims. It sounds like he is saying things appear intelligent because nature somehow keeps options open.


Ted Talk
https://www.ted.com/talks/alex_wissner_gross_a_new_equation_for_intelligence

General idea:
https://physics.aps.org/articles/v6/46

How it's good at playing Atari games without training:
http://entropicai.blogspot.com/2017/06/solved-atari-games.html#more

On freedom in society making us more powerful:
https://www.edge.org/response-detail/26181

Basic physics of causal entropy and intelligence
http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf

How it can predict the cosmological constant by following the anthropic principle:
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-12353.pdf