Monday, July 10, 2017

Doing better than the simple average in cryptocoin difficulty algorithms

I am still trying to find a better method than the simple average, but I have not found one yet. I am pretty sure one exists, because a hashrate estimate based on avg(D/T) = (D1/T1 + D2/T2 + ...)/N should be better than avg(D)/avg(T) whenever the hashrate changes during the averaging period. This is because avg(D)/avg(T) throws out details that are in the data measuring hashrate. We are not really interested in avg(D) or avg(T); we are interested in avg(D/T). The avg(D/T) method keeps the per-block details, while the separate averages throw them out. You don't want to lose the details until the variable of interest has been directly measured. I learned this the hard way on an engineering project.

But avg(D/T) hardly works at all in this case. The problem is that the probability distribution of each data point D/T needs to be symmetrical on each side of the mean (above and below it), and it isn't: a few very short solvetimes produce enormous D/T values that drag the average up. I'm trying to "map" the measured D/T values based on their probability of occurrence so that they become symmetrical, then take the average, then un-map the average to get the correct avg(D/T). I've had some success, but it's not as good as the simple average, because I can't seem to get the mapping right.

If the mapping could be done correctly, another improvement becomes possible: the least-squares method of linear curve fitting could be applied to the mapped D/T values to predict where the next data point should be. All of this might give a 20% improvement over the basic average. Going further, sudden on-and-off hashing will not be detected very well by least squares, so least squares could be the default method, with a switch to a step-function curve fit when a step change is detected. I just wanted to say where I'm at and give an idea to those who might be able to go further than I have.
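
To make the comparison concrete, here is a rough Python sketch (my own illustration, not code from any coin) that simulates blocks at a constant hashrate and compares avg(D/T), avg(D)/avg(T), and one candidate mapping, the logarithm. Taking log(D/T) tames the spikes from very short solvetimes; for exponentially distributed solvetimes the average of log(D/T) overshoots log(hashrate) by Euler's constant (about 0.5772), so the sketch subtracts that before un-mapping. The function names and constants are assumptions for illustration, not part of any existing algorithm.

import math
import random

# Hypothetical sketch: D = difficulty, T = observed solvetime per block.
# At constant hashrate H the solvetimes are roughly exponential with mean D/H,
# so each D/T is a very noisy, heavily right-skewed sample of H.

def simulate_blocks(hashrate, difficulty, n, rng):
    """Return (D, T) pairs for n blocks at a constant hashrate."""
    return [(difficulty, rng.expovariate(hashrate / difficulty)) for _ in range(n)]

def est_avg_ratio(blocks):
    """avg(D/T): keeps per-block detail but is badly skewed by tiny solvetimes."""
    return sum(d / t for d, t in blocks) / len(blocks)

def est_ratio_of_avgs(blocks):
    """avg(D)/avg(T): the 'simple average' most difficulty algorithms use."""
    return sum(d for d, _ in blocks) / sum(t for _, t in blocks)

EULER_GAMMA = 0.5772156649015329  # mean of log(D/T) exceeds log(H) by this much

def est_log_mapped(blocks):
    """One candidate mapping: average log(D/T), correct for the offset, un-map."""
    mean_log = sum(math.log(d / t) for d, t in blocks) / len(blocks)
    return math.exp(mean_log - EULER_GAMMA)

if __name__ == "__main__":
    rng = random.Random(1)
    true_H = 1000.0                       # hashes per second (made up)
    blocks = simulate_blocks(true_H, difficulty=600 * true_H, n=30, rng=rng)
    print("avg(D/T):      ", est_avg_ratio(blocks))
    print("avg(D)/avg(T): ", est_ratio_of_avgs(blocks))
    print("log-mapped avg:", est_log_mapped(blocks))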

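The least-squares idea can be sketched on the same mapped values: fit a straight line through log(D/T) against block index, extrapolate one step ahead, and un-map the result. Again, this is only my guess at the direction described above, not a tested difficulty algorithm; the Euler-constant correction is tied to the assumed log mapping.

import math

EULER_GAMMA = 0.5772156649015329

def predict_next_mapped(blocks):
    """Least-squares line through log(D/T) vs. block index, extrapolated one
    block ahead and un-mapped to a hashrate estimate. Illustration only."""
    ys = [math.log(d / t) for d, t in blocks]
    xs = list(range(len(ys)))
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    next_log = intercept + slope * n          # one block past the window
    return math.exp(next_log - EULER_GAMMA)   # un-map back to a hashrate

# Made-up (difficulty, solvetime) pairs for a slowly rising hashrate:
blocks = [(600000.0, t) for t in (700.0, 650.0, 610.0, 580.0, 520.0, 490.0, 450.0, 430.0)]
print(predict_next_mapped(blocks))

The step-change idea would sit on top of this: if the residuals from the line fit look more like two flat levels than noise around a trend, the fit could switch to a step function instead, as described above.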