Wednesday, July 19, 2017

A P2P cryptocurrency to replace FB, Amazon, Fiat, and Bitcoin.

Posted to HUSH slack. A prelude to this

Here's an idea for a cryptocoin to build upon the timestamp idea I posted a few days ago (again, that does not necessarily use the stars).

People get more coin by having more "friends" (actually, people you know to be distinct individuals). It might be a slightly exponential function to discourage multiple identities. Your individual coin value is worth more to your "local" friends than to "distant" friends. The distance is shorter if you have a larger number of parallel connections through unique routes. A coin between A and D when they are connected through friends like A->B->C->D and A->E->F->D is worth more than if the E in the 2nd route is B or C. But if E is not there (A->F->D) then the distance is shorter. More coin is generated as the network grows. Each transaction is recorded, stored, timestamped, and signed by you and your friends and maybe your friends' friends. Maybe they are the only ones who can see it unencrypted, or you get the choice of a privacy level. A higher privacy requirement means people who do not actually know you will trust your coin less. Maybe password recovery and "2-factor" security can be implemented by closest friends. Each transaction has a description of the item bought/sold so that the network can be searched for products. There is also a review and rating field for both buyer and seller. For every positive review you give, you must give 1 negative review: you can't give everyone 5 stars like on ebay, or like high-ranking reviewers on Amazon (positive reviewers get better ranking based on people liking them more than on the reviews being honest). This is a P2P trust system, but there must be a way to do it so that it is not easily tricked, which is the usual complaint, and there is a privacy issue. But look at the benefits. Truly P2P. Since it does not use a single blockchain it is infinitely faster and infinitely more secure than the bitcoin blockchain. I know nothing about programming a blockchain, let alone understanding it if I created a clone. But I could program this. And if I can program it, then it is secure and definitive enough to be hard-coded by someone more clever and to need changing only as fast as the underlying crypto standards change (about once every 2 decades?)
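Here is a toy Python sketch of one possible reading of the "distance through unique routes" idea above. The scoring function is entirely my own invention, just to make the example concrete: closeness grows with the number of friend-routes that share no intermediate person and shrinks with route length.

def closeness(routes):
    # routes: each entry lists the intermediate friends on one known path A -> ... -> D
    disjoint, used = 0, set()
    for route in sorted(routes, key=len):
        if used.isdisjoint(route):       # count routes that share no intermediaries
            disjoint += 1
            used.update(route)
    avg_len = sum(len(r) + 1 for r in routes) / len(routes)   # average hops per route
    return disjoint / avg_len

print(closeness([["B", "C"], ["E", "F"]]))  # A->B->C->D and A->E->F->D: two disjoint routes
print(closeness([["B", "C"], ["B", "F"]]))  # second route reuses B: lower score
print(closeness([["B", "C"], ["F"]]))       # A->F->D is shorter: higher score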

Obviously the intent is to replace fiat, amazon, and ebay, but it should also replace FB. A transaction could be a payment you make to friends if you want them to look at a photo. The photo would be part of the transaction data. Since only you and your friends store the data, there are no transaction fees other than the cost of your computing devices. Your friends have to like it in order for you to get your money back. LOL, right? But it's definitely needed. We need to step back and be able to generalize the concept of reviews, likes, votes, and products into the concept of a coin. You have a limited amount dictated by the size of the network. The network of friends decides how much you get. They decide if you should get more or less relative power than other friends.

It would not require trust in the way you're thinking. Your reputation via the history of transactions would enable people to trust you. It's like a brand name, another reason for having only 1 identity. Encouraging 1 identity is key to prevent people from creating false identities with a bot in order to get more coin. The trick and difficulty is in preventing false identities that could be used to scam the community.

Everyone should have a motivation to link to only real, known friends. That's the trick and difficulty. I'm using "friend" very loosely. It just needs to be a known person. Like me and you could link to David Mercer and Zookoo, but we can't vouch for each other. That's because David and Zookoo have built up more real social credibility through many years and good work. They have sacrificed some privacy in order to get it. Satoshi could get enormous real credibility through various provable verifications and not even give up privacy, so it's not a given that privacy must be sacrificed. The system should be designed, if possible, not to give an advantage to people just because they are taking a risk with their personal safety.

The system should enable individuals to be safer, stronger, etc. while at the same time advancing those who advance the system. So those who help others the most are helped by others the most. "Virtuous feedback". This is evolution, except it should not be forgotten that "help others the most" means "help 2 others who have 4 times the wealth to pay you instead of 4 others with nominal wealth". So it's not necessarily charitably socialistic like people often want for potentially very good reasons, but potentially brutally capitalistic, like evolution.

It does not have to be a social network, but it does seem likeable, social people would immediately get more wealth. It's a transaction + reputation + existence network. Your coin quantity is based on reviews others give you for past transactions (social or financial) plus the mere fact that you were able to engage in economic or social activity with others (a measure of the probability of your existence). There have been coins based on trust networks but I have not looked into them. It's just the only way I can think of to solve the big issues. If the algorithm can be done in a simple way, then it's evidence to me that it is the correct way to go. Coins give legal control of other people's time and assets. If you and I are not popular in at least a business sense where people give real money instead of "smiles" and "likes" like your brother, why should society relinquish coin (control) to us? The "smiles" might be in a different category than the coin. I mean you may not be able to buy and sell likes like coin. Likes might need to be like "votes". You would get so many "likes" per day to "vote" on your friends, rather than my previous description of people needing to be "liked" in order to give likes, which is just a constant-quantity coin. Or maybe likes and coin could each be both: everyone gets so many likes and coins per day, but they are also able to buy/sell/accumulate them. I have not searched for and thought through a theoretical foundation for determining which of these options is the best. Another idea is that everyone would issue their own coin via promises. This is how most money is created. Coin implies a tangible asset with inherent value, but paper currency is usually a debt instrument. "I will buy X from you with a promise to pay you back with Y." Y is a standard measure of value like 1 hour of a laborer's time plus a basket of commodities. Government issues fiat with the promise that it buys you the time and effort of its taxpayers, because it demands taxes be paid in that fiat. This is called modern monetary theory.

So China sells us stuff for dollars, and those dollars give China control of U.S. taxpayers, provided our government keeps its implicit promise to not inflate the fiat to an unexpectedly low value too quickly, which would be a default on its debt. So your "financially popular" existence that is proven by past transactions of fulfilling your debt promises gives you the ability to make larger and larger debt promises. How or if social likes/votes should interact with that I do not yet know. But I believe it should be like democratic capitalism. The sole purpose of votes is to prevent the concentration of wealth, distributing power more evenly. This made commodity prices lower and fed more mouths, and that enabled big armies, so it overthrew kings, lords, and religions. Then machines enabled a small educated European and then U.S. population to gain control of the world.

Saturday, July 15, 2017

Best difficulty algorithm: Zawy v1b

# Zawy v1b difficulty algorithm 
# Based on next_diff=average(prev N diff) * TargetInterval / average(prev N solvetimes)
# Thanks to Karbowanec and Sumokoin for supporting, refining, testing, discussing, and using.
# Dinastycoin may be the 3rd coin to use it, seeking protection that the Cryptonote algo was not providing.
# Original impetus and discussion was at Zcash's modification of Digishield v3. The median method
# Zcash uses should be less accurate and should not be needed for timestamp error protection.
# Wider allowable limit for difficulty per block change provides more protection for small coins.
# Miners should be encouraged to keep accurate timestamps to help negate the effect of attacks.
# Large timestamp limits allow a quick return of D after a hash attack, but limits are needed to prevent timestamp manipulation.
# (1+0.693/N) keeps the avg solve time at TargetInterval.
# Low N has better response to short attacks, but wider variation in solvetimes. 
# Sudden large 5x on-off hashrate changes with N=11 sometimes have 30x delays versus 
# 20x delays with N=17. But N=11 may lose only 20 blocks in 5 attacks versus 30 w/ N=17.
# For more info: 
# https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a
#
# D = difficulty, T = TargetInterval, TS = timestamp, TSL = timestamp limit

N=17;  # can possibly range from N=4 to N>30.  N=17 seems to be a good idea.
TSL = (N>10) ? 10 : N; # stops miner w/ 50% from lowering D >25% w/ forward TS's.
current_TS=previous_TS + TSL*T if current_TS > previous_TS + TSL*T;
current_TS=previous_TS - (TSL-1)*T if current_TS < previous_TS - (TSL-1)*T;
next_D = sum(last N Ds) * T / [max(last N TSs) - min(last N TSs)] / (1+0.693/N);
next_D = previous_D*1.2 if next_D < 0; 
next_D = 2*previous_D  if next_D/previous_D > 2;
next_D = 0.5*previous_D  if next_D/previous_D < 0.5;
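For anyone who wants to experiment outside a coin's codebase, here is a minimal Python sketch of the same rule (the function name and list-based inputs are mine, not part of any coin's code):

def zawy_v1b_next_difficulty(last_diffs, last_ts, T=240, N=17):
    # last_diffs, last_ts: the last N difficulties and timestamps (seconds), oldest first.
    assert len(last_diffs) == N and len(last_ts) == N
    TSL = 10 if N > 10 else N                    # timestamp limit in multiples of T
    ts = list(last_ts)
    # Clamp the newest timestamp relative to the previous one.
    ts[-1] = min(ts[-1], ts[-2] + TSL * T)
    ts[-1] = max(ts[-1], ts[-2] - (TSL - 1) * T)
    span = max(ts) - min(ts)
    if span <= 0:                                # fail-safe: avoid divide-by-zero
        return 1.2 * last_diffs[-1]
    next_D = sum(last_diffs) * T / span / (1 + 0.693 / N)
    next_D = min(next_D, 2.0 * last_diffs[-1])   # limit rise per block
    next_D = max(next_D, 0.5 * last_diffs[-1])   # limit fall per block
    return next_D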

Monday, July 10, 2017

Doing better than the simple average in cryptocoin difficulty algorithms

I am still trying to find a better method than the simple avg, but I have not found one yet. I am pretty sure there is one because estimates of hashrate based on avg(D1/T1 + D2/T2 + ....) should be better than avg(D)/avg(T) if there is any change in the hashrate during the averaging period. This is because avg(D)/avg(T) throws out details that exist in the data measuring hashrate. We are not exactly interested in avg(D) or avg(T); we are interested in avg(D/T). The avg(D/T) method does not throw out details. Statistical measures throw out details. You don't want to lose the details until the variable of interest has been directly measured. I learned this the hard way on an engineering project. But avg(D/T) hardly works at all in this case. The problem is that the probability distribution of each data point D/T needs to be symmetrical on each side of the mean (above and below it). I'm trying to "map" the measured D/T values based on their probability of occurrence so that they become symmetrical, then take the average, then un-map the average to get the correct avg(D/T). I've had some success, but it's not as good as the simple average. This is because I can't seem to map it correctly. If I could do it, then another improvement becomes possible: the least-squares method of linear curve fitting could be used on the mapped D/T values to predict where the next data point should be. All this might result in a 20% improvement over the basic average. Going further, sudden on and off hashing will not be detected very well by least squares. Least squares could be the default method, but it could switch to a step-function curve-fit if a step-change is detected. I just wanted to say where I'm at and give an idea to those who might be able to go further than I've been able to.
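A quick simulation (my own, not from the post) shows the size of the problem with a naive avg(D/T): with exponentially distributed solvetimes, the individual D/T values are so right-skewed that their plain average wildly overestimates hashrate, while avg(D)/avg(T) stays close to the truth.

import random

random.seed(1)
D, T = 1000.0, 240.0          # constant difficulty, 240 s target
true_hashrate = D / T
N, trials = 17, 10000
ratio_of_avgs, avg_of_ratios = [], []
for _ in range(trials):
    solvetimes = [random.expovariate(1.0 / T) for _ in range(N)]
    ratio_of_avgs.append(N * D / sum(solvetimes))             # avg(D)/avg(T)
    avg_of_ratios.append(sum(D / t for t in solvetimes) / N)  # naive avg(D/T)
print("true hashrate:     ", true_hashrate)
print("avg(D)/avg(T) est.:", sum(ratio_of_avgs) / trials)
print("avg(D/T) est.:     ", sum(avg_of_ratios) / trials)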

Numenta's CLA needs 6 layers to model objects

posted to numenta forum
====
Back when there were only 2 white papers and a few videos I became interested in the HTM and saw a video of a 2D helicopter being detected and wondered about the relation between the layers they used and the ability to recognize objects. I remembered 6 equations with 6 unknowns (the degrees of freedom) are required to solve the dynamics of 3D rotation and translation. The layers of the helicopter HTM matched what it was able to detect if they were unknowingly being used in a subtle 2-equations and 2-unknowns methodology. Of course this begs the question "Are the 6 layers in the cortex required to see the 3D world?" Numenta's view of the cortical column implies that the 6 layers have nothing to do with this but I would like to question that view. Jeff has also warned against pursuing the reverse black hole question no one has ever escaped: "Is the 3D world the result of a 6-layered brain?" But an understanding of the relation between mass and space-time prevents me from abandoning the reverse question. More importantly, physics has an elephant in the room that is rarely acknowledged and questioned: the only integers that appear in physics are the result of 3D spacetime, and Feynman states that no fundamental aspect of QED requires an extension beyond 1D. QED is sort of the core of all physics except for gravity and nuclear stuff. An expert in the area informed me that spin is what creates 3D space, so my line of questioning is suspect. But my view is that we may have invented spin to maintain the view that objects are independent of our perceptions. I admit I am immediately deep in a recursive black hole: the 6 layers are a mass of neurons that I'm proposing we can see only because we have the 6 layers. BTW, if we had 10 layers to support the perception of 4D objects in 4D space then I believe all velocities would be static positions and all accelerations would be velocities. Instead of E + mc^2 = 0 we would have E + mc^3 = 0 (now really getting side-tracked on the physics: by keeping relativity units correct there is a missing negative in some equations. Another example is F + ma = 0 where the "F" is more correctly defined as the reactive force of the object, which is in the opposite direction of the "a". This comes from meters = i*c*seconds, which comes from Einstein's "Relativity" appendix 2, which he stated allows use of Euclidean instead of Minkowski space-time, which is in keeping with the Occam's razor requirement.)

What I'm suggesting is falsifiable. Others posting here will know if it takes 6 layers to fully recognize objects in 4D space-time. The degrees of freedom are N translational plus N(N-1)/2 rotational. I tried testing the theory via observation and thought of ants. It seems to be supported there: their eyes, which need to detect only 2D "shadows and light" without rotation, have roughly two layers. And yet their feelers and front legs, having to deal with 3D objects in 3D space, have 6 layers. There's a great extension to this observation: wasps are the closest cousins to the ants and have 6 layers for their eyes.
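For reference, the degree-of-freedom count that the layer numbers above are being matched against (a small sketch of my own):

def dof(N):
    # rigid motion in N spatial dimensions: N translations + N*(N-1)/2 rotations
    return N + N * (N - 1) // 2

for N in (2, 3, 4, 5):
    print(N, "D:", dof(N))   # 2D: 3, 3D: 6, 4D: 10, 5D: 15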

I posted this question nearly a decade ago in the old forum, but I'll ask again. Is a 6 layer HTM required for fully characterizing 3D objects in 4D space-time?
=====
I think a single layer would require a lot more new training on every object. For example, it sees a circle moving about and learns its behavior. Then it turns sideways and turns out to be a cylinder, and then it starts rotating, so training has to start over. I don't think it could conceive very well "this is the same object" and/or generalize the lessons learned on past objects to future objects. It just seems like it would have difficulty understanding objects like we do. I believe 6 layers would be able to perceive the laws of dynamics but 1 layer would not. These six layers are not an HTM but the foundation of a single cortical column. Each CLA layer of the HTM would require the 6 layers. So the CLA would need to be redone if you want it to think like mammals and see like wasps. The motor-control layer (5th layer of cortex) may also serve part of this "inherent object modelling", not just motor control. The motor control part might be crucial to developing the concept of inertia (mass). Mass is another variable ("dimension") which implies 7 layers should be present. To get out of that mathematical corner, I have to conjecture mass is something special in the modelling like "the higher dimensions that 6 layers can't model and that have permanence".

I do not mean to say that 6 layers is necessarily inherently needed in A.I. to be superior to humans even in the realm of understanding physics, but that it is needed to think more directly like animals. But if 6 layers per HTM layer is actually needed for a higher intelligence, then 10 layers to do 4D space should be even more powerful. 15 layers would be needed for 5D. I do not accept the conjecture that objective reality, if there is one, depends on a specific integer of spatial dimensions like "3".

The visual cortex by itself with its 6 layers does not seem to have any concept of objects, but I think the 6 layers are still needed for encoding the information so that the concept of the objects is still extractable by the higher levels in the "HTM" of the brain (e.g. frontal lobes). But the concept of an object seems to be possible in the 6 layers just "behind" the eyes of flying insects: wasps certainly have a better concept of the object nature of people than ants, judging by the way they identify and attack. Ants are virtually blind to what people are, except for detecting skin and biting.

Saturday, July 8, 2017

Stars as cryptocoin oracles: posts to HUSH cryptocoin slack

Note: ethereum time syncs with pool.ntp.org:123. Nodes (mining or not) must have an accurate time to sync with network. Miners need accurate time so later blocks will build upon theirs. But there is no distinct rule on timestamps in ETH except that it must be after previous timestamp.

pools with >51% can get all the coins they want from small alt coins in a few hours, dropping the difficulty at the rate of next D = previous avg D x [1/(1+M/N)]^(2X-1), where X is the attacker's fraction of the hash power, N is the number of blocks in the rolling average, and M is the coin's limit on how far the timestamp can be forwarded (in multiples of the target interval). If GPS isn't good enough, the only solution I can think of is to tie miners and/or nodes to the stars with an app on their smartphone to get a periodic observation of the stars to calibrate their clock. But then it begs the question (via the BTC white paper) of why mining would still be needed.
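To get a feel for the formula, here is a small sketch (mine; the example numbers are illustrative, not from the post) of how fast difficulty falls for a forward-stamping pool:

def drop_factor(X, M, N):
    # per-block multiplier on difficulty: [1/(1+M/N)]^(2X-1)
    return (1.0 / (1.0 + M / N)) ** (2.0 * X - 1.0)

f = drop_factor(X=0.6, M=10.0, N=17.0)   # 60% of hashrate, timestamp limit 10, N=17
print(f)          # ~0.91 per block
print(f ** 15)    # ~0.25 after one hour of 4-minute blocks (assuming T = 240 s)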
===
I think the point of mining was to solve the double-spending problem without relying on a 3rd-party timestamp. Satoshi seems to say this explicitly in the whitepaper. It also finances the growth of the network in a way that supports transactions, but I do not understand why non-mining nodes seem to be necessary to keep miners in check and/or why mining often has the feel of a necessary evil, if the entire point of financing mining was to build a working network. With a valid clock on each peer, the double-spending problem seems solved without mining. It leaves the question of how to release the coins in a way that supports the network. But if the timestamp problem is solved by each peer using the stars as his clock, is there any need for a behemoth network using might-is-right to determine the time and thereby the coin emission rate? It might be that peers with valid clocks who only want a wallet and to conduct transactions could be all that is needed, reaching the ideal of not having any centralized miners or developers, with the network absolutely evenly distributed among everyone. There might be a way to distribute the blockchain so that they do not all need the entire chain. It would have a statistical chance of forking (fracturing with all forks being valid but increasingly incompatible) which could be increased by hacking, but that would only happen as the need for the network grew (via more marketplace transactions). So the fracturing might be beneficial by keeping the ideal of constant value. That is a requirement of all good currencies: a constant quantity makes the ideal asset, not the ideal currency. Constant quantity was always a disaster for all currencies that have ever been used because it's a bonanza for the 1% such as us, the early adopters seeking to profit without working for it, extracting wealth from late-adopters. In any event it would get rid of centralized developers and centralized mining. It might be as simple as PGP so that a requirement for a transaction to be valid is that the code never changes. Or maybe any code on any machine would be valid as long as other peers confirm your outputs are valid for your inputs as specified by a non-changing protocol.
===
by "fracturing" I introduced vagueness to mean "something that is probably not unlike forking". I am speaking of big picture ideas as I have no knowledge of BTC details. I took a strong renewed interest in difficulty algorithms after two cryptonote coins adopted my difficulty algorithm (block averaging instead of medians for 17 blocks with appropriate timestamp limits) to gain protection against attacks. Cryptonote insanely is (or was) using 300 blocks as the averaging window so sumokoin and karbowanek had to fork and start using mine. Zcash changed their digishield v3 as a result of my pestering but did not follow me exactly like these other coins. I posted too much and made a big mistake. I'm side-tracked: an unavoidable problem in the difficulty algorithm lead me back to the Satoshi white paper and the idea that scientific observation of stars could be the beginning of "real" cryptocurrencies as it was for physics. The stars would be the first valid, provable, non-3rd party oracle in cryptocoins.
====
With only +/-2 degree accuracy I figure 10 minute blocks are OK. 2 degrees is 4 minutes if you look at stars 90 degrees from the north star. So local peers have to agree on the time +/- 4 minutes with 1 minute to spare on each end. Russia also has a GPS system but I don't think the combination of the two solves anything.
===
You are saying I'm missing the "might is right" aspect. But the idea is that it replaces "might is right" with an objective verifiable truth that can be checked by any and all peers at all present and future times.
====
I think everyone could reject the transaction if it does not have the correct timestamp. He can lie about it, but it will be rejected. He can send the same coin twice in the same 8-minute window, but everyone is supposed to reject both sends. I previously mentioned maybe all the peers do not need a full chain, but that's probably a pretty wrong-headed idea.
=====
Having 1 miner timestamp a block is a lot more important than having the correct time. But if a correct time is agreed upon, then every peer everywhere receives and validates every transaction independently. Because of the inaccuracy of the timestamps, the timestamps are rounded to the nearest minute that has 0 as the right-hand digit, and you have +/- 2 minutes from the next "5" minute to send a transaction. But I must be missing something. It seems like using star gazing, GPS, or timestamp servers is not necessary: you would just need to make sure your peer's computing device has approximately the correct global time.
===
I gave a solution that doesn't even need an app that calibrates with the stars: if everyone manually makes sure their clock is +/- 2 minutes correct, and if transactions can propagate to everyone in 2 minutes, then let's say the blockchain is updated every minute that ends in "0". The blockchain would be updated by EVERYONE. There are no nodes or miners needed or wanted in this design, especially since we want it nuclear bomb proof, unlike the current bitcoin with concentrated miners and devs. Everyone would send out their transactions with their timestamp at minutes ending in "4", so with error, they may be sending them out right after "2" up until "6". If there is a 0 to 2 minute propagation delay, everyone's going to receive each other's transactions between "2" and "8" by their own clock (between 4 and 6 by "star time" or by whatever clock each peer has decided by himself to trust... it must not be coded into the client as a default unless it is watching the stars). On minute 8, every client closes his ears to every transaction. So nothing is happening on any client anywhere between 8 and 2 except verifying and adding transactions to the chain, which should work even if their clock is in error by +/- 2 minutes. Clients with a -2 minute error clock and those with a +2 minute error clock should see the exact same set of transactions, or someone is giving a mixed message to clients by accident or on purpose by going outside their own allowed window. By accident would mean some transactions were missed by some clients. On purpose would mean someone trying to spend on -2 minute clients the same coin he is also trying to spend on a +2 minute client. In both cases, it seems like clients could check each other and decide to throw both erring transactions out. So that's my proposal. If it's possible to implement, then as far as I know it's 1 of only 3 known ways. The first is a traditional database that has a single reference location for its core data so there are no "double conflicting updates" on the same record. In the case of more than 1 core location and backups, I believe they have advanced methods of checking for conflicts and then undoing "transactions" in order to correct the problem. The 2nd is Satoshi's method.
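A minimal sketch (assumptions mine: second-resolution clocks and the 10-minute cycle described above) of the window rule each client would apply with its own clock:

def minute_in_cycle(unix_seconds):
    return (unix_seconds // 60) % 10          # 0..9 within the 10-minute cycle

def should_broadcast(unix_seconds):
    # honest clients send their transactions near minute "4" of the cycle
    return minute_in_cycle(unix_seconds) == 4

def accepting_transactions(unix_seconds):
    # listen only between minute 2 and minute 8 by the client's own clock;
    # from 8 to 2 it only verifies and appends what it already received
    return 2 <= minute_in_cycle(unix_seconds) < 8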

Wednesday, June 28, 2017

Zawy v2 difficulty algorithm

#!usr/bin/pseudo_perl
#
# Zawy v1b and v2 difficulty algorithm
# Simple averaging window with option to use dynamic window size.
# Cite as "Zawy v1b N=8" if N=8 is chosen and "Zawy v2 N>=8" if dynamic option is chosen
# Credit karbowanec and sumokoin for using modifications of Zawy v1 after their hard forks 
# to protect against attacks that were the result of Cryptonote's default difficulty algorithm. 
# And for motivating me to do more work where Zcash left off.  
#
# Core code with fluff and dynamic option removed:  (TS=timestamps)
# TSL = (N>11) ? 10 : N-2; # stops miner w/ 50% forward stamps from lowering  D>20%.
#  current_TS=previous_TS + TSL*T if current_TS > previous_TS + TSL*T;
#  current_TS=previous_TS - (TSL-1)*T if current_TS < previous_TS - (TSL-1)*T;
#  next_D = sum(last N Ds) *T / [max(last N TSs) - min(last N TSs)] / (1+ln(2)/N)
# next_D = 2*previous_D  if next_D/previous_D > 2;
# next_D = 0.5*previous_D  if next_D/previous_D < 0.5;
#
# Changes:
# A preference for low N and letting difficulty change rapidly for hash-attack protection. 
# Included option for dynamic averaging window (probably inferior to simple low N).
# Includes timestamp limit for timestamp manipulation/error protection. 
# Added an adjustment factor to next_D that is important at low N: 1/(1+ln(2)/N). 
# This is due to the median of the exponential solvetime distribution being ln(2) of the mean.
# A rejection of medians which do not help and cause error via lack of resolution, 
#  including the "bad timestamp" excuse for using it. 
# Rejected dynamic modification to maxInc, maxDec, and TS limits based on recent history
# of D (as a way to let D drop after an attack). It either leaves a security hole or does 
# not have an effect.  Avg(solvetime) is still > TargetInterval if there is a hash attack but 
# I can't find a solution that does not have equally bad side effects.  
#
# Miners/pools should be asked to keep their timestamps accurate or it will help 
# attackers and block release will be slower.
# See verbose text at link below for explanations (if this is not verbose enough)
# https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a#commitcomment-22615466

# Begin setting constants for this coin

T = 240;   # Target interval
MinimumHashAttackDuration = 8; # Sets N. See text below.
timestamps_are_provably_correct = "no";  #  "no" if miners or pools are assigning timestamps
employ_variable_averaging_window = "yes"; # see paragraph below

# End setting constants for this coin

# Modifications to the logic below this point are not recommended.
# Trying to fix something usually breaks something else. 

# Set averaging window based on MinimumHashAttackDuration. 
# N=17 is working in several coins, but it still allows some large on-off mining to rapidly
# "steal" blocks at low difficulty for N/2, leaving constant miners with higher D for N, delaying
# subsequent blocks. N=12 is low but not unwisely low. May cause 3x delays post-attack versus 15x for N=17.
# N=8 might be best for small coins still having "hash attack" trouble at N=12. 
# N=8 has only 47% more >4*T solvetimes than N=30.  
# Even 4 can work, even with timestamp errors, if the rise & fall in D is 
# symmetrically limited to 2x & 0.5x per block. 
# There is a desire to have low N because for hash attacks with 
#  off-time >= N and on-time P <= N blocks, I have:
# blocks stolen at low D = P x [1 -(1-X)/2 - P/(2N) ]  where X = attacker's fraction of hash power.
# Notice that low N is the only way to reduce attack profit. Stating attack length as a fraction F of N:
# blocks stolen at low D = N*F x [1 -(1-X)/2 - F/2 ]

N1=int(MinimumHashAttackDuration);
if ( N1 < 6) { N1 = 6; } # due to the way I have TSL, there's more risk to <6. 

# Variable averaging window option:
# It will be smoother most of the time, but takes ~N/4 blocks longer to respond and recover from 
#  sudden hash changes. Choosing the smallest N instead of this option is probably best unless
# you have a strong desire for a smoother difficulty when HR is stable.  
# Precise technical description: 
# Trigger to a lower N if a likely change in HR has occurred.  Checks all possible windows 
# from N1 to 2*N1, linearly decreasing the likeliness from 95% at N1 to 68% at 2*N1 and 
# resets window to the lowest N that triggers. After keeping that N window for N blocks
# to let strange events flush out it raises N by 1 each block until another trigger occurs. 
# It can easily reach 4N as the window if HR is stable.
# This option seems to increase solvetimes by another (1+ln(2)/N) factor which is not
# in this pseudocode for simplicity.

Smax=2; Smin=1; # STDevs range away from mean that will cause a trigger to lower N.

# TS limit to protect against timestamp manipulation & errors. 
if (timestamps_are_provably_correct == "yes" ) { TSL= 10; }   # similar to bitcoin
# next line stops miner w/ 50% forward stamps from lowering  D>20% if N1 is low. 
# steady-state D from a miner with X fraction of the network hash (less than 50%) who always
# forward-timestamps to the max is: SS D = correct D x [1 -(1 - 1/(1+TSL/N1) ) * X]
# Miner with X>50% can drop D to zero w/ forward timestamps at the following rate:
# next D=previous D x  [1/(1+TSL/N1)]^(2X-1)

else { TSL = (N1>12) ? 10 : N1-1; }

# The following are fail-safes for low N when timestamps have errors.
# Bringing them closer to 1.0 as in other diff algos reduces hash attack protection. 
#  Not letting them be symmetrical may have problems. Example:
# Using maxDec < 1/maxInc allows +6xT timestamp manipulation to drop D faster than -5xT
# subsequent corrections from honest timestamp can bring it back up.
# Bringing them closer to 1 is similar to increasing N and narrowing the timestamp limits,
# but these values should be far enough away from 1 to let low N & TS limits do their job.

maxInc=  2; # max Diff increase per block 
maxDec= 1/maxInc;  # retains symmetry to prevent hacks and keep correct avg solvetime

# End setting of constants by the algorithm.

# definition: TS=timestamp

#  Begin actual algorithm

# Protect against TS errors by limiting how much current TS can differ from previous TS.
# This potentially slows how fast D can lower after a hash attack. The -1 is for complex reasons.

current_TS=previous_TS + TSL*T if current_TS > previous_TS + TSL*T;
current_TS=previous_TS - (TSL-1)*T if current_TS < previous_TS - (TSL-1)*T;

if (employ_variable_averaging_window == "yes") {
     for (I=N1 to N) { # check every window that is smaller than current window.
        # STDevs decreases linearly from Smax at I=N1 down to Smin at I=2*N1.
         STDevs = Smax-(Smax-Smin)/(2*N1 - N1)*(I-N1); 
         NE = (max(last I timestamps) - min(last I timestamps)) / T; # expected N for this time range
         if ( abs(I - NE) > STDevs*NE**0.5 ) { N=I;  wait=I; } # the core statistical trigger
     }
}
else { N=N1; } 

next_D = sum(last N Ds) *T / [max(last N TSs) - min(last N TSs)] / (1+ln(2)/N); 
# the above is the same as the following. Credit karbowanec coin for the sum & max-min simplification
#  next_D = avg(last N Ds) * T / avg(last N solvetimes) / (1+ln(2)/N)

next_D = 1.2*avg(last N Ds) if next_D<=0;  # do not let it go negative.

# Increase size of N averaging window by 1 per block if it has been >=N blocks
# since the last trigger to N. This flushes out statistical accidents and < N attacks. 
if (employ_variable_averaging_window == "yes") { 
   if (wait > 0)  { wait=wait-1; } # do not increase N yet
   else {  N=N+1;  }  # resume increasing N every block
}

# Do not let D rise and fall too much as a security precaution
next_D = maxInc*previous_D  if next_D/previous_D > maxInc;
next_D = maxDec*previous_D if next_D/previous_D < maxDec;
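Here is a small Python sketch (mine) of the statistical trigger at the heart of the variable-window option above: if the number of blocks in a candidate window differs from the number expected from the elapsed time by more than STDevs Poisson standard deviations, a hashrate change is likely and the window resets to that size.

def window_triggered(I, oldest_ts, newest_ts, T, STDevs):
    # I: number of blocks in the candidate window; timestamps in seconds
    NE = (newest_ts - oldest_ts) / T          # blocks expected in that time span
    return abs(I - NE) > STDevs * NE ** 0.5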

Argument that low N is best in difficulty algorithms and why a dynamic averaging window is not a benefit

I can't recommend a switch from v1 to v2 (static N to dynamic N). The smoothness gained by the higher N is not much: surprisingly, the std dev of solvetimes increases only 5% from N=30 to N=8. The std dev of D goes from 0.18xD to about 0.45xD for N=30 versus N=8. For N=8 this means 97.5% of D values are less than about 1 + 1.96x0.45 ≈ 2 times what they should be. Long story short (due to the median of exponential solvetimes being 0.693 of the average): going from N=30 to N=8 means only a 47% increase in >4xT solvetimes. The dynamic window does not capture this benefit: those >4xT solvetimes are exactly the statistically unlikely events that will trigger the dynamic window back to a lower N, canceling the primary benefit of it rising back up to large N. It looks a lot smoother and nicer most of the time when hash rate is constant, but the painful small-N events are not reduced.