Wednesday, June 28, 2017

Argument that low N is best in difficulty algorithms and why dynamic averaging window is not a benefit

I can't recommend a switch from v1 to v2 (static N to dynamic N). The smoothness gained by the higher N is not much: surprisingly, the std dev of solve times increases only 5% from N=30 to N=8. The std dev of D goes from about 0.18xD for N=30 to about 0.45xD for N=8. For N=8 this means 97.5% of D values are less than 1 + 1.96x0.45 ≈ 2 times what they should be. Long story short (due to the Poisson median being 0.693 of the average): going from N=30 to N=8 means only a 47% increase in > 4xT solvetimes. The dynamic window does not capture this benefit: those > 4xT solvetimes are exactly the statistically unlikely times that will trigger the dynamic window back down to a lower N, canceling the primary benefit of it rising back up to a large N. It looks a lot smoother and nicer _**most**_ of the time when hash rate is constant, but the painful small-N events are not reduced.
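For anyone who wants to reproduce the comparison, here is a minimal Monte Carlo sketch (my own illustration, not the original test setup). It assumes a constant hashrate, exponentially distributed solve times, and the simple averaging algorithm with the 1/(1+0.7/N) correction from the postulates below; the seeding and block count are arbitrary.

```python
import random, statistics

def simulate(N, T=240, blocks=200_000):
    D, st = [1.0] * N, [float(T)] * N          # seed with an ideal history
    slow = 0
    for _ in range(blocks):
        next_D = (sum(D[-N:]) / N) * T / (sum(st[-N:]) / N)
        next_D /= 1 + 0.7 / N                  # low-N correction (postulate 1 below)
        # constant hashrate: the mean solve time scales with the difficulty
        t = random.expovariate(1.0 / (T * next_D))
        if t > 4 * T:
            slow += 1
        D.append(next_D)
        st.append(t)
    return statistics.pstdev(D[N:]) / statistics.mean(D[N:]), slow / blocks

for N in (8, 30):
    rel_sd, frac_slow = simulate(N)
    print(f"N={N}: std dev of D ~ {rel_sd:.2f} x avg D; solvetimes > 4xT: {100*frac_slow:.2f}%")
```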

Tuesday, June 27, 2017

Cryptocurrency difficulty algorithm postulates


Here are my "difficulty algorithm postulates" I want people to consider before creating or changing a difficulty algorithm.
  1. For a given hashrate with gentle variation, the simple average below is the best algorithm: 
    • Next D = avg(past N Ds) x TargetInterval / avg(past N solve times)
    • For whatever reason, it needs an adjustment factor at low N to keep the solve time on track and make D more accurate: Next D x 1/(1+0.7/N) (see the code sketch after this list).
    • The N used for averaging past D must be set to the N used for past solve times.
    • Using the median is not nearly as good as using the average, and there is no benefit to using the median.
  2. A faster response to hashrate changes will come at a cost in solve time stability. This is not a bad thing. Use the lowest N you can tolerate to get the fastest response. Low N causes large non-attack solve time variation. Consider down to N=8 if hash-attacks are a problem.
  3. Limiting the rise and fall in the difficulty per block is similar to increasing N, but is much less accurate.
    • I place limits on the rise and fall, set equal to what I think is actually possible, only as a security measure. 
    • Placing limits on the rise and fall to block an event you do not want is denying the truth of the observation that you have asked the average to report.
  4. Enforcing asymmetric limits on difficulty and timestamp changes is risky.
    • There is a temptation to allow faster decreases than increases in the difficulty per block (relative to what results from the average above) in order to get back to normal after an attack. This may help keep the block emission rate on schedule and reduce normal miner losses. But it also enables attacks to resume more quickly, which might exactly negate the two benefits. Avoid this more seriously if the attacker is intelligent. If timestamps are assigned by miners, forward-stamping (combined with this asymmetry) will make D begin artificially lower in the next attack, amplifying the original problem instead of helping it. But if the allowed increase and decrease in D are symmetrical, then a subsequent accurate timestamp that negates the previous bad timestamp will be able to get D back to its proper value.
    • Conversely, there is a temptation to allow faster increases than decreases in difficulty per block in order to dissuade on-off hash attacks. This will directly slow the block emission rate. It potentially increases normal miner losses if it does not actually dissuade attacks. Avoid this more seriously if the attacker is dumb. It better enables a malicious attacker who is not interested in profit to drive D up, or to drive it up for the purpose of causing future oscillations if the diff algo is unwisely advanced and complex.
    • Limiting the amount the timestamp can be ahead of time more than it can be negative is like allowing D to increase faster than decrease, with the same type of side effects.
    • Limiting the amount the timestamp can be negative (more than it can be forward) is like allowing D to decrease faster than it can increase, with the same type of side effects.
    • "Symmetrical" in the above is not exactly symmetrical in linear terms because the median solve time for a Poisson process with mean TargetInterval (T) is ln(2) x T = 0.693 x T, but I have not addressed this.
    • Timestamp limits: I believe the timestamp should be limited to +6x and -5x (of T, relative to the previous timestamp) instead of my previous statements of +6x and -6x, because the "expected" timestamp is +1x, so +6x and -4x is what symmetry mathematically requires. But I want a -5x limit, in violation of perfect symmetry, out of more fear of greedy +6x forward stamps occurring than of -4x accidents or maliciousness. Reminder: two -5x solve times in a row could cause a negative difficulty if N=8, so there needs to be protection against a negative difficulty (see the sketch after this list).
  5. Despite 3 and 4, there may be a way to use them to enable D to return to normal more quickly after an attack. This is really needed because the avg solve time increases (delaying coin release) when there are a lot of large on-off instances: even with a low N, D gets stuck high after each attack ends.
  6. There is no way to stop >50% miners from using the timestamp to make difficulty = 0. This assumes there is not a trusted third party enforcing a clock (like ETH), which would be in violation of Szabo's and Satoshi's mandates. 
    • Might is right. 51% power is truth.
    • 51% (or the trusted 3rd party) controls the clock which means they control coin emission rate.
    • Bitcoin uses > 50% consensus to certify not only single-spend transactions but also the time.
    • Fear of a hard fork may be what prevents miners from doing this overtly.
  7. Difficulty algorithms should not have momentum. Predictive algorithms that look at the slope of recent changes in D to estimate a future D are vulnerable to large on-off miners (and possibly even an accidental and unconscious consortium of miners in search of personal profit) who can force the algorithm into oscillations, turning on when D is low and is starting to rise, and off before it reaches a peak. This is the Derivative part in PID controllers. PI controllers such as the average of the past are safer.
  8. Algorithms that try to protect against specific attack behavior are inherently vulnerable. 
    • It should be assumed that protection against specific attacks is automatically leaving an unexpected hole.
    • If an opponent can see the strategy you've employed that assumes something beyond your scientific observations, he can change his plan of attack but you can't change your defense.
    • For example, if you choose a fixed N based on how long you expect attacks to last, the attacker may make the attacks shorter but more frequent. 
  9. Miners acting in their own best interests weed out weak coins, are the mothers of invention, and/or encourage adoption of a single coin. Each of these might be "good" instead of an "attack".
    • Item 6 may mean all coins that are not the largest for a specific type of hardware are destined for a limitation on their size (if not outright failure) that is more brutal than Zipf's law. We currently see something like Zipf's law in cryptocurrency market caps, but if item 6 is correct, it might become 1/Rank^2 or worse in market caps instead of 1/Rank. This enforces Satoshi's original vision that the largest coin's "might is right" will make it less subject to attack than its clones. 
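A minimal code sketch of postulate 1's average with the low-N correction, plus the +6x / -5x timestamp clamp and the negative-difficulty protection from postulate 4 (my own illustration; the N=8 default and the T/100 floor are arbitrary choices, not part of the postulates):

```python
def next_difficulty(difficulties, timestamps, N=8, T=240):
    """difficulties: last N difficulties; timestamps: last N+1 block timestamps."""
    solvetimes = []
    for prev, cur in zip(timestamps[:-1], timestamps[1:]):
        st = cur - prev
        st = max(-5 * T, min(6 * T, st))    # the +6x / -5x timestamp limits of postulate 4
        solvetimes.append(st)
    avg_st = sum(solvetimes) / N
    avg_st = max(avg_st, T / 100.0)         # protection against a negative (or absurdly high) D
    avg_D = sum(difficulties) / N
    return avg_D * T / avg_st / (1 + 0.7 / N)   # postulate 1 with the low-N correction
```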

Sunday, June 25, 2017

Advantage of a simple difficulty algorithm with low N, but a tweak is needed

The following looks like a good zawy v1 to consider. If this is acceptable even when the hashrate is constant, then switching from a high N to a low N (zawy v2) is not needed.

- N=6
- The low N causes avg solvetime to be 15% longer (i.e. D is 15% too high), so increase the solvetime average by 15% with 1.15 x avg(6 solvetimes).  I believe this is caused by the combination of a low N and the median of a Poisson being 0.75 of the average.
- +/- 5x timestamp limit from previous timestamp.
- D per block change limited to 0.5x down and 2x up.
- The above two should remain symmetrical unless testing supports something different.  See last paragraph.
- There should not be any attempt to address bad timestamps other than the above TS and D limits. If a bad time comes in that lowers D to the 0.5 limit then the next correct timestamp will bring it back up.  If you're getting two bad timestamps in a row in the same direction very often, the problem is not fixable. Larger N might help.
- Always averages, no medians.

The timestamp manipulation will only help pools with > 50% hashrate. Other miners should keep their clock accurate to keep difficulty more stable when N is low. Otherwise, every miner suffers from an unstable D, and an unstable D causes blocks to be released more slowly: the avg of a D ranging from 0.5 to 2 is always more than the avg of a D that stays at 1. I have not addressed this averaging effect and it leaves open a possibility of improvement:
When there are frequent on-off high hash attacks, the above N=6 (and any other algo that provides good protection) will slow the avg solve time. I see a >30% slowing of block release with frequent on-off attacks. Slow increases and decreases are not a problem: the sudden on-off nature of "attacks" slows block release even as the attacker gets blocks quickly at low difficulty, so it's a double whammy to honest miners. This leaves open the ability to reduce the post-attack delays by accelerating the decline back to normal if the algorithm thinks an attack was present and is now ending. Off-hand, I would keep a rolling avg D for something like N=100, and if the avg of the past N=12 D were > 2 x avg(100 D), then
if Target/(1.15 x avg(6 ST)) < 0.95
change
next D = avg(6 D) x Target/(1.15 x avg(6 ST))
to
next D = avg(6 D) x Target/(1.15 x avg(6 ST)) x 3/4
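As a rough illustration only, the tweak could look something like this in code (the history lists and function name are assumptions; it presumes at least 100 past difficulties are available):

```python
def zawy_v1_with_post_attack_tweak(D_hist, ST_hist, T=240):
    avg6_D, avg6_ST = sum(D_hist[-6:]) / 6, sum(ST_hist[-6:]) / 6
    next_D = avg6_D * T / (1.15 * avg6_ST)
    attack_suspected = sum(D_hist[-12:]) / 12 > 2 * (sum(D_hist[-100:]) / 100)
    falling = T / (1.15 * avg6_ST) < 0.95          # D would drop by more than 5%
    if attack_suspected and falling:
        next_D *= 0.75                             # accelerate the decline back to normal
    return next_D
```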

The above is kind of a weak fix. I am trying to employ a different method, such as more rapidly being able to detect a post-attack event in a continuous fashion; discrete if-then statements expose the weakest point to an attacker. But the idea is to **show higher skepticism toward allowing D to increase as D gets higher and higher above a longer-term average, but less skepticism towards coming back down.** I think this could be an improvement that is more important than the dynamic avg window.

In the crypto-anarchist worldview, miners taking advantage of the lack of a trusted timestamp, or of a difficulty algorithm that does not adjust fast enough, are really just capitalists investing in ventures that persuade others to employ fixes.

Saturday, June 24, 2017

Equations for cryptocoin hash and timestamp attacks

I remain confused that alt coins are not having a huge problem with >50% miners constantly forward-stamping the timestamps, causing difficulty to drop to zero. I have the following equations to describe the problems:
Let X = % hashrate a big miner has.
The unavoidable, unfixable problem I see is that the difficulty will drop each block for any X > 50%:
next D = previous avg D x [1/(1+M/N)]^(2X-1)
M=the "M x TargetInterval" limit on the allowable timestamp, after the previous timestamp.
N=the averaging or median window.
For X=50% note that next D=previous D. See next equation for X<=50%.
For any X>50%, the difficulty will slowly decrease. Small M and large N help, but are not a fix. For example, with M=3 and N=100 (the most protective numbers I can imagine) and X=66%, the difficulty will drop 1% per block for as long as the miner is present with 66%. In 450 blocks the new D = 1/100th of the old D, instead of the ~3x higher D that the added hashrate should have produced.
Median makes it more complicated, but I think it is almost exactly the same, at best (if the correct median is used which is 0.75 of the average).
For X<=50%, the difficulty does not drop forever, but has a steady state value that is tolerable:
steady state D = correct D x [1 -(1 - 1/(1+M/N) ) * X]
for example, M=6x, N=30, and X=50% gives 92% of the correct D.
another example: M=6x, N=15, and X=33% of the hashrate gives 91%.
I needed this because I want to go to N=8 in the dynamic averaging window:
M=8x and N=8 with X=50% gives 75%, which is tolerable. This is the basis for the TSL setting in the Zawy v2 algorithm.
For high on-off hash attacks where the off time >= N and P=on time <=N, I have
blocks stolen at low D = P x [1 -(1-X)/2 - P/2N ]
There is no solution except to make N small. The std dev of solvetimes starts getting noticeable as a drawback if N goes below 12.
This is why I'm working on a variable averaging window difficulty algorithm. N must be as small as possible when there is an attack, but larger at other times because small N causes a lot of accidental variation.
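The three formulas transcribed into code with the example numbers above (my transcription; X is the attacker's fraction of the hashrate, M the timestamp limit in multiples of T, N the averaging window, P the on-time in blocks):

```python
def per_block_decay(M, N, X):
    # next D = previous avg D x [1/(1+M/N)]^(2X-1), for X > 50%
    return (1 / (1 + M / N)) ** (2 * X - 1)

def steady_state_fraction(M, N, X):
    # steady state D = correct D x [1 - (1 - 1/(1+M/N)) x X], for X <= 50%
    return 1 - (1 - 1 / (1 + M / N)) * X

def blocks_stolen_at_low_D(P, N, X):
    # on-off attack with off time >= N and on time P <= N
    return P * (1 - (1 - X) / 2 - P / (2 * N))

print(per_block_decay(3, 100, 0.66))        # ~0.99: roughly a 1% drop per block
print(steady_state_fraction(6, 30, 0.50))   # ~0.92
print(steady_state_fraction(8, 8, 0.50))    # 0.75
```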

Friday, June 23, 2017

Protecting alt cryptocoins from bad timestamps and high hash rates

This is a possible solution to protect small coins against 51% miners who forward-stamp the time.

It appears any miner with 51% can cause difficulty to lower forever.  Median and average do not affect this.  1/2 the time he will acquire the median, so half the time he will lower it to 1/6 D, averaging 1/3 D every 2N.   So D=0 in 1 day.  If you cut the high 6 out of 42, as well as the low 6, the result is the same.  Cutting only the high 6 will cause difficulty to rise forever if miners are honest.  Again, 51%  = truth.   51% = owner of clock.  There is no solution except for the coin to self-destruct until miners are honest.

If I am correct, big miners must already know this, so they must be doing it only some of the time because they do not want it to cause too much trouble and then cause a hard fork.

Timestamp fix idea:  penalize all miners if some miners have a bad timestamp.

This will protect against a 20x attacker who is trying to control the clock. These numbers are approximate; it depends on how fast the rest of the difficulty algorithm brings difficulty back down. If a negative block timestamp appears, raise difficulty 10x and lower it 11% per block for 20 blocks, then resume regular difficulty. If more than ~5% of the network has a bad timestamp (either too far backward, or too far forward in a way that causes others to appear backward), difficulty would rise indefinitely. Those not aware will stop mining. Those aware of this will remain until the miners with bad timestamps leave. It forces honesty and more stable difficulty. All miners are penalized if some miners keep a bad timestamp. 1% bad timestamps will cause a 22% increase every 100 blocks.
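A minimal stateful sketch of this penalty idea (names and structure are mine; the 10x jump, 11% per-block decay, and 20-block duration are the approximate numbers above):

```python
class TimestampPenalty:
    def __init__(self):
        self.multiplier = 1.0
        self.blocks_left = 0

    def update(self, solvetime):
        """Return a factor to multiply the normal next difficulty by."""
        if solvetime < 0:                # a negative block timestamp appeared
            self.multiplier = 10.0       # raise difficulty 10x
            self.blocks_left = 20        # then lower it 11% per block for 20 blocks
        elif self.blocks_left > 0:
            self.multiplier *= 0.89
            self.blocks_left -= 1
        else:
            self.multiplier = 1.0        # resume regular difficulty
        return self.multiplier
```

Note that 10 x 0.89^20 ≈ 0.97, so after the 20 blocks the multiplier is roughly back to 1 before it is reset.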

Past coin holders are protected against coin dilution.

If miners have an error of either exactly +20 or -20 seconds, then 25% of the time a forward stamp will be followed by a backward stamp. 1.24% of solves are within this 40 second window when T=240. 0.25 of 0.0125 = 0.3% of the time this will occur by accident. So everyone having a ~20 second error is about 1/3 as bad as 1% of miners having a 240 second error. These numbers are approximate since the statistics are complicated, but it shows the things that have to be considered.

A problem with the above is that a malicious actor with only 1% hashrate can make your difficulty jump 10x once every 100 blocks.

Tuesday, June 13, 2017

Difficulty algorithm and time warp attacks

I put a lot of work into trying to figure out the best difficulty algorithm for Zcash, and two other coins have run into my comments. This is a summary of what I found out. It was a comment on github. Frankly, I can't believe the amount of work people have spent on designing difficulty algorithms when they should have just been using a simple average. I'm glad to see coins are starting to use the simple average, and I would like to think it is due to my work.


After much consideration, I strongly recommend a simple average of the most recent solve times. By increasing N, you get stability at the expense of responsiveness. Digishield was using N= 17 or N=18 in a more complicated way. Last I checked, this is what Zcash was doing (N=17) but they might be doing something different. People still refer to them as using a "modified digishield v3", but it's just an average that I believe came as a result of my pestering them about it.
Next difficulty = (avg last N difficulties) x (target solve time) / (avg of past N solve times)
As a result of improper times being reported by miners, the "average" solve time is not used, but the median is. If you have accurate solve times, use the average.
I could not confirm the Zcash code was getting a good median (I could not trace the variables back to their origins), but it seemed to work.
The lack of a known time allows time-warp attacks, especially when mining is concentrated. It was observed on the Zcash testnet. To help minimize this, you limit how far ahead of or behind the previous time the miner can report their time. I believe Zcash made the limit the same as bitcoin's, 3600 seconds, but the limit should have been 1/4 of bitcoin's since their blocks come 4x faster. 4x more blocks in that time limit means more opportunity for a time warp attack. The attack seems to apply only when mining is concentrated, having at least maybe 20% of network power. Also, it is used if electricity or computer time is the miner's main expense. They crunch a lot to get maybe up to 1/2 the blocks in 1/2 of N, so that the calculated median is artificially old because they are reporting old times to trick the algorithm into lowering the difficulty. They continue for another 1/2 of N, which is when the algo finally figures it out, thanks to the limit on how far back they can set times. Then they stop to let others suffer a high difficulty for 1 N, then they do it again. I think that's a good summary.
This is not to say a smaller N prevents or reduces a time warp attack. The median and the limit on the reported time help reduce the possibility of it occurring. I'm describing the attack in order to check for it by watching the times being reported and seeing if it makes the difficulty oscillate. You look for a series of old times being reported on the scale of 1/4 to 1 N, then it stopping for a while, to be repeated later.
The problem with many difficulty algorithms is that they try to "think too much" instead of just looking at recent data, and going from there. If you try to look at a recent increase in hash rate and let the difficulty "jump ahead" of the actual recent average in a predictive manner, then it invites oscillations, both natural and intentional. A similar thing occurs if you try to limit the increase or decrease in the difficulty, like Digishield did, especially if the limits are not the same (symmetrical). Just looking at the avg of the most recent past is the most scientific method.
You may be fully aware of all the above, but I wanted to distill what I had found out in the past.

=======
The above does not explain this exactly. This post to bitcointalk is a little more clear on time warp attacks.

Zcash uses N=17 in that equation. N is not too critical. A larger N is less responsive to network changes but more stable.  

I think Zcash retained some semblance of Digishield v3 by retaining a limit on the increase and/or decrease. This caused oscillation problems in Digishield v1, v2, and the Zcash testnet. I was arguing for them to remove those limits completely. I think they made them high enough that they are mostly irrelevant. But it's better and more scientific to just stick to the observed average. If a big pool (10x hash rate increase) jumps on for an hour and then off, the penalty of everyone having to wait 10x longer for the next 8 blocks is the best option. If a limit is placed on the difficulty rise, they get blocks cheaper for longer and too many coins will be released per day (assuming you do not make other miners pay the price for the ill-gotten gains by making the fall equally slow). The core problem is the lack of a valid timestamp. Change the word "median" in the equation to "avg" if you're not like bitcoin and Zcash and are somehow getting a known timestamp.  

If you're using 150 seconds between blocks, the maximum-allowed miner-reported time between blocks should be 1/4 of bitcoin's to minimize time-warp attacks. I think Zcash did not change the max time from bitcoin (3600 seconds?) even though their blocks come 4x faster. Time warps seem to be a danger only when there's a potential big miner (>15% network hash rate). A big miner would want to cycle through small coins so that he has no down time while getting coins at an artificially low difficulty. It's made a little easier if they're all using N=17. Otherwise it seems only beneficial if his primary expense is computing time or electricity. 

Just sticking with Zcash is probably fine, but I hate seeing people refer to Digishield because it implies limits on the rise and fall when it's the simple "scientific" average that works. Limits on the rise and fall are a political and ideological influence, wanting to prevent users from suffering long transaction wait times. But it's a violation of the science that has a cost elsewhere (either the coin is released too fast, or small miners suffer). To be more "scientific" N=30 would be good. It would be more stable but less responsive. That's better than keeping a limit on the rise but has a similar effect.

======
To clarify and correct another post to github


I think N from 14 to 50 might be OK. I would select N=30 to make difficulty more predictable for everyone, and there is a statistical standard in letting 30 be a cut-off; a lot of research is immediately rejected if N < 30. Large N also makes it more obvious a big miner is present. It does not theoretically affect big-miner "cheating" because if N is small, they just come back more often. They profit during N/2 then have to go away for N/2 to wait for the median to go back to normal. Since it's the avg of N difficulties, it is better for them to wait N. This is without a time-warp attack. It's just what they can do if they have a large percentage of the hashrate. There is no fix. It's only profitable if they can easily switch what coin they are mining, or if computer time or electricity is their largest expense and it's OK to let their equipment go idle (unlikely).
I think 6x the target solve time, to copy bitcoin, should be the maximum difference in the timestamps: 6x4x60 = 1440 seconds. I think it is risky to adjust the median by combining it with the average, if I understand your comments.
I've said the N value is not critical, but it may interact with the 6x. I can't figure out if N=17 or N=30 is better if the limit is 6x the 4 minutes. So I'm not sure N=30 is better.
A timewarp attack can be seen if timestamps are set as far ahead in the future as possible to cause the difficulty to drop; then they'll mine full power and set timestamps as low as possible. The difficulty will stay low until the median time-to-solve catches up to the forwarded time. This is why 6x loses half as many blocks to the attacker as 12x does. You can see the actual hashrate of the attacker by looking at how fast block solves are coming into your personal node. If he has 2x the network hashrate, you'll see solves coming in about 2x faster than your 4 minutes.
I think other miners are paying the price when someone has a lot of hash power and is cycling through several alt coins whenever he sees a low difficulty. So the other miners have to suffer a higher difficulty when he leaves, and do not benefit when the difficulty is low. But it's hard to call it an attack. The big miner profits from the difficulty accidentally being low, and his actions cause the difficulty to rise and fall more, adding excess variability to solve times. If the big miner stayed around all the time, the other miners would get the same amount. So the other miners are paying the price in the sense that if they have 1% of average network hashrate, they get something like only 0.66% of the blocks if the big miner mines N/2 and then waits N to return. So he needs to find 3 small alt coins with N=17. Again, this is not related to a time warp attack.
I may have made an error in my previous post: the timewarp attack may only cause more coins to be issued than the protocol wants (not really harming other miners directly). It's an attack on the value of everyone's coins, not on other miners, unless they are combining it with cycling through other coins. It's an "attack" because it is lying about the time. It's an unsolvable deficiency in the protocol in not being able to know the real time itself. It's unsolvable without relying on a trusted third party, like a group of peers it trusts (a trusted consensus), which I believe is what ETH does to get an accurate timestamp.
The following is an actual timewarp attack on the Zcash testnet. The first chart is the rate at which blocks are being issued. The target is 24 per hour. You can see too many blocks were issued during the attack. The second chart shows a positive spike when the timestamp was set >2000 seconds into the future from the previous block. It shows a negative spike when blocks had timestamps less than 10 seconds apart. The darkness of the downward spikes shows they got a lot of blocks quickly. They stopped when the difficulty returned to normal.

================

Your equation:
Next = (avg past 17 Diff) x 240 / (A + 1/4x(A - M))
is the same as
Next = (avg past 17 Diff) x 240 / (1.25xA - 0.25xM)

Avg of past 17 or 16 Diff does not make much difference but if you're dividing by M and A of 16 then I would definitely also use 16 for Diff.
I don't like it because of the heavy reliance on the Avg for the reasons I gave in a previous post that describes the effect of using A instead of M. I like my previous equation better (at the bottom of this post I'll explain a problem with this and give another idea):

Next Diff = (avg last N Diff) x T / (Median of past N solve times)
with limits:
Max timestamp = Median + (N/2 + 0.5 + 6) x T
and
Min timestamp = Median + (N/2 + 0.5 - 6) x T
The 0.5 is because the Median is used. It's negative if N is odd.
T= target time = 240 in this case.
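A sketch of the median form (only the difficulty formula; the timestamp limits above would be applied to the raw timestamps before the solve times are computed, and statistics.median merely stands in for however the coin computes its median):

```python
import statistics

def next_diff_median(diffs, solvetimes, N=17, T=240):
    med = statistics.median(solvetimes[-N:])
    return (sum(diffs[-N:]) / N) * T / max(med, 1)   # guard against a zero or negative median
```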
From deep thought over many days I came to the conclusion that the best equation when there is no timewarp attack is
**Next Diff = Avg N Diff x Target / (Avg N SolveTimes)**
Timewarp (timestamp error) necessitates the median and limits. The median is ugly, but it is as close to the avg as you can get, so I strongly favor the median. I saw how "insane" digishield v1 and v2 were in using rise and fall limits on Diff, so I'm trusting myself more than (1.25A - 0.25M). My equation with the 3/4 is wrong, but I was thinking the code was using (0.75xM + 0.25x(most recent)), which seems OK, but I still like the bold equation best.
The problem with my equation above is that it ignores a sudden increase in hashrate for 8x240 seconds and then suddenly responds. So a smart big miner will want to come in for 8 blocks, then leave for 8 or 16 blocks to go to another alt, then come back. I don't know if they actually do that. Maybe N=12 is good, but it will have a lot of accidental variation. Here's an alternative in order to be more responsive and cut time warp profit in half:
Next Diff = (avg of last N Diff) x T / (1/2 M + 1/2 A)
where M and A are median and average of last N block solve times.
Bitcoin takes forever in responding to hashrate changes, so median having an 8 block delay is a lot better than bitcoin even if it's not responding fast.

So my desired T/A and your T / (1.25A-0.25M) are basically the same thing but my timewarp argument is against these. Str4d on Zcash told me they used T/M instead of T/A because of timewarp and I had to think about why and came up with the explanation above. Since I can't read their code, maybe they actually use the same as you because they have that 1/4, so maybe they really do use something close to T/A.
I think p and q = 0.5 is fine. Your current algo might be fine. But be sure to use 6xT as a limit, and allow the negatives.
An idea I have that also slows responsiveness but dulls the effect of bad timestamps is to use the same idea that "the avg of truth and lies is better than risking two lies in a row" when measuring block solve time. Instead of block solve time = (current timestamp) - (previous timestamp) an idea I am just "throwing out there" is:
block solve time = (current timestamp) - [ (Avg of past N/2 timestamps) + (N/4) x T ]
So this skeptically dulls any extreme high or low solve times. The 6x limit would be enforced on the output of this equation, not the input.
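As a sketch of that idea (my reading of it; it assumes past_timestamps holds at least the last N/2 block timestamps, newest last, and uses a symmetric ±6xT clamp):

```python
def dampened_solvetime(current_timestamp, past_timestamps, N=16, T=240):
    half = N // 2
    expected_prev = sum(past_timestamps[-half:]) / half + (N / 4) * T
    st = current_timestamp - expected_prev
    return max(-6 * T, min(6 * T, st))   # the 6x limit enforced on the output, not the input
```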

======
What I've described may not be a timewarp attack. Here is a post that says it requires ignoring the interval between two blocks like bitcoin does which means it depends on not using a rolling average. In their example, they reset difficulty every 3 blocks. We are talking about a rolling average, so it's really different.
My description of the timewarp might be all wrong. Let's say an honest timestamp on block 1 is 0. The attacker takes ~240 but assigns 1440 for block 2. Then a miner with an accurate clock takes ~240 and assigns ~480 for block 3 because it is two blocks after block 1. But the block solve times you have are 1440 and (480 - 1440) = -960. So the average of the two is 240. So maybe the M should not be used. I would just use A.
My "avg of a larger number of truth and lies is better than risking two lies in a row" is referring to a method of how to enforce the 1440 limit, or how to assign the block solve time. But because of how it works out nicely above, maybe that should not be tried either.
The problem I see with median is that it does nothing for N/2 then suddenly reacts. The output I'm seeing from it is not really good. The 1.1 factor has to be 1.5 if it's 100% median. As far as I know, I was completely wrong in sending you guys down the median path. p=1.25 and q=-0.75 has to use 0.9 correction.
time to 1 occurrence = -240*ln(rand()) is the random time to a single occurrence resulting from a Poisson process with a mean of 240. It is based on a standard uniform random variable, so RAND() is what should be used. Derivation is below.
To include the 1440 limit I had to create another column F with =if(A1>1440, 1440, A1). Then let the median and average columns point to column F instead of A, but keep my measurements of the output and charting on column A. For example, to know if the solve time is averaging 240, do an average on all of column A.
I'm still working on trying to get the oscillations out of these averaging methods. I'm hoping the reason they seem to oscillate with 2N intervals is because the D x T / A method assumes a linearity and symmetry by using the A and that this is a false assumption in Poisson distributions.
Derivation:
I was lucky to find -ln(rand())*240 on the internet and I did not see a derivation, so I'll give one here.
The Poisson distribution is based strictly on the same basics as Gaussian and binomial. Gaussian is a continuous version of binomial. Binomial is a coin toss. Poisson is Gaussian in a sort of backwards way and summed up. That is why a simple formula like this can apply. The Poisson is not esoteric, but a direct "consequence" of this simple formula,
time to 1 occurrence = -T*LN(RAND())
which is apparently all you need to know to derive all Poisson results. But I want to do it backwards from wikipedia.
From Wikipedia's first Poisson equation, the probability of k occurrences when the mean number of occurrences in the interval is L:
P = e^(-L) x L^k / k!
L = mean occurrences in the interval
k = observed occurrences in the interval
L can be a decimal but k must be an integer.
P is between 0 and 1.
This equation is the "probability mass function".
We want the random time t to the first occurrence when the mean time per occurrence is T = 240. Over a time t the mean number of occurrences is L = t/T, so the probability of seeing no occurrence (k=0) by time t is
P(0 by t) = e^(-t/T)
The probability that the first occurrence has happened by time t is therefore
CDF(t) = 1 - e^(-t/T)
A CDF evaluated at a random sample is itself uniformly distributed between 0 and 1, so set it equal to a uniform random number and solve for t:
1 - RAND() = e^(-t/T)
t = -T x ln(1 - RAND())
and since 1 - RAND() is just as uniform as RAND(), this is
t = -T x ln(RAND())
which is the equation I sought.
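A quick numerical check of the formula (standard-library Python; 1 - RAND() is used only to avoid log(0) and has the same distribution as RAND()):

```python
import math, random

T, n = 240.0, 200_000
samples = [-T * math.log(1.0 - random.random()) for _ in range(n)]
print(sum(samples) / n)              # should be close to T = 240

# count occurrences per window of length T; the mean count should be close to 1
counts = [0] * int(sum(samples) // T)
t = 0.0
for s in samples:
    t += s
    w = int(t // T)
    if w < len(counts):
        counts[w] += 1
print(sum(counts) / len(counts))     # should be close to 1
```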
================
The best I can find after trying many things is this:
next D = Avg of D x (T/A)^2
The cycles seem less and it responds to attacks a lot faster. It varies more but it does not go as low. It is kind of a hornet that I would think about twice before attacking. By not following other diff algos, attackers would not know as well what to do with it. It causes the difficulty to jump about 2x higher on occasion than without the ^2.
For a constant hash rate, the frequency of seeing solve times of 240/2 is 6x that of seeing solve times of 240x2. The averages reduce this effect, but I believe this could be what causes the oscillations. I can't find a solution. I've done curve fitting to adjust each solve time before applying the average and tried adjusting the average, but I can't do better than the simple average. I've tried about 10 other things, using adjustable constants in many different ways.
N needs to be greater than the length (in blocks) of a flash attack. They get N/2 blocks for free.
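For completeness, a sketch of the (T/A)^2 variant with the same conventions as the earlier sketches:

```python
def next_d_squared(diffs, solvetimes, N=17, T=240):
    A = sum(solvetimes[-N:]) / N
    return (sum(diffs[-N:]) / N) * (T / A) ** 2
```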

Bitcoin timestamps:
Bitcoin wiki says:
A timestamp is accepted as valid if it is greater than the median timestamp of previous 11 blocks, and less than the network-adjusted time + 2 hours. "Network-adjusted time" is the median of the timestamps returned by all nodes connected to you. As a result, block timestamps are not exactly accurate, and they do not even need to be in order. Block times are accurate only to within an hour or two.
So they have something like a 6x to 12x timestamp limit. I believe the method I've described is better (and is probably more strict, so that if the same rule applies to cryptonote using BTC code, then my limits will override BTC's). The reason I think it is better is that a miner manipulating the timestamp may also have better control of the nodes connected to it, and therefore of the timestamp variation they will allow. So an honest miner with different node connections would not be able to reverse the bad timestamp as in my scheme.
Someone knowledgeable said "if you have an accurate timestamp, you do not need mining", and people like Szabo have referred to using block height as their clock. I think if every peer has an accurate clock, then updating a distributed database without update conflicts ("double payment") is easy. For example, you could update the blockchain every minute, and anyone wanting to make an update would have the first 15 seconds to submit changes; if you tried a "double payment" on the same coin in that 15 seconds then both payments would be rejected. The only reason GPS and time servers are not used is because people didn't want to rely on them. But nodes could use star trackers to occasionally validate their GPS signal, revert to the star trackers to correct their CPU time if GPS time failed, and use a 10 minute block time if phone cameras and accelerometers are limited to 2.5% accuracy. Tracking the stars for economic reasons goes back as far as knowing when to plant.


Tuesday, June 6, 2017

Bitcoin: relation between coins, contracts, and law

Nick Szabo lists stages of contracts:
The contractual phases of search, negotiation, commitment, performance, and adjudication constitute the realm of smart contracts
A coin can fall under this. He talks about the contract being an algorithm that is executed, but I'm going to discuss it as if it is a static object. A program is an object, but I want to look at it as simply defining ownership of  "things", where "things" is very general, and how a coin and law relate to them.

A coin is a contract between all users, telling them the coin gives an amount of power to acquire assets, including people's time. All assets that have an owner also have a contract somewhere in the system that defines the owner. So contract = ownership of an asset(s), including people's time. So the marketplace buys and sells contracts with other contracts, such as with a coin. In this generalization, "possession" defines ownership in accordance with a legal contract between society and its individuals.

Individuals do not have to sell their assets (i.e. contracts declaring their ownership) for the coin (barring legal verdicts against them), so the coin may not have the same power to acquire a specific "instance" of a contract as it does a similar "instance" that is in the marketplace. The power to refuse selling the "ownership contract of an asset" at any price is the flip side (the yin of the yang) of being able to buy it in the marketplace at a competitively-low price.

The coin contract allows a measurement of the relative value (in the marketplace) of other contracts.

If it is an intelligent coin that takes measurements of the system and reacts to the measurements, it can help the system of contracts (over which it has control) to become more powerful. The primary if not only goal it should have is to maintain constant value in time and space so that the value and terms of other contracts have a persistent and reliable measure of value. This means that if the system of contracts over which it has control expands, it should expand to keep constant value.

The coin quantity should intelligently expand to promote good contracts and to cull bad contracts. Good and bad should be defined and measured by the coin's semi-rigid protocol, whose modification is not subject to unintelligent politics or mere possession of the coin. A good coin is one that never changes its value. That's its promise to the other contracts. Good contracts are those that, as a group, expand in their ability to acquire energy to move matter to enable more contracts to be created.

A coin might be a measure of a fixed amount of Gibbs free energy (G = H - TS, the available work energy) per mass, and each other contract represents ownership of an amount of Gibbs free energy. So the strongest society might be the one that has the most Gibbs free energy under its control. However, the value of a contract in the marketplace depends on speculations about how much Gibbs free energy it will be able to acquire in the future, including its ability to "steal" from other contracts by negative-sum activity. A different group of contracts should enforce law that restricts marketplace activity to combat this. Law can change contracts and who owns them (including the coin), and it can do this even without the coin. The law can find the coin helpful but does not depend on it. The coin benefits from the law, but can't help write it. It might be best if the coin protocol can't be changed by law or the marketplace.

An intelligent coin promotes intelligent contracts and intelligent contracts promote an intelligent coin. As the power of contracts expand, the coin must replicate in order to keep constant value.   The mutually-beneficial intelligence between contracts and the coin is what is "good" about both.  The coin's protocol is the source of its intelligence. It takes macroeconomic measures and responds to them.  Law is the source of systemic intelligence in the marketplace. Competition with the help of the coin and under the constraint or help of the law is what gives individual contracts their increasing power and efficiency in acquiring ownership of more and more Gibbs free energy.

So there's the coin, contracts (non-coin asset ownership), and law (marketplace and contract restrictions).  All are contracts (agreements aka requirements) between individuals and the system.

Instead of constant Gibbs free energy per mass (coin) and marketplace-estimated Gibbs free energy (contracts), the quantities being measured might be specific entropy (coin) and entropy.

Expanding coin quantity (by changing production parameters) when success is observed is relatively easy compared to intelligently changing the protocol itself (such as changing the feedback formula itself beyond simple parameters).

Most efficient speed of a car going uphill

When ignoring rolling resistance, acceleration, and optimum engine RPM, the most efficient uphill speed is the downhill coasting terminal velocity.

Derivation: Assume there is no rolling resistance so that the force against the car from friction is all air resistance.

F1 = cv^2

where c is the drag coefficient for given car and air conditions. The energy lost to air friction is:

E1 = dF1 = dcv^2 where d is the distance travelled.

The net work energy in reaching the top of the hill is:

E2 = mgh where h=d sin(angle)

The energy the engine must provide is:

E1 + E2 = dcv^2 + mgd sin(A).

Let d = t/v, where t is the time spent travelling d:

I want to find the v that minimizes E1+E2, so take the derivative with respect to v and set it to zero:

0 = tc - mgt/v^2 sin (A)

v = SQRT(mg sin(A) / c)

I can measure c by letting the car roll downhill. Let the constant V be the terminal speed at which the car rolls downhill, which is when the air resistance equals the downhill force from gravity.

F1 = F2 = cV^2 = mg sin(A)

Solving for c and plugging it into the equation for v gives

v = V