Wednesday, July 19, 2017

A P2P cryptocurrency to replace FB, Amazon, Fiat, and Bitcoin.

Posted to HUSH slack. A prelude to this

Here's an idea for a cryptocoin to build upon the timestamp idea I posted a few days ago (again, that does not necessarily use the stars).

People get more coin by having more "friends" (actually, people you know to be distinct individuals). It might be a slightly exponential function to discourage multiple identities. Your individual coin value is worth more to your "local" friends than to "distant" friends. The distance is shorter if you have a larger number of parallel connections through unique routes. A coin between A and D when they are connected through friends like A->B->C->D and A->E->F->D is worth more than if the E in the 2nd route is B or C. But if E is not there (A->F->D) then the distance is shorter. More coin is generated as the network grows. Each transaction is recorded, stored, timestamped, and signed by you and your friends and maybe your friends' friends. Maybe they are the only ones who can see it unencrypted or your get the choice of a privacy level. Higher privacy requirement means people who do not actually know you will trust your coin less. Maybe password recovery and "2-factor" security can be implemented by closest friends. Each transaction has description of item bought/sold so that the network can be searched for product. There is also a review and rating field for both buyer and seller. For every positive review, you must have 1 negative review: you can't give everyone 5 stars like on ebay and high ranking reviewers on Amazon (positive reviewers get better ranking based on people liking them more than it being an honest review). This is a P2P trust system, but there must be a way to do it so that it is not easy tricked, which is the usual complaint and there is a privacy issue. But look at the benefits. Truly P2P. Since it does not use a single blockchain it is infinitely faster and infinitely more secure than the bitcoin blockchain. I know nothing about programming a blockchain, let alone understand it if I created a clone. But I could program this. And if I can program it, then it is secure and definitive enough to be hard-coded by someone more clever and need changing only fast as the underlying crypto standards (about once per 2 decades?)

Obviously the intent is to replace fiat, Amazon, and eBay, but it should also replace FB. A transaction could be a payment you make to friends if you want them to look at a photo. The photo would be part of the transaction data. Since only you and your friends store the data, there are no transaction fees other than the cost of your computing devices. Your friends have to like it in order for you to get your money back. LOL, right? But it's definitely needed. We need to step back and be able to generalize the concepts of reviews, likes, votes, and products into the concept of a coin. You have a limited amount dictated by the size of the network. The network of friends decides how much you get. They decide if you should get more or less relative power than other friends.

It would not require trust in the way you're thinking. Your reputation via the history of transactions would enable people to trust you. It's like a brand name, another reason for having only 1 identity. Encouraging 1 identity is key to preventing people from creating false identities with a bot in order to get more coin. The trick and the difficulty is preventing false identities from scamming the community.

Everyone should have a motivation to link only to real, known friends. That's the trick and the difficulty. I'm using "friend" very loosely. It just needs to be a known person. For example, you and I could link to David Mercer and Zooko, but we can't vouch for each other. That's because David and Zooko have built up more real social credibility through many years and good work. They have sacrificed some privacy in order to get it. Satoshi could gain enormous real credibility through various provable verifications without even giving up privacy, so it's not a given that privacy must be sacrificed. If possible, the system should not give an advantage to people just because they are taking a risk with their personal safety.

The system should enable individuals to be safer, stronger, etc., while at the same time advancing those who advance the system. So those who help others the most are helped by others the most: "virtuous feedback". This is evolution, except it should not be forgotten that "help others the most" means "help 2 others who have 4 times the wealth to pay you instead of 4 others with nominal wealth". So it's not necessarily charitably socialistic like people often want for potentially very good reasons, but potentially brutally capitalistic, like evolution.

It does not have to be a social network, but it does seem likable, social people would immediately get more wealth. It's a transaction + reputation + existence network. Your coin quantity is based on reviews others give you for past transactions (social or financial) plus the mere fact that you were able to engage in economic or social activity with others (a measure of the probability of your existence). There have been coins based on trust networks but I have not looked into them. It's just the only way I can think of to solve the big issues. If the algorithm can be done in a simple way, then that's evidence to me that it is the correct way to go.

Coins give legal control of other people's time and assets. If you and I are not popular in at least a business sense where people give real money instead of "smiles" and "likes" like your brother, why should society relinquish coin (control) to us? The "smiles" might be in a different category than the coin. I mean you may not be able to buy and sell likes like coin. Likes might need to be like "votes". You would get so many "likes" per day to "vote" on your friends, rather than my previous description of people needing to be "liked" in order to give likes, which is just a constant-quantity coin. Or maybe likes and coins could be both: everyone gets so many likes and coins per day, but they are also able to buy/sell/accumulate them. I have not searched for and thought through a theoretical foundation for determining which of these options is the best.

Another idea is that everyone would issue their own coin via promises. This is how most money is created. "Coin" implies a tangible asset with inherent value, but paper currency is usually a debt instrument. "I will buy X from you with a promise to pay you back with Y." Y is a standard measure of value like 1 hour of a laborer's time plus a basket of commodities. Government issues fiat with the promise that it buys you the time and effort of its taxpayers, because it demands taxes be paid in that fiat. This is called modern monetary theory.

So China sells us stuff for dollars, and those dollars give China control of U.S. taxpayers, provided our government keeps its implicit promise not to inflate the fiat to an unexpectedly low value too quickly, which would be a default on its debt. So your "financially popular" existence, proven by past transactions of fulfilling your debt promises, gives you the ability to make larger and larger debt promises. How or if social likes/votes should interact with that I do not yet know. But I believe it should be like democratic capitalism. The sole purpose of votes is to prevent the concentration of wealth, distributing power more evenly. This lowered commodity prices and fed more mouths, which enabled big armies, so it overthrew kings, lords, and religions. Then machines enabled a small educated population in Europe and then the U.S. to gain control of the world.
=====
I see that the Ithaca NY local HOUR coins are a simplified version of what I was trying to invent. The things missing are: 1) digitize it; 2) enable seamless expansion (exchange rates) to other "local" communities. In other words, "local" would be a continuous expansion from yourself, to your "friends", to the world. "Friends" would have a better exchange rate as they are trusted more. "Friends" is a bad word: "trusted market participants" is better. So Amazon (at least for me) would get a high beginning trust setting. There would be an algorithm for determining the exchange rate based on how much your trusted connections trust the secondary connections (see the sketch below). Then your own history with secondary marketplace connections (such as buying from an Amazon Chinese source directly) would increase your trust of them if your exchanges with them have been good. "Trust", aka a "history of good reputation", would be the currency (not "friends"). A missing item 3) is the ability to include a review by both buyer and seller next to the history of exchanges.

Your history of exchanges is stored in your most highly trusted connections. Future buyers or sellers wanting to interact with you (or you with them) would be able to see your history of transactions. There would be a setting for how private you want to be. If you want to be intensely private, your exchange rate with distant buyers/sellers would not be as good because they can't verify your reputation. "Reputation" is the primary coin and it would be treated like any other asset. But the creation and destruction of the coin would be managed on a system-wide level so that your reputation can be compared to others', so those with the least reputation are weeded out via the marketplace. If you give nothing measurable to society, then you would get nothing from it. You can sell your reputation for dollars or whatever. "Likes" might be a 1-to-10 integer that goes beside the "review" field and adds or subtracts from your reputation. But giving likes comes at a cost to your own reputation. I have not worked out the details of this. These likes are just like the 10 signatures on the back of scrip in the Ithaca NY HOURS coins.

So I could learn a lot from their 26-year experiment on how to enable it to expand. They need to be in contact with some really good blockchain devs who could implement something like I'm describing. It could be like an explosion emanating from Ithaca NY that changes the world. Proven there, it could pop up in other places independently but instantly tap into Ithaca via a few extensions of trust. Extension of trust is the creation of a debt and credit, the source of all fiat-like currency. But managing the total on a system-wide level without a trusted 3rd party prevents it from being like current gov-backed fiat.

Some features: your personal blockchain of transactions is not publicly disclosed unless you want it to be. It is also recoverable and reversible if > 50% of your most local trusted sources agree to your request for recovery. So no permanently lost coins. A thief and those who accepted funds from thieves would lose out. But if you get hacked too much, then the reversals hurt your reputation.
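A sketch of how that exchange rate might be computed, assuming each connection carries a 0-to-1 trust weight and the rate decays as the product of trust along the best chain. The names and the max-product rule are my own illustrative choices, not a worked-out protocol:

```
import heapq

def exchange_rate(trust, me, seller):
    # best-chain trust = max over chains of the product of edge trusts;
    # since trusts are <= 1, a Dijkstra-style search on the product works
    best = {me: 1.0}
    heap = [(-1.0, me)]
    while heap:
        neg, node = heapq.heappop(heap)
        rate = -neg
        if node == seller:
            return rate
        if rate < best.get(node, 0):
            continue                      # stale heap entry
        for nxt, t in trust.get(node, {}).items():
            r = rate * t
            if r > best.get(nxt, 0):
                best[nxt] = r
                heapq.heappush(heap, (-r, nxt))
    return 0.0   # no chain of trust: worst possible exchange rate

# I trust Amazon 0.9 directly; a Chinese source only via Amazon's 0.7
trust = {'me': {'amazon': 0.9}, 'amazon': {'cn_source': 0.7}}
print(exchange_rate(trust, 'me', 'cn_source'))   # 0.63
```

Because trust weights are at most 1, the product never increases along a chain, which is what lets the search behave like Dijkstra's algorithm (effectively shortest path on -log of trust).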


zawy [8:59 AM]
There are several crucial good features this has:

1) There's not exactly a single coin, but a continuous spectrum of exchange rates between reputations, in keeping with evolvability.
2) Security/protection of value via reversibility by local consensus.
3) The local consensus that determines reputation points and reversals can be penalized by the wider market if it has a reputation of being a bad or dumb consensus.
4) It's not a fixed-quantity coin (the quantity of coin is determined by the market rather than an arbitrary decision by core devs, under the constraint of a protocol I haven't defined).
5) There is not a central blockchain, which has security, privacy, anonymity, and failure problems.
6) The protocol can have various parameters chosen by the user. The user can choose his reputation coin's characteristics. The wider market will decide how to value that coin. The users decide the parameters that determine how to value others' reputations. I might trust Chinese manufacturers to send product more than other people do. You could decide this by haggling on price, but auto-searching for buyers and sellers needs you to define how you're going to rate potential candidates. Even the protocol has the potential of being changeable (evolvable).
7) OpenBazaar is not needed because it's inherent to the protocol. If you have a history of selling an item and allow your buyers to make it public, then scans of the network reveal you. Certain requirements are needed, such as not being able to pick and choose which past buyers can reveal past transactions.
8) Besides having "cross chain atomic swaps" and OpenBazaar built in via a very simple protocol (even Zcash-level anonymity might be choosable for individual transactions), I think it could also include STEEM and LBRY objectives as well as smart contracts.

9) Government would have to bend over backwards to justify taxing your marketplace reputation. Even VAT taxes might have trouble if every reputation credit you issue creates a reputation debit. This could turn bank manipulation of government against both gov and banks: we are not taxed for taking out loans, which enables banks to charge more interest. When we buy a house, our signature promising to repay the debt is an asset on the bank's balance sheet. This enables them to create money out of thin air via the Fed, which is somehow connected to the Fed's overnight interest rate. You pay 6%, the bank gets 5%, the Fed gets 1%, or something like that. The rest of the money (your house's value) came from nowhere to pay the previous owner, and goes back to nowhere as you pay it off, except for the interest you gave to the banks and the Fed. Our promise to pay it back is the source of the initial money. Banks might be limited in their ability to do this by reserve requirements.

Anyway, the system I'm describing makes your local trusted marketplace connections your bank. They are basically issuing credit to relative strangers by vouching for your reputation to repay. Your local network is taking the risk of you not repaying them. You repay the debt to your creditors via future transactions. The amount you buy must equal the amount you sell. Your expenditures equal your income, so there is no net income to tax, as long as you do not convert your reputation credit to dollars. You and your local network have no net asset to be taxed. Any net assets you gain for resale are inventory that is not taxed (if less than $10M).
I do not propose any mining, but local connections validate and record your transactions (including smart contracts). Everyone "mines" by giving more than they receive. Best summary of the idea: by initially trusting people more than your measurement of their reputation justifies, you are loaning trust to the system that the system will pay back to you. So "trust" is the debit side (what you give) and "reputation" (what you receive) is the credit side of your personal balance sheet, which the system records on your local "connections" (these are not simple network peers but people with whom you have a history of transactions).

Let's say I send you a 2 pound bar of tellurium for nothing except to gain reputation points in the system. I need you to be a part of the system and to record the transaction. That still does not benefit my reputation unless you also gain reputation by buying or selling with others. Then those others and I trust each other's reputation more since we all trust you. A history with them builds trust without you, so you could drop out and things would not crash. The trick is for the protocol to keep track of things so that it is not tricked by false identities into unjustly increasing or decreasing reputations. There needs to be a pre-existing trust to get it started. The system does not create any trust. It only keeps track of who deserves a credit of trust from past giving of trust, and who owes a debt of trust by receiving goods, services, or likes without trusting anyone.
The only way to get a good reputation is to sell goods or services to someone who is not in your network. You get more reputation if you send the goods or services to someone who is not in anyone's network, provided they subsequently add others to their network who are not in yours. This should add to your reputation only after the fact and only 1 level deep, and it should decrease after they've added a few, so it's not a pyramid scheme. The goal is not to reward you for bringing in others, but to reward you for making a real sale to a real independent person (not your personal friends who did not give anything in return) who will use the system on their own. This is the same thing as "burning" something such as human labor (in antiques) or computing resources. Nick Szabo has also stressed the importance of the age of an item and its history of use as a currency as increasing its value. So the length of time someone has been holding and building reputation without violating trust would add value to their reputation. This creates some added value for early adoption and for sticking with the system. The formulas for calculating reputation need to be derivable from statistical theory or determined by the marketplace.

Saturday, July 15, 2017

Best difficulty algorithm: Zawy v6

This page will not be updated anymore.

See this page for the best difficulty algorithms


# Tom Harding (dgenr8) "wt-144" 
# Modified by Zawy to be a weighted-weighted harmonic mean (WWHM)
# Zawy-selected N=30 and timestamp handling for all coins.
# No limits in rise or fall rate should be employed.
# MTP should not be used

# set constants
N = 30
T = 600        # target solvetime in seconds
adjust = 0.98  # 0.98 for N=30
k = (N + 1) / 2 * adjust * T

# algorithm
d = 0; t = 0; j = 0
for i in range(height - N + 1, height + 1):  # N most recent blocks
    solvetime = TS[i] - TS[i-1]
    solvetime = min(solvetime, 10 * T)   # clip huge forward timestamp jumps
    solvetime = max(solvetime, -9 * T)   # allow (but clip) out-of-order stamps
    j += 1
    t += solvetime * j   # linear weight: most recent solvetime counts most
    d += D[i]
t = max(t, T)  # in case of startup weirdness, keep t reasonable
next_D = d * k / t
And apparently better, and amazing in that there's not even a loop or any looking at old data:

=======================

# Jacob Eliosoff's EMA (exponential moving average)
# N=15 (Zawy-selected)
# MTP should not be used

import math
N = 15
ST = previous_timestamp - timestamp_before_that  # previous solvetime
ST = max(T/50, min(T*10, ST))                    # clip extreme solvetimes
next_D = previous_D * (T/ST + math.exp(-ST/T/N) * (1 - T/ST))
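As a quick sanity check (my own test harness, not part of the algorithm), the EMA can be simulated against a sudden hashrate step, assuming exponentially distributed solvetimes:

```
import math, random

T, N = 600, 15
hashrate = 1.0            # hashes/s, so the mean solvetime is D / hashrate
D = T * hashrate          # start at the equilibrium difficulty
random.seed(1)
for block in range(600):
    if block == 300:
        hashrate *= 5.0   # a 5x miner jumps on mid-run
    ST = random.expovariate(hashrate / D)   # exponential solvetime
    ST = max(T/50, min(T*10, ST))
    D *= T/ST + math.exp(-ST/T/N) * (1 - T/ST)
print(D)  # should settle near T * hashrate = 3000 after the step
```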

The following is older text. The important stuff is above.
# Zawy v6 difficulty algorithm 
# Newest version of Zawy v1b
# Based on next_diff=average(prev N diff) * TargetInterval / average(prev N solvetimes)
# Thanks to Karbowanec and Sumokoin for supporting, testing, and using.
# (1+0.67/N) keeps the avg solve time at TargetInterval.
# Low N has better response to short attacks, but wider variation in solvetimes. 
# Sudden large 5x on-off hashrate changes with N=12 sometimes have 30x delays versus 
# 20x delays with N=18. But N=12 may lose only 20 blocks in 5 attacks versus 30 with N=18.
# This allows timestamps to have any value, as long as > 50% of miners are
# approximately correct and as long as timestamps are ALLOWED to 
# be out of order to correct bad timestamps. 
# Miners with >50% can be prevented from driving difficulty down to 1 if
# nodes do like bitcoin and have a median time and forbid blocks to have a timestamp
# more than 2 hours ahead of that time. 
# For discussion and history of all the alternatives that failed: 
# https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a
#
# D = difficulty, T=TargetInterval, TS=TimeStamp, ST=solveTime

N = 16  # Averaging window. Can conceivably be any N > 6. N=16 seems good for small coins.
X = 6   # Size of expected "hash attacks" as a multiple of avg hashrate. X=6 for new small coins.

# An X too small is unresponsive. An X too large is subject to timestamp manipulation.
# The following is how X is used.

limit = X**(2/N)  # Protect against timestamp error. Limits avg ST and thereby next_D.

# Instead of X and limit, there can be a limit on the individual TS's in relation 
# to previous block like this:
# R=6  # multiple of T that a timestamp can be from the expected time relative to previous TS.
# Then nodes enforce that the most recent block has a TS satisfying:
# TS = TS_previous_block + T + R*T  if TS > TS_previous_block + T + R*T
# TS = TS_previous_block + T - R*T  if TS < TS_previous_block + T - R*T
# 
adjust = 1/(1 + 0.67/N)  # Keeps correct avg solvetime.

# get next difficulty

ST = 0; sumD = 0
for i in range(height, height - N, -1):   # go through N most recent blocks
    # Note: TS's mark the beginning of blocks, so each ST below is shifted back 1
    # block from the D for that ST, but it does not cause a problem.
    ST += TS[i] - TS[i-1]   # Note: ST != TS
    sumD += D[i]
ST /= N                                # average solvetime
ST = min(max(ST, T/limit), T*limit)    # limit avg ST, and thereby next_D

next_D = (sumD / N) * T / ST * adjust

# It is less accurate to use the following, even though it looks like the N's divide out:
# next_D = sum(last N Ds) * T / [max(last N TSs) - min(last N TSs)]

=============== post to Bitcoin Gold github:

That was Digishield's reasoning. Reading the history of the Digishield development gives the impression the asymmetry caused problems, so they added the "tempering" to "fix" it, maybe not realizing this fix just made it so slow that the 32/16 became irrelevant. Either way, the main problem is the opposite: not returning to normal difficulty fast enough after a big hash miner leaves, causing long delays between blocks. Bitcoin Cash tried to solve this by doing the reverse asymmetry of dropping a LOT faster than it rises. This has caused oscillations, issuing coins too fast, and a few blocks every 2 cycles with really long delays. Asymmetry in the allowed rise and fall will change how fast coins are issued at the least, requiring an adjustment factor.

Rising fast protects your constant miners, although if a large miner comes on and off at the right times and has a bigger coin to always return to for a base profit, they can always get 1/3 of the coins issued at "zero excess cost" in difficulty (the difficulty algo not rising fast enough to adjust to the increase in hashrate). The only thing that can help is a shorter averaging window to respond faster, but it turns out this also allows more frequent accidental drops in difficulty, and if they simply attack more often for shorter periods, they can still get 1/3 of the blocks for "zero excess cost". Approximately, they just need to attack for 1/2 an averaging window and stay off the next full averaging window, or just choose moments when difficulty is accidentally a little low. Dropping fast prevents a lot of long-delay blocks after an attack and prevents your constant miners from suffering a long period of high difficulty.

By leaving in the +/- 16% limit I am only trying to prevent catastrophic attacks on the timestamp. For example, if the code keeps bitcoin's node-enforced 2-hour limit on how far forward miners can assign timestamps, and if a pool has >50% hashrate, then after a few blocks they would "own" the MTP (median time past) and can set it 2 hours ahead of time (12 blocks). Zcash will likely reduce this to 900 seconds, which is close to the 1000 seconds I recommended before they launched a year ago. Their current limit might be 3600 seconds. It appears BTCG copied Zcash's difficulty code. Keep in mind Zcash has 2.5 minute blocks, so if BTCG is using a stricter time limit than BTC like Zcash does, it should not go below 3600 seconds. Zcash can do a 900 second limit because that is 6 blocks for them. The equivalent time in BTCG is 3600 seconds.

With N=40 like I've proposed, the 2-hour limit would allow a miner with 10x the normal hashrate to make the difficulty think it needs to drop to 40/(40+12) = 77% of the correct difficulty when they begin to own the MTP. After 12 blocks, difficulty would be low by ```40^12 / [(40+12)*(40+11)*(40+10)*...*(40+1)] = 17%``` of the normal difficulty, which is only 1.7% of the correct difficulty if they have 10x the normal hashrate. By limiting the drop to 16% per block, difficulty will get down to 43% instead of 17%. A tighter limit of +/- 12% instead of 16% may be good (69% would be the low). This is with bitcoin's 2-hour limit. I think BTCG has copied Zcash, so maybe it is reduced to 1 hour. The +/- 12% is stricter than a 1-hour limit, so changing from 2 hours to 1 hour will help at a limit like +/- 16%, but not make a difference at +/- 12%.
A 1-hour limit on time with no other limit would allow a timestamp attacker to get difficulty down to 61%, which is why I said the +/- 12% allowing a 69% drop is stricter (better). The two don't combine to help. Using the MTP like Zcash, and probably BTCG, does prevent < 50% miners from manipulating the timestamp. But it makes the difficulty 5 blocks slower in responding. There is a fix for this that would require more code changes. See my [Zawy v6](http://zawy1.blogspot.com/2017/07/best-difficulty-algorithm-zawy-v1b.html).

I'll show the +/- 12% (or 16%) does not prevent the N=40 from responding as fast as it can. (I'm going to edit my previous post to recommend 12% instead of keeping the 16%.) Let's say an attack has 10x the normal hashrate. With N=40, the avg time it takes the difficulty to completely respond to meet the challenge is 40 blocks. So it will rise, on avg, this much per block: 10^(1/40). In my testing, a limit on the rise equal to 10^(2/40) = 12.2% was only reached about 10% of the time. I don't expect BTCG to experience a 10x "attack" very often, so 12% with N=40 seems correct.

Another way to reduce the effect of timestamp manipulation is to limit how far the next timestamp can be from the previous timestamp. I've found a good choice to be +/- 6*T from where you expected the solve to occur, where T = 600 seconds for BTCG. You expect the solve to be 600 seconds after the previous timestamp, so you would limit timestamps to 600 +/- 3600 seconds from the previous timestamp. This allows timestamps to be out of order, which is important in Zawy v6. But if BTCG does like Zcash and uses the MTP protection/delay AND the nodes are enforcing the +3600 limit based on real time instead of comparing to the previous timestamp, then you can set the minimum to 1 second after the previous timestamp. Otherwise, without the nodes enforcing a real UTC time limit, a miner with >20% hashrate could drive difficulty to "0" in a few hours or days if a "negative" timestamp relative to the previous one is not allowed, even when using MTP and a 3600-second forward time limit.

Without nodes enforcing real time, letting miners set the time, any >50% attacker can drive difficulty to zero with any algorithm. BTW, if you have a real time available to nodes, you do not need consensus (i.e. POW mining) because you could create a synchronous deterministic network, which does not have the Byzantine or FLP problems.

The +/- 6*T limit works out to be about the same as the 10^(2/N) limit. They overlap, so it is not an additive benefit.

I tried many different schemes for difficulty such as a dynamic averaging window, least squares fitting, and most-recent-block-more-heavily-weighted. Nothing worked better than simple:
```next_D=avg(past N D) * T / avg(past N solvetimes) / (1+0.67/N)```
with the two options for solvetime limits above (+/- 3600 on each solvetime, or X^(2/N) and X^(-2/N) on the average, where X is the expected max hash attack size as a multiple of baseline hashrate). The (1+0.67/N) factor keeps the average solvetime on target. Note that ```next_D = sum(N D's) * T / [max timestamp - min timestamp]``` as is usually used is not as accurate if timestamps are being manipulated. The implied N's in the denominator of my averages will not cancel during a manipulation as this alternative equation assumes.
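A toy check of that last point, assuming constant difficulty and one bogus mid-window timestamp (the numbers are made up): the average of individually clipped solvetimes cancels the bad stamp, while the max-minus-min denominator is dragged by it.

```
N, T = 10, 600
TS = [i * T for i in range(N + 1)]   # perfectly on-target timestamps
TS[5] += 4000                        # one bogus mid-window timestamp
D = [1000] * (N + 1)                 # constant difficulty

# avg(D) * T / avg(clipped solvetimes): the negative solvetime cancels the spike
STs = [max(-9*T, min(10*T, TS[i] - TS[i-1])) for i in range(1, N + 1)]
next_D1 = (sum(D[1:]) / N) * T / (sum(STs) / N)

# sum(D) * T / [max(TS) - min(TS)]: max() picks the bogus stamp
next_D2 = sum(D[1:]) * T / (max(TS) - min(TS))

print(next_D1, next_D2)   # 1000.0 vs ~857: only the first stays correct
```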

Difficulty has a seductive illusion of being "improvable". Any "fix" that tries to predict attacker behavior without employing a symmetrical "fix" to counter him acting exactly the opposite (and everywhere in between) will leave an exploitable hole or cause an undesirable side effect. Any fix that is symmetrical is limited in scope before it has undesirable side effects. We want fast response to changes in hashrate and a smooth difficulty when hashrate is constant. My best theoretical approach was a dynamic averaging window in Zawy v2 that triggers on various measures detecting a change in hashrate. For complex reasons, this still does not do better than simple average.

========
post to Zcash github:

Any upper limit you apply to timestamps should be reflected in a lower limit. For example, you could follow the rule that the next timestamp is limited to +/- 750 seconds from the previous timestamp +150 seconds (+900 / -600). If you don't allow the "negative" timestamp (-600 from previous timestamp) AND if miners can assign timestamps without a real-time limit from nodes, then a miner or pool with > 20% of the network hashrate can drive the difficulty as low as he wants, letting everyone get blocks as fast as he wants, in less than a day.
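A sketch of that rule in code, assuming Zcash's T = 150 seconds; the helper name is mine:

```
def clamp_timestamp(ts, prev_ts, T=150, R=750):
    # allowed window: previous timestamp + T, plus or minus R  (+900 / -600)
    lo, hi = prev_ts + T - R, prev_ts + T + R
    return max(lo, min(hi, ts))
```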

A symmetrical limit on timestamps allows honest miner timestamps to completely erase the effect of bad timestamps. (You do not need to wait 6 blocks for MTP like Zcash does in delaying the use of timestamps for difficulty; see footnote.) If you allow the symmetrical "negative" timestamps, you do not need nodes to have the correct time with NTP or GPS, unless miners collude with > 51% agreement on setting the timestamps further and further ahead of time to drive difficulty down. That's a real possibility if miners decide they do not like a certain fork due to it not providing them with enough fees.

But if you do not allow the apparently negative solvetimes, you better do like ETH and depend on 3rd parties for your node times in order to limit how low a timestamp manipulator can drive your difficulty.

But if your nodes have an accurate time, you do not need mining. The only fundamental reason for mining is to act as a timestamp server to prevent double spending. If you have an accurate time on all nodes, then you can make it a synchronous network, which eliminates the need for consensus, which eliminates the need for Byzantine protection via POW.

BTC and ETH depend on nodes to limit the future time assigned to blocks. Zooko was the only one here who seemed to know there is something wrong about strong reliance on nodes having the correct time. The extent to which BTC and ETH need those forward-time limits to be enforced by real time is the extent to which they do not need mining.

Footnote:
MTP does not stop a 25% attacker who can set timestamps > 4 blocks ahead if other miners are not allowed to assign a "negative" timestamp to eliminate the error in the next block. But if you allow the "negatives" then MTP is not needed. Putting your tempering aside, this assumes you use

next_D = avg(D's) * T / avg(solvetimes, allowing negative solvetime)
instead of

next_D=sum(D's) * T / [max(Timestamps) - min(Timestamps) ]
because the N's of the denominator and numerator of the first equation do not cancel like you would think and hope (in order to use the second equation) when there are bad timestamps at the beginning and end of the window. With the MTP, your difficulty is delayed 5 blocks in responding to the big ETH miners who jump on about twice a day. That's like a gift to them at the expense of your constant miners.

Also, your tempered N=17 gives almost the same results as a straight average N=63. I would use N=40 instead, without the tempering. It should reduce the cheap blocks the big ETH miners are getting.

Your 16% / 32% limits are rarely reached due to the N=63 slowness. This is good because asymmetry is a problem, although it would not be as bad as BCH. Use "limit" and "1/limit" where limit = X^(2/N), N=63 for your current tempering, and X = the size of the larger ETH attackers as a multiple of your total hashrate, which is about 3. This allows the fastest response up or down at N for a given X with 80% probability. Change the 2 to a 3 to get a higher probability of an adequately fast response. The benefit is that it is a really loose timestamp limit on individual values, as long as the aggregate is not too far from the expected range.
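For those numbers, the suggested limit is much looser than it sounds; a one-line check (N and X as above):

```
N, X = 63, 3
limit = X ** (2 / N)
print(limit, 1 / limit)   # ~1.0355 and ~0.9657, i.e. roughly +/- 3.5%
```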

Monday, July 10, 2017

Doing better than the simple average in cryptocoin difficulty algorithms

I am still trying to find a better method than the simple avg, but I have not found one yet. I am pretty sure there is one, because estimates of hashrate based on avg(D1/T1 + D2/T2 + ...) should be better than avg(D)/avg(T) if there is any change in the hashrate during the averaging period. This is because avg(D)/avg(T) throws out details that exist in the data measuring hashrate. We are not exactly interested in avg(D) or avg(T). We are interested in avg(D/T). The avg(D/T) method does not throw out details. Statistical measures throw out details. You don't want to lose the details until the variable of interest has been directly measured. I learned this the hard way on an engineering project.

But avg(D/T) hardly works at all in this case. The problem is that the probability distribution of each data point D/T needs to be symmetrical on each side of the mean (above and below it). I'm trying to "map" the measured D/T values based on their probability of occurrence so that they become symmetrical, then take the average, then un-map the average to get the correct avg(D/T). I've had some success, but it's not as good as the average, because I can't seem to map it correctly. If I could do it, then another improvement becomes possible: the least squares method of linear curve fitting could be used on the mapped D/T values to predict where the next data point should be. All this might result in a 20% improvement over the basic average.

Going further, sudden on and off hashing will not be detected very well by least squares. Least squares could be the default method, but it could switch to a step-function curve fit if a step change is detected. I just wanted to say where I'm at and give an idea to those who might be able to go further than I have.
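A quick simulation of why avg(D/T) misbehaves, assuming constant difficulty and exponentially distributed solvetimes (so the true hashrate is exactly D/T): the tiny solvetimes in the exponential's left tail blow up the per-block ratios.

```
import random

random.seed(7)
D, T, N, trials = 1000.0, 600.0, 30, 2000
r1 = r2 = 0.0
for _ in range(trials):
    ts = [random.expovariate(1 / T) for _ in range(N)]  # mean solvetime T
    r1 += (sum(D / t for t in ts) / N) / (D / T)        # avg(D/T) estimator
    r2 += (D / (sum(ts) / N)) / (D / T)                 # avg(D)/avg(T) estimator
print(r1 / trials, r2 / trials)  # avg(D/T) is far too high; avg/avg is near 1
```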

Numenta's CLA needs 6 layers to model objects

posted to numenta forum
====
Back when there were only 2 white papers and a few videos, I became interested in the HTM, and saw a video of a 2D helicopter being detected, and wondered about the relation between the layers they used and the ability to recognize objects. I remembered that 6 equations with 6 unknowns (the degrees of freedom) are required to solve the dynamics of 3D rotation and translation. The layers of the helicopter HTM matched what it was able to detect, as if they were unknowingly being used in a subtle 2-equations-and-2-unknowns methodology. Of course this begs the question "Are the 6 layers in the cortex required to see the 3D world?" Numenta's view of the cortical column implies that the 6 layers have nothing to do with this, but I would like to question that view. Jeff has also warned against pursuing the reverse black-hole question no one has ever escaped: "Is the 3D world the result of a 6-layered brain?" But an understanding of the relation between mass and space-time prevents me from abandoning the reverse question.

More importantly, physics has an elephant in the room that is rarely acknowledged and questioned: the only integers that appear in physics are the result of 3D spacetime, and Feynman states no fundamental aspect of QED requires an extension beyond 1D. QED is sort of the core of all physics except for gravity and nuclear stuff. An expert in the area informed me that spin is what creates 3D space, so my line of questioning is suspect. But my view is that we may have invented spin to maintain the view that objects are independent of our perceptions. I admit I am immediately deep in a recursive black hole: the 6 layers is a mass of neurons that I'm proposing we can see only because we have the 6 layers. BTW, if we had 10 layers to support the perception of 4D objects in 4D space, then I believe all velocities would be static positions and all accelerations would be velocities. Instead of E + mc^2 = 0 we would have E + mc^3 = 0. (Now really getting side-tracked on the physics: by keeping relativity units correct, there is a missing negative in some equations. Another example is F + ma = 0, where the "F" is more correctly defined as the reactive force of the object, which is in the opposite direction of the "a". This comes from meters = i*c*seconds, which comes from Einstein's "Relativity" appendix 2, which he stated allows use of Euclidean instead of Minkowski space-time, in keeping with the Occam's razor requirement.)

What I'm suggesting is falsifiable. Others posting here will know if it takes 6 layers to fully recognize objects in 4D space-time. The degrees of freedom are N translational plus N(N-1)/2 rotational. I tried testing the theory via observation and thought of ants. It seems to be supported there: their eyes, which need to detect only 2D "shadows and light" without rotation, have roughly two layers. And yet their feelers and front legs, having to deal with 3D objects in 3D space, have 6 layers. There's a great extension to this observation: wasps are the closest cousins of the ants and have 6 layers for their eyes.
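The degrees-of-freedom count behind these numbers, as a quick check (6 for 3D, and the 10-for-4D and 15-for-5D figures mentioned below):

```
def dof(n):
    # n translations plus n*(n-1)/2 rotation planes in n spatial dimensions
    return n + n * (n - 1) // 2

print([dof(n) for n in (3, 4, 5)])   # [6, 10, 15]
```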

I posted this question nearly a decade ago in the old forum, but I'll ask again. Is a 6 layer HTM required for fully characterizing 3D objects in 4D space-time?
=====
I think a single layer would require a lot more new training on every object. For example, it sees a circle moving about and learns its behavior. Then it turns sideways and turns out to be a cylinder, and then it starts rotating, so training has to start over. I don't think it could conceive very well that "this is the same object" and/or generalize the lessons learned on past objects to future objects. It just seems like it would have difficulty understanding objects like we do. I believe 6 layers would be able to perceive the laws of dynamics but 1 layer would not. These six layers are not an HTM but the foundation of a single cortical column. Each CLA layer of the HTM would require the 6 layers. So the CLA would need to be redone if you want it to think like mammals and see like wasps. The motor control layer (5th layer of cortex) may also serve part of this "inherent object modelling", not just motor control. The motor control part might be crucial to developing the concept of inertia (mass). Mass is another variable ("dimension"), which implies 7 layers should be present. To get out of that mathematical corner, I have to conjecture that mass is something special in the modelling, like "the higher dimensions that 6 layers can't model and that have permanence".

I do not mean to say that 6 layers is necessarily inherently needed in A.I. to be superior to humans, even in the realm of understanding physics, but that it is needed to think more directly like animals. But if 6 layers per HTM layer is actually needed for a higher intelligence, then 10 layers to do 4D space should be even more powerful, and 15 layers are needed for 5D. I do not accept the conjecture that objective reality, if there is one, depends on a specific integer of spatial dimensions like "3".

The visual cortex by itself with its 6 layers does not seem to have any concept of objects, but I think the 6 layers are still needed for encoding the information so that the concept of the objects is still extractable by the higher levels in the "HTM" of the brain (e.g. frontal lobes). But the concept of an object seems to be possible in the 6 layers just "behind" the eyes of flying insects: wasps certainly have a better concept of the object nature of people than ants, judging by the way they identify and attack. Ants are virtually blind to what people are, except for detecting skin and biting.