Wednesday, July 26, 2017

Ideal currency's relation to thermodynamics of Earth

posted to reddit thread

Again, the goal is value stability; Nash said value should be constant. This does not mean constant quantity. From online sources I can't see where I'm in disagreement with Nash, nor how the ICPI can be built backwards from currency competition, because he says the ICPI "could be calculated from the international price of commodities". Even my suggestion that money should devalue very much like a 2% inflation target is not an idea that he throws out: [wikipedia again] "The policy of inflation targeting, whereby central banks set monetary policy with the objective of stabilizing inflation at a particular rate, leads in the long run to what Nash called ‘asymptotically ideal money’ – currency that, while not achieving perfect stability, becomes more stable over time." Moreover, the mild-inflation idea as a way to slowly erase debts is something very important that Nash's "ideal money" does not take into account, so from what I see he's missing something crucial. It is possible his ideal money (perfect stability in value) is "ideal" only when the marketplace is ideally guided by "ideal law" that prevents money from concentrating into fewer hands in addition to enforcing basic rule of law in each transaction. Wealth can concentrate from loans, monopolies, and lobbies improperly affecting laws. It can also accumulate from efficiency gains due to size, but a healthy system requires competition and dispersion, so I suspect allowing a company to gain over 50% of market share should be "made against the law" even if it followed all other aspects of ideal law. What to do with things that need 100% market share, like roads and electricity, remains a problem (letting two toll roads compete for the same route is rarely reasonable).

But wealth concentration into a few hands seems to be what a completely free market wants. Progressive tax to redistribute it back out is used in most countries. People get very wealthy as a result of riding society's wave of progress and technology more than from any particularly great or noble intellect or skill. Instant billionaire status at age 25 after 4 years of work and near-zero capital investment makes no sense. Getting back to my point: mild inflation is another check on inefficient, less powerful systems that allow wealth concentration. In ancient times, wealth concentration stifled society; then the people would revolt, install a new king or priest, and all debts to the old "lords" were erased. Jews learned to do it as a matter of course from other societies by declaring "jubilee" every 50 years. Now we have inflation instead.

I can agree with "ideal money" instead of mild inflation if a different fundamental assumption is made: if the wealth accumulation is prevented by something like progressive tax and smart laws on loans, then forcing constant value (zero inflation) on humanity would force everyone to become more efficient before expanding and go a long way to slow population growth and destruction of the biosphere (and in particular, not run into resource constraints so quickly, causing population collapse).

But in reality, the faster expansion and waste encouraged by low inflation will overpower the ideal money's more conservative expansion.

I am not sure currency should or needs to follow Zipf's law, but it appears it does, and therefore probably should. I agree competition in currency will result in an ideal currency, but knowing what the ideal outcome for people will be (or rather, the most likely-to-succeed outcome) is how you design the ideal money from scratch and then make it available. It also helps me in investing: I am more bullish on bitcoin if it forks. These ideas cause me to predict it will fork several times.
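As a toy illustration of what Zipf's law would mean for competing coins (the numbers are hypothetical, and cap ∝ 1/rank is the assumed idealization):

```python
# Toy illustration (an assumed idealization, not measured data): if coin
# market caps follow Zipf's law, the coin at rank r holds cap_1 / r.

def zipf_caps(top_cap, n):
    """Market caps for ranks 1..n under an idealized Zipf distribution."""
    return [top_cap / rank for rank in range(1, n + 1)]

caps = zipf_caps(100e9, 5)   # hypothetical $100B top coin
# Rank 2 is worth half of rank 1, rank 3 a third, and so on.
assert caps[1] == caps[0] / 2
assert caps[2] == caps[0] / 3
```

Under this idealization, a fork that splits the leader into ranks 1 and 2 leaves the pair worth about 1.5x the original, which is one way to read "I am more bullish on bitcoin if it forks."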

It must fork in order to achieve stable value as its use expands, either by alts or hard forks (which I guess is another name for an alt with a big pre-mine). But the ideal is to find an objective definition and measure of commodity prices and to figure out the best way to expand and contract the supply as the prices change. I agree defining and measuring is a big problem. There must be an objective theoretical way to define "commodity" and the weighting factors needed. An even bigger problem is measuring it without a 3rd party. If the market is big enough, does everything become a commodity? Should they be weighted based on how much people spend on them? I believe it also needs slight inflation to help prevent concentration of wealth and the loan problem.

The consumption (destruction) of joules is sometimes used by people like Szabo and XCP (and implied in bitcoin and b-money) to claim "this object has value because a lot went into its creation". Rarity and antique-ness are used in a similar fashion. Trading a commodity itself that required joules to create and that maintains the economic relevance of the joules spent (e.g., copper) seems to be a much better idea. If we could trade a basket of commodities around with the ease of a fast cryptocoin, then we would have an ideal.

The starting point for how to weight the commodities is my previous suggestion: a coin = a fixed percent of Gibbs free energy available in society divided by the number of people. By replacing my initial "joules" with a ratio, I've potentially removed the necessity of it being related to joules. But joules seem to be the primary thing that creates and runs commodity producers. To be clear, 1 coin out of 100M coins would represent 1/100M control of the "total current commodity output".

% coin owned of the total in "circulation" = % of society's total commodity-producing wealth divided by # people
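A minimal sketch of this ratio, with all quantities hypothetical: the value one coin represents is the commodity-producing capacity per person, divided by the coin supply.

```python
# Sketch of the proposed peg (all numbers hypothetical): each coin tracks a
# fixed share of society's commodity-producing capacity divided by population.

def value_per_coin(total_free_energy_j, population, total_coins):
    """Joules of commodity-producing capacity one coin represents."""
    return (total_free_energy_j / population) / total_coins

v0 = value_per_coin(1e20, 8e9, 100e6)
# Fewer people with the same capacity -> each coin represents more joules.
v1 = value_per_coin(1e20, 4e9, 100e6)
assert v1 == 2 * v0
```

This also makes concrete the later claim that the coin "increases for everyone if the number of people decreases."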

So if commodity production capacity and the number of people are constant, then 1 coin is worth more in joules than in the past. So maybe I should not mention joules to others, but it's important to my attempts to connect physics to evolution and economics. (The release of entropy from Earth is deeply connected to the expansion of the universe. I've not worked on that part as much, but it requires an audience that is already aware that the 2nd law of thermodynamics is not that "entropy always increases" (see the first or second thermodynamics chapter of Feynman's famous introductory physics lectures) and that entropy is conserved on a comoving-volume basis.)

The velocity and availability of money and how to measure these will affect this in a way that I have no idea on how to measure. What is the total coin that is available for use at any given time? It might be locked up in escrow based on contracts. How do asset price fluctuations fit into this? Just watching commodity prices seems to solve all that.

If the economy has gotten bigger, as measured by commodities trying to become cheaper, then the government would print new money to build more infrastructure to support the bigger economy (law, schools, roads) if the increase appears permanent, but it should act by lowering interest rates if it seems like a temporary swing. The government should be the only bank and primary loan agent. Interest on loans would replace all taxes on the middle class.

Nash's ideal money seems to miss the final key ingredient of "value" that I've included: dividing by the number of people. It has the macroeconomic effect of weeding out weak people, because coin (commodity-producing joules per person) increases for everyone if the number of people decreases. There would be a macro-economic invisible hand to not overpopulate. The coin supply stays constant if every person becomes more efficient, but the joules each coin represents would increase. So our measure of value has an intrinsic aspect of what other people have. If you consider this, I believe it will resolve the "efficiency" problem we both mentioned. So even with the low inflation I seek, the ideal money could discourage overpopulation and overly wasteful expansion.

The end result of all this is that Earth's mass will continue the current unavoidable physics-based path of congealing and hardening until all biology is replaced by machines. The primary goal of devs is to replace brains just as motors replace muscle, and solar cells are replacing photosynthesis (20x more efficient per meter^2 and more efficient at self-replication). Silicon is better than wet brains because the lower entropy per mass of hard silicon allows precise control of electrons whereas the brain has to deal with ions that weigh 50,000x more. Photosynthesis has 20x less ability to control the electrons via photon energy.

The goal of companies is to produce the most with the fewest people and with the fewest shareholders (stock buybacks and venture capitalists). People are so outdated, even their capital is not needed by the machines except to gain market share (hotmail buyout, youtube buyout, snapchat, etc). AMZN is an exception. See the zero marginal cost society. Welfare has been disastrous for the people in the U.S. receiving it, as bad as a 30 year war on that population. The same thing will happen to the rest of humanity as we are no longer needed. There's no good or bad. There's no solution. It's just the matter on Earth congealing to a colder, harder state in keeping with thermodynamics.

Gibbs free energy, G = U + pV - TS, is the combination of entropy and energy that describes how much "useful work" can be acquired from a thermodynamic system. In a chemical reaction you subtract the Gibbs free energy of the reactants from that of the products, and if the result is negative then the reaction is spontaneous. I suspect this is the cause of evolution and economics, but I have not worked out the details of what it is about our unusually large moon's effect on the oceans and the mantle that has enabled Earth to have its entropy per mass continually lowering via life and economics instead of remaining a simple hot body like other planets. In evolution, economics, and artificial intelligence, energy coming from outside the thermodynamic system (the Sun, moon, fossil fuels, and nuclear energy) causes entropy (S) in the economic, evolutionary, or A.I. system to lower while releasing excess heat and entropy to the universe. (Fossil fuels and nuclear energy come from outside today's biosphere rather than outside the Earth, so they are still "from outside", and fossil fuels are just old Sun energy.) The Earth is a closed (not isolated) thermodynamic system, so its entropy can decrease. The entropy of the mass that is under the control of our economic system is decreasing on an "entropy per mass" basis. The metals, silicon, and carbon fiber all had oxygen removed from their elements in order to make the materials stronger. Stronger is always lower entropy because the atoms can't occupy as many vibrational states when their positions are more rigidly fixed. It is a massive reduction in entropy per atom. The strength is not only for building strong structures such as bone, buildings, and roads, but also for enabling precise control of the electrons in wires and silicon, which enables CPUs, electrical motors, and photovoltaics. Our brains are stuck with moving ions that weigh 50,000 times more.
This is why silicon is 20x more efficient at capturing sunlight energy than photosynthesis, why electrical motors are >100x more efficient than muscles (on a cost basis, but only 6x on a direct energy basis), and why CPUs are 1Mx more efficient than brains: the lower entropy per atom. (A gram of silicon can learn in 30 seconds voice-recognition tasks that ~100 grams of brain takes far longer to learn.) DNA itself is a rigid crystal, so life has also needed low "specific entropy" or "molar entropy". A constant-value currency has a quantity of coin that is equal to the Gibbs free energy under the control of the legal system that protects and guides a marketplace that depends on that currency, divided by the number of economic agents in that system. For example, an A.I. that uses a marketplace of competing agents (aka nodes) must issue a coin in proportion to the amount of CPU time and memory space that is available to the agents. The agents/nodes use the coin to request resources for expansion or reproduction. The CPU and memory are like the commodities in our economic system and the nutrients in our bloodstream that support our neurons (neurons compete against each other to establish synapses).

The wiring in neural-net A.I. (and the probabilities in Bayesian A.I.) and the synapses in brains are the prices neurons and nodes charge other neurons and nodes to activate a signal. A.I. systems buy and sell electrons the way economies buy and sell mass. The electrons in A.I. systems and the ions in brains are ultimately trying to model the movement of much larger quantities of mass (all ideas, like love, are ultimately based on mass moving in space-time) so the A.I. system (including the programmer and user) can make better use of the external world's mass to make a copy of itself. A.I. systems not doing this will not be a part of the future. The coin is what ties all the competing agents together. A fixed quantity of coin per unit of Gibbs free energy available in an economic system per competing economic agent forces the agents to lower the system-wide entropy per mass. This is a spontaneous, unavoidable result of physics when there is energy coming into the system and entropy leaving the system (Earth).
When I say "constant value currency = Gibbs free energy per person in an economizing system" I mean the "Gibbs free energy" = energy available for work = energy - entropy = energy + order = ordered energy. The usefulness of energy depends on the form it is in as well as the pre-existing order of the matter it needs to move. For example, oil in the ground is not as valuable as oil in a tanker. Gold in a vein is more useful than gold dispersed in sea water. So the order in the mass of commodities has value, like energy commodities, because it makes the mass more amenable to being moved by energy. A.I. systems, like evolution and economies, use energy to move mass to make copies of themselves and repeat the process. More precisely, the historical position of mass and potential energy gradients cause matter to form in self-replicating ways. Genes, brains, and A.I. are not forces that do anything of their own free will; they are the result of forces from pre-existing potential energy gradients that created them. They are enzymes that allow energy and mass to move in an efficient direction, not forces. The following is mathematically exactly true: intelligence = efficient prediction = compression = science. I am referring to the "density" of each of these, not the total abilities. For example, science seeks to predict the most in the least number of bytes. The least number of bytes is known as Occam's razor in science and is the 2nd of two fundamental tenets of science. The first is that observations should have the potential to prove a theory wrong (falsifiability), and that those observations always support the theory (reproducible observations). So the 1st tenet of science is prediction and the 2nd is compression, or efficiency. Total currency in an A.I. system = bytes / time that are destroyed in CPU computations and memory writes. Every computation in a CPU and memory write to RAM or a hard drive generates heat and entropy. The theoretical minimal entropy per bit destroyed is S = kB * ln(2).
kB is Boltzmann's constant. The minimal heat energy created (the energy lost) is Q = Temperature * S. In economics, the "bytes" are "dollars" that represent energy spent, like a CPU computation, to create a mass of a commodity (like the storage of a byte). When we write to memory in A.I. we are creating value that can be used in the future. Typically the writes are assigning weights to the connections in neural nets, or probabilities in a Bayesian net, or making copies of a gene in genetic algorithms. Bytes in evolution are DNA. The bytes in our bodies are cellular energy like glucose and the energy stored in the crystals of DNA. Energy-based commodities are spent to create mass-based commodities that are used for an economic system to replicate (expand), just like evolution and A.I. Total currency is the total available commodities per economic cycle. Approximating a constant number of economizing agents, like people or neurons in a brain or nodes in a neural net, means the currency is also bytes per economic cycle per economizing agent. Agents compete for the limited number of bytes in order to increase the number of bytes per agent per cycle. Coins are bytes that represent a percent ownership of the total commodities available per economic cycle per person.
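The kB·ln(2) figure is Landauer's limit, which is stated per bit erased (a byte costs at least 8x as much). A quick sketch of the numbers at room temperature:

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K (exact in SI)

def landauer_heat(temperature_k, bits):
    """Minimum heat released when `bits` are irreversibly erased at T."""
    entropy = bits * K_B * math.log(2)   # S = kB * ln(2) per bit
    return temperature_k * entropy       # Q = T * S

q = landauer_heat(300.0, 1)
# ~2.87e-21 J per bit erased at room temperature
assert 2.8e-21 < q < 2.9e-21
```

Real CPUs and RAM dissipate many orders of magnitude more than this floor, which is why "bytes destroyed" is a lower bound on the energy cost of computation, not an estimate of it.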
coins = bytes = DNA = synapses = economically beneficial arrangements of atoms and potential energy

Or rather, the first four are elements in computational systems that are used to represent and assign ownership of the last one.

Friday, July 21, 2017

A few random thoughts on Bitcoin and cryptoassets

The proportion of hash power a coin has for a given POW across all coins is probably going to be proportional to all real-world assets that fall under the legal control of those coins that use that POW (that is, hardware-constrained: if the same hardware can be used on 2 POWs then in this context they are the same POW, because the work is real, being the joules to create and run the equipment). There's real value in being the biggest kid on the block because it makes it harder to break. 90% of alts right now could be destroyed if the largest pool of the largest coin (for a particular POW) decided to attack and forward-timestamp every block to the max. Difficulty goes to zero and all coins are released (or as many as the attacker wants) in a few hours with only 60% hashrate. There's no fix. If they fork, he does it again. The motivation is to drive users to the big coin that he has been accumulating because he wants to retire instead of buying more equipment, not to mention constantly selling all the small alt coins before they figure it out and fork. This should eventually happen to all coins that have a halving schedule. So the underlying value is in being the biggest kid on the block, because it is the only kid that can't be "hacked". So side chains may displace alts. The cross swaps do not change this. They will stabilize to an exchange rate in accordance with it. If I'm right, miner economics could quickly reduce the playing field of coins that are truly peer to peer. The only alternative to being one of the very few biggest and baddest is the traditional central database. So BTC's biggest use would be between governments and banks that do not trust each other.
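A toy model of the timestamp-forwarding attack described above. The retarget rule here (difficulty scales by target time over reported time, clamped to a factor of 4 per period, loosely echoing Bitcoin's rule) is my simplification, not any specific coin's consensus code:

```python
# Toy difficulty-retarget model (my simplification, not real consensus code):
# difficulty scales by target_time / observed_time each period, clamped to 4x.
# An attacker who forward-dates timestamps makes each period look very slow.

def retarget(difficulty, target_secs, observed_secs, max_factor=4.0):
    ratio = target_secs / observed_secs
    ratio = max(1.0 / max_factor, min(max_factor, ratio))
    return difficulty * ratio

difficulty = 1_000_000.0
for _ in range(10):  # each period's timestamps report 4x the target span
    difficulty = retarget(difficulty, target_secs=1_209_600,
                          observed_secs=4 * 1_209_600)

# Ten clamped retargets cut difficulty by 4**10, about a millionfold.
assert difficulty < 1.0
```

Once difficulty is near the floor, blocks (and the remaining coin issuance) can be ground out far faster than the intended schedule, which is the "all coins released in a few hours" scenario above.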

BTC value =(BTC FLOPS) / (crypto FLOPS) x (crypto assets) / (paper assets) ?

It needs to reach $500k per coin to replace the world's $10T M2+ dollar supply, but it's not a currency. It will more likely find its value in anchoring the value of other cryptos that will represent legal control of real-world assets. The world's assets are said to be $223 trillion, so it could not be worth more than $10M per BTC (plus world asset value increases). But if someday cryptos become the legal instruments representing ownership of half the world's assets, and 1/10 of them ultimately require BTC as a basis, then again I'm back at $500k per coin. If it is both currency and asset, then $1M per coin in an optimistic view. $10k seems to be a decent exit point, but the trend and the public's lack of knowledge say at least $50k. So $100k is not a bad guess as a dreamy max.
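The arithmetic above can be restated explicitly, assuming the ~21M maximum BTC supply:

```python
# Back-of-envelope restatement of the figures above, assuming the ~21M cap.
CAP = 21e6  # maximum BTC supply

m2_replacement = 10e12 / CAP          # ~$476k if BTC replaced $10T of M2+
world_assets_ceiling = 223e12 / CAP   # ~$10.6M if BTC absorbed all assets
partial = (223e12 * 0.5 * 0.1) / CAP  # half of assets, 1/10 anchored in BTC

assert 450_000 < m2_replacement < 500_000
assert 10e6 < world_assets_ceiling < 11e6
assert 500_000 < partial < 550_000    # "again I'm back at $500k per coin"
```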

What's wrong with a BTC fork, other than preventing us from getting more free money for nothing? I mean, if it doubles the number of coins so that new entrants can afford it and so that more transactions per second can be made, how is a fork not best for society? All currencies must expand as their use increases to keep constant value so that the terms of wage and price contracts remain valid. Economies should not allow a change in the value of their primary coin any more than we should tolerate a changing definition of the joule. All equations (contracts) are invalidated or require adjustment if there is a change. If it does not expand, the currency will not be used by new entrants, who will go elsewhere. If its use is forced by government, then limited-quantity coins create a 1% class.
I think he might have estimated that if it did absolutely everything he could dream of, then a Satoshi would be $1 each in 2009 dollars. That would be $200T, which was the world's 2009 assets. This is a reasonable distant hope because only the biggest blockchain can be the secure definition of truth because its miners could destroy any other non-3rd party, non-oracle chain with a 51% attack (or timestamp forwarding). Every world asset that wants to define its owner (or vice versa) in the most secure way without a central authority must reference a BTC transaction.
The inability to inflate the references faster than the increase in world assets prevents the references from losing their value. The constant-quantity aspect is kind of a cheat and marketing. Early adopters profit at the expense of late adopters to the "faith". Currency has always been directly connected to government / religion. It seems to be about deciding which "God" is in control of the assets via the references. Our ideas of "freedom" originate from the universal ancient tradition of debt cancellation. Bad rulers kept debt in place until the economies were choked to death. The people would bring a new ruler (or religion) to power and he would erase all debt. That erased the power of the people who were in control and who supported the previous ruler.
reddit message sent to nullc (gmaxwell)
There is an unavoidable, unnoticed macro-economic reason for the fighting: society wants and needs BTC to expand. BTC wants to fork. Devs think they are fighting to keep it together, but they are actually fighting because macro-economic principles want it to fork. I do not want to have to choose between alts. I want BTC, but I want it to expand with its use so that it becomes a constant store of value. This will optimally help society by enabling me to write contracts that reference it as "joules of value". Even if it always increases, my renters and I can't use it because we don't know the rate at which it will increase in value. Destruction of joules via making and running hardware is not the defining aspect of "joules of value". This problem goes back to the inability of b-money to equal a basket of commodities, as Wei Dai indicated.
New users want a lower price. Old speculators do not mind getting 2 coins for 1. Miners won't mind splitting up to get more coin, even if each is half price, because hashrate competition is halved. Everyone just has fear of the unknown, and a mistaken belief that capped quantity is best.
Some developers think they have a better idea when deep down they want central control and notoriety. Others know they have a better idea. Both sides don't realize BTC wants a fork. Let the ideas compete. Let there be a fork and let the marketplace decide the relative value of each. I'm going to keep both indefinitely because they will settle to Zipf's law (#2 =~ 1/2 value of #1) as ETH crumbles from not following Satoshi's axioms of simplicity and non-3rd-party timestamps.
You can derive all characteristics of an ideal money from one underlying goal: constant value. An approximation is a basket of commodities. Going deeper, this approximation is based on the "joules" required to construct and run the commodity-producing equipment. But the "joules" is not exactly a physical measurement. It includes a "difficulty" factor that society has in acquiring and utilizing them. More precisely, it is Gibbs free energy. This may not be the ultimate measure because there is an "efficiency" aspect in how it's used even if the machine is 100% efficient in converting it to usable energy. There might be a subjective nature to how people need to define "constant value". The currency at least needs to expand with the size of the economy to keep constant value. The ultimate goal is to keep the terms of contracts valid. But there is something else just as important: it needs to slowly devalue so that hoarding is discouraged and investment is motivated. This prevents successful participants from relaxing. Evolution does not seek fairness. It seeks power. The most efficient participants retiring is not the goal. It is just a carrot to motivate them. Slow devaluation is a way of erasing old debts which drives a rejuvenation of economic activity, redistributing wealth away from the 1% who naturally use wealth to guide markets into corners or to simply loan out at interest above the rate of coin inflation until they own all the coin via the "magic" of compound interest, further stifling growth from lack of coin and having more people to come beg for a loan.

But the initial and short term goal is constant value so that CONTRACTS have a reference point that is exchangeable between all other contracts in any given legal system that is securing law in a marketplace. The value should be constant in space and time, with some caveats like the needed inflation above, and remote places that do not have a large and diverse marketplace (the coin will have more value there).

That is the background you'll need to understand the following. If BTC is going to be a currency instead of an asset (which provides the backing for real currencies), then it needs to fork as its use expands in order to maintain constant value, or it will have to let alts take away a greater and greater share of the growing cryptocurrency market.

So your point is correct only if BTC is going to be an asset. Let currencies reference it in large quantities in a slow manner in something like the Lightning Network. Assets and currencies are diametrically opposed if the asset has a capped quantity. BTC and everyone who will use it and currently holds it want BTC to fork into many coins to maintain constant value if it is going to be a currency. That is the underlying cause of the arguing.

I think you failed to appreciate Nash's "value stabilization". Bitcoin is not a commodity. The joules needed to construct and run the mining equipment are a destruction of value, whereas real mining adds value. A real stable-valued commodity is one that is not going to dry up in the near future. It stops being a commodity in a useful currency sense when that happens. Granted, BTC as a commodity has a lot of features like gold: expensive to mine versus its economic utility and largely capped in quantity (at least without a new tech advance in gold mining). But this is why gold is more useful when a law-abiding marketplace is either non-existent, stagnant, or dying. Growing economies do not need or want gold except as a safety hedge against disaster (see previous sentence).
I disagree that a measure of value in a marketplace is unavoidably arbitrary. If the "Gibbs free energy" connection is not fundamentally correct, the basket of commodities is pretty good. I believe either (or both) is founded upon letting the currency quantity (adjusted for velocity) be proportional to the total non-artificially-inflated assets in an economic system under its control, which might be proportional to the "important fluid assets" of some sort (like the commodities). I think by being proportional to what a group of people have under their "total relative control", it matches a gut feeling for how people define value. If they become rich, having more assets under their control, then it takes a larger quantity of the currency to feel like it has the same value. This breaks away from a direct joules measurement, but it leaves open the possibility of a joules-per-joules measure. The joules of value a constant-value coin represents is a proportion of the total joules of value in a society.
The size and efficiency of their commodity-producing and delivering infrastructure should stay proportional to that wealth. If the commodity machine starts struggling to meet demand and prices rise, then the coin is deflated to encourage investment in commodity infrastructure while reducing the strain on commodity production. Then there is the converse.
This discourages economic bubbles. Better commodity-based economics enables larger armies to destroy other economies. Democracy subverts capitalism towards higher commodity production instead of concentration of wealth.
This seems to break away from the joules/joules measurement, but I have an out: an ideal currency should not be a direct joules measure as I initially said, nor the proportion. It should also be divided by the number of people. If commodities get scarce, the implication is that there are too many people for the commodity infrastructure. The commodity infrastructure should be valued relative to the number of people. So the coin is restricted with the commodities to make them both retain the same price (to keep contracts valid), and this makes people have less coin (worth less) and thereby work harder for commodity production.
I've been trying to discover the connection between evolution, economics, and all other adaptive learning "machines" for years. Here are my latest tweets in this effort. I should point out all machines replacing biology are doing so because they remove oxygen from metal, silicon, and carbon "ores", which results in a far lower entropy per mass of the economic machine. Hardness and reliability are deeply connected to lower entropy per mass. There may be a connection between entropy and coin I have not discovered. Gibbs free energy touches upon it because G = U + pV - TS, which is "energy available for work". So I'll consider the possibility that the entropy-per-mass decrease we are witnessing in our evolving economic system should be connected to a reducing quantity of coin per person.
My last tweets:
Local heat/noise fluctuations under gravity/coin constraints discovers greater systemic efficiency when energy comes in & entropy goes out.
Economics, evolution, & learning are closed but not isolated thermodynamic systems. Energy in, entropy out. Entropy per mass reduces.
If you shake a jar of objects that are in a gravity field, the objects will compact. Compaction is a lowering of entropy when all other variables are the same. Entropy is conserved. The excess entropy escapes during the shaking as low-energy black-body radiation photons as a result of a heat increase from the shaking energy and friction. Energy was converted to lower entropy in the jar, plus even more entropy that was released as the heat escaped.

In evolution, the shaking energy is the Sun and moon. I've written on the importance of the moon to life on Earth. The mass on Earth is constrained to the surface which is the jar.

In economics and A.I., energy obviously is coming in from the outside, and heat is escaping. There is also a lowering of entropy per mass, as in the jar and in evolution. Law and Earth are the jar. Currency is a conveyance of energy between participants in accordance with the constraints. To allow greater compaction in a jar, sometimes you need more room for new positions to be discovered. In A.I., variables are periodically relaxed so that they can take on different values before they are slowly constricted again. This enables the system to get out of non-optimal solutions. There is also a redistribution of wealth at times in A.I. to make sure it does not get stuck in local minima (the 1% taking over).
The end game of currency will be a trust network where your reputation among friends and past buyers/sellers is the amount of currency you own to purchase things in the future. You can't lose your keys because your reputation is stored on the network. It's not centralized in any way like bitcoin, except for the protocol people should agree on. Complete anonymity is not possible, but only sociopaths don't have any friends and don't deserve any currency. A super-majority of friends can rat you out or give your keys back. You can't exchange with strangers until the network grows tentacles via 6 degrees of separation. You are penalized if a friend cheats and vice versa. You can have multiple identities but it means you would have to split friends among them, not getting any net benefit except fall-back security and dispersion to distant networks. There is no currency except how friends of friends of friends etc choose to score your reputation. There's no profit to being a dev or adopting early. There's huge profit in not being anonymous.
Your FB and Amazon "upvotes" would be carried over by FB and Amazon using the correct XML interface. And if they don't, then companies that do adhere to the protocol, opening up the data they collect to your friends, would win out.
DNA = a blockchain with many forks, and new genes = new coins (virus-induced?). Genes are the environment's currency for economizing resources toward negentropy.

Wednesday, July 19, 2017

A P2P cryptocurrency to replace FB, Amazon, Fiat, and Bitcoin.

Posted to HUSH slack. A prelude to this

Here's an idea for a cryptocoin to build upon the timestamp idea I posted a few days ago (again, that does not necessarily use the stars).

People get more coin by having more "friends" (actually, people you know to be distinct individuals). It might be a slightly exponential function, to discourage multiple identities. Your individual coin is worth more to your "local" friends than to "distant" friends. The distance is shorter if you have a larger number of parallel connections through unique routes. A coin between A and D when they are connected through friends like A->B->C->D and A->E->F->D is worth more than if the E in the 2nd route were B or C. But if E is not there (A->F->D), then the distance is shorter. More coin is generated as the network grows. Each transaction is recorded, stored, timestamped, and signed by you and your friends, and maybe your friends' friends. Maybe they are the only ones who can see it unencrypted, or you get to choose a privacy level. A higher privacy requirement means people who do not actually know you will trust your coin less. Maybe password recovery and "2-factor" security can be implemented by your closest friends. Each transaction carries a description of the item bought/sold so that the network can be searched for products. There is also a review and rating field for both buyer and seller. For every positive review, you must give 1 negative review: you can't give everyone 5 stars like on eBay, or like high-ranking reviewers on Amazon do (positive reviewers get better rankings because people like them, not because the reviews are honest).

This is a P2P trust system, but there must be a way to do it so that it is not easily tricked, which is the usual complaint, and there is a privacy issue. But look at the benefits. Truly P2P. Since it does not use a single blockchain, it is infinitely faster and infinitely more secure than the bitcoin blockchain. I know nothing about programming a blockchain, let alone would I understand one if I created a clone. But I could program this.
And if I can program it, then it is simple and well-defined enough to be hard-coded by someone more clever, needing changes only as fast as the underlying crypto standards change (about once every two decades?).
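Nothing in this post fixes a formula for the "distance" between two people, so here is one hypothetical way to make it concrete: treat each edge-disjoint route of friends as a resistor whose resistance is its hop count, so parallel routes lower the effective distance the way parallel resistors lower resistance. This is only a sketch of the idea; the function name and formula are my own illustration, not something specified above.

```python
def effective_distance(route_lengths):
    """Combine edge-disjoint routes between two people like parallel
    resistors: more routes, and shorter routes, both lower the distance.
    route_lengths: list of hop counts, one per independent route."""
    if not route_lengths:
        return float('inf')  # no connection at all
    return 1.0 / sum(1.0 / hops for hops in route_lengths)

# A->B->C->D plus A->E->F->D: two independent 3-hop routes
d_two_routes = effective_distance([3, 3])  # closer than either route alone
# Only the single shorter route A->F->D
d_short = effective_distance([2])
# Both routes together: parallel paths beat either alone
d_both = effective_distance([3, 2])
```

Under this toy formula, two parallel 3-hop routes give distance 1.5 (versus 3 for one route), and adding the 2-hop route A->F->D shrinks it further, matching the intuition in the paragraph above.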

Obviously the intent is to replace fiat, Amazon, and eBay, but it should also replace FB. A transaction could be a payment you make to friends if you want them to look at a photo. The photo would be part of the transaction data. Since only you and your friends store the data, there are no transaction fees other than the cost of your computing devices. Your friends have to like it in order for you to get your money back. LOL, right? But it's definitely needed. We need to step back and generalize the concepts of reviews, likes, votes, and products into the concept of a coin. You have a limited amount dictated by the size of the network. The network of friends decides how much you get. They decide if you should get more or less relative power than other friends.

It would not require trust in the way you're thinking. Your reputation, via the history of transactions, would enable people to trust you. It's like a brand name, another reason for having only 1 identity. Encouraging 1 identity is key to preventing people from creating false identities with a bot in order to get more coin. The trick and the difficulty is preventing false identities from scamming the community.

Everyone should have a motivation to link only to real, known friends. That's the trick and the difficulty. I'm using "friend" very loosely. It just needs to be a known person. For example, you and I could link to David Mercer and Zooko, but we can't vouch for each other. That's because David and Zooko have built up more real social credibility through many years of good work. They have sacrificed some privacy to get it. Satoshi could acquire enormous credibility through various provable verifications without even giving up privacy, so it's not a given that privacy must be sacrificed. If possible, the system should not give people an advantage merely because they are taking a risk with their personal safety.

The system should enable individuals to be safer, stronger, etc., while at the same time advancing those who advance the system. So those who help others the most are helped by others the most: "virtuous feedback". This is evolution, except it should not be forgotten that "help others the most" means "help 2 others who have 4 times the wealth to pay you instead of 4 others with nominal wealth". So it's not necessarily charitably socialistic, as people often want for potentially very good reasons, but potentially brutally capitalistic, like evolution.

It does not have to be a social network, but it does seem likable, social people would immediately get more wealth. It's a transaction + reputation + existence network. Your coin quantity is based on the reviews others give you for past transactions (social or financial), plus the mere fact that you were able to engage in economic or social activity with others (a measure of the probability of your existence). There have been coins based on trust networks, but I have not looked into them. It's just the only way I can think of to solve the big issues. If the algorithm can be done in a simple way, then that's evidence to me that it is the correct way to go. Coins give legal control of other people's time and assets. If you and I are not popular in at least a business sense, where people give real money instead of "smiles" and "likes" like your brother, why should society relinquish coin (control) to us?

The "smiles" might be in a different category than the coin. I mean, you may not be able to buy and sell likes like coin. Likes might need to be like "votes". You would get so many "likes" per day to "vote" on your friends, rather than my previous description of people needing to be "liked" in order to give likes, which is just a constant-quantity coin. Or maybe likes and coin could each be both: everyone gets so many likes and coins per day, but they are also able to buy/sell/accumulate them. I have not searched for or thought through a theoretical foundation for determining which of these options is best.

Another idea is that everyone would issue their own coin via promises. This is how most money is created. "Coin" implies a tangible asset with inherent value, but paper currency is usually a debt instrument. "I will buy X from you with a promise to pay you back with Y." Y is a standard measure of value, like 1 hour of a laborer's time plus a basket of commodities.
Government issues fiat with the promise that it buys the time and effort of its taxpayers, because government demands taxes be paid in that fiat. This is called modern monetary theory.

So China sells us stuff for dollars, and those dollars give China control of U.S. taxpayers, provided our government keeps its implicit promise not to inflate the fiat to an unexpectedly low value too quickly, which would be a default on its debt. So your "financially popular" existence, proven by past transactions fulfilling your debt promises, gives you the ability to make larger and larger debt promises. How or whether social likes/votes should interact with that, I do not yet know. But I believe it should be like democratic capitalism. The sole purpose of votes is to prevent the concentration of wealth, distributing power more evenly. This made commodity prices lower and gave more mouths to feed, which enabled big armies, which overthrew kings, lords, and religions. Then machines enabled a small, educated European and later U.S. population to gain control of the world.
I see that the Ithaca NY local HOUR coins are a simplified version of what I was trying to invent. The things missing are: 1) digitize it; 2) enable seamless expansion (exchange rates) to other "local" communities (in other words, "local" would be a continuous expansion from yourself, to your "friends", to the world). "Friends" would have a better exchange rate because they are trusted more. "Friends" is a bad word: "trusted market participants" is better. So Amazon (at least for me) would get a high initial trust setting. There would be an algorithm for determining the exchange rate based on how much your trusted connections trust the secondary connections. Then your own history with secondary marketplace connections (such as buying from an Amazon Chinese source directly) would increase your trust of them if your exchanges with them have been good. "Trust", a.k.a. a "history of good reputation", would be the currency (not "friends"). A missing 3) item is the ability to include a review by both buyer and seller next to the history of exchanges.

Your history of exchanges is stored with your most highly trusted connections. Future buyers or sellers wanting to interact with you (or you with them) would be able to see your history of transactions. There would be a setting for how private you want to be. If you want to be intensely private, your exchange rate with distant buyers/sellers would not be as good, because they can't verify your reputation. "Reputation" is the primary coin, and it would be treated like any other asset. But the creation and destruction of the coin would be managed on a system-wide level so that your reputation can be compared to others', and those with the least reputation are weeded out via the marketplace. If you give nothing measurable to society, then you get nothing from it. You can sell your reputation for dollars or whatever. "Likes" might be an integer from 1 to 10 beside the "review" field that adds to or subtracts from your reputation.
But giving likes comes at a cost to your own reputation. I have not worked out the details of this. These likes are just like the 10 signatures on the back of scrip in the Ithaca NY HOURS system. So I could learn a lot from their 26-year experiment about how to enable it to expand. They need to be in contact with some really good blockchain devs who could implement something like I'm describing. It could be like an explosion emanating from Ithaca NY that changes the world. Proven there, it could pop up in other places independently but instantly tap into Ithaca via a few extensions of trust. Extension of trust is the creation of a debt and a credit, the source of all fiat-like currency. But managing the total on a system-wide level, without a trusted 3rd party, prevents it from being like current gov-backed fiat. Some features: your personal blockchain of transactions is not publicly disclosed unless you want it to be. It is also recoverable and reversible if > 50% of your most local trusted sources agree to your request for recovery. So there are no permanently lost coins. A thief, and those who accepted funds from the thief, would lose out. But if you get hacked too often, the reversals hurt your reputation.

zawy [8:59 AM]
There are several crucial good features this has: 1) there's not exactly a single coin, but a continuous spectrum of exchange rates between reputations, in keeping with evolvability; 2) security/protection of value via reversibility by local consensus; 3) the local consensus that determines reputation points and reversals can be penalized by the wider market if it has a reputation for being a bad or dumb consensus; 4) it's not a fixed-quantity coin (the quantity of coin is determined by the market rather than an arbitrary decision by core devs, under the constraint of a protocol I haven't defined); 5) there is no central blockchain, which has security, privacy, anonymity, and failure problems; 6) the protocol can have various parameters chosen by the user. The user can choose his reputation coin's characteristics. The wider market will decide how to value that coin. The users decide the parameters that determine how to value others' reputations. I might trust Chinese manufacturers to send product more than other people do. You could decide this by haggling on price, but auto-searching for buyers and sellers requires you to define how you're going to rate potential candidates. Even the protocol has the potential of being changeable (evolvable). 7) OpenBazaar is not needed because it's inherent in the protocol. If you have a history of selling an item and allow your buyers to make it public, then scans of the network reveal you. Certain requirements are needed, such as not being able to pick and choose which past buyers can reveal past transactions. 8) Besides having "cross chain atomic swaps" and OpenBazaar built in via a very simple protocol (even Zcash-level anonymity might be choosable for individual transactions), I think it could also include STEEM and LBRY objectives as well as smart contracts.

9) Government would have to bend over backwards to justify taxing your marketplace reputation. Even VAT taxes might have trouble if every reputation credit you issue creates a reputation debit. This could turn bank manipulation of government against both gov and banks: we are not taxed for taking out loans, which enables banks to charge more interest. When we buy a house, our signed promise to repay the debt is an asset on the bank's balance sheet. This enables them to create money out of thin air via the Fed, which is somehow connected to the Fed's overnight interest rate. You pay 6%, the bank gets 5%, the Fed gets 1%, or something like that. The rest of the money (your house's value) came from nowhere to pay the previous owner, and goes back to nowhere as you pay it off, except for the interest you gave to the banks and the Fed. Our promise to pay it back is the source of the initial money. Banks might be limited in their ability to do this by reserve requirements. Anyway, in the system I'm describing, your local trusted marketplace connections are your bank. They are basically issuing credit to relative strangers by vouching for your reputation to repay. Your local network is taking the risk of you not repaying them. You repay the debt to your creditors via future transactions. The amount you buy must equal the amount you sell. Your expenditures equal your income, so there is no net income to tax, as long as you do not convert your reputation credit to dollars. You and your local network have no net asset to be taxed. Any net assets you gain for resale are inventory that is not taxed (if less than $10M).
I do not propose any mining, but local connections validate and record your transactions (including smart contracts). Everyone "mines" by giving more than they receive. Best summary of the idea: by initially trusting people more than your measurement of their reputation justifies, you are loaning trust to the system that the system will pay back to you. So "trust" is the debit side (what you give) and "reputation" (what you receive) is the credit side of a personal balance sheet that the system records on your local "connections" (these are not simple network peers, but people with whom you have a history of transactions). Let's say I send you a 2-pound bar of tellurium for nothing except to gain reputation points in the system. I need you to be a part of the system and to record the transaction. That still does not benefit my reputation unless you also gain reputation by buying or selling with others. Then those others and I trust each other's reputation more, since we all trust you. A history with them builds trust without you, so you could drop out and things would not crash. The trick is for the protocol to keep track of things so that it is not tricked by false identities into unjustly increasing or decreasing reputations. There needs to be a pre-existing trust to get it started. The system does not create any trust. It only keeps track of who deserves a credit of trust from past giving of trust, and who owes a debt of trust from receiving goods, services, or likes without trusting anyone.
The only way to get a good reputation is to sell goods or services to someone who is not in your network. You get more reputation if you send the goods or services to someone who is not in anyone's network, provided they subsequently add others to their network who are not in yours. This should add to your reputation after the fact, only 1 level deep, and decrease after they've added a few, so it's not a pyramid scheme. The goal is not to reward you for bringing in others, but to reward you for making a real sale to a real independent person (not a personal friend who received nothing in return) who will use the system on their own. This is the same thing as "burning" something, such as human labor (in antiques) or computing resources. Nick Szabo has also stressed that the age of an item and its history of use as a currency increase its value. So the length of time someone has held and built reputation without violating trust would add value to their reputation. This creates some added value for early adoption and for sticking with the system. The formulas for calculating reputation need to be derivable from statistical theory or determined by the marketplace.

Saturday, July 15, 2017

Best difficulty algorithm: Zawy v6

This page will not be updated anymore.

See this page for the best difficulty algorithms

# Tom Harold (Degnr8) "wt-144"
# Modified by Zawy to be a weighted, weighted harmonic mean (WWHM)
# Zawy-selected N=30 and timestamp handling for all coins.
# No limits on rise or fall rate should be employed.
# MTP should not be used.

# set constants
T=600        # target solvetime
N=30         # averaging window
adjust=0.98  # 0.98 for N=30
k = (N+1)/2 * adjust * T

# algorithm
d=0, t=0, j=0
for i = height-N+1 to height   # N most recent blocks
    j += 1                     # more recent blocks get more weight
    solvetime = TS[i] - TS[i-1]
    solvetime = 10*T if solvetime > 10*T
    solvetime = -9*T if solvetime < -9*T
    t += solvetime * j
    d += D[i]
next i
t = T if t < T   # in case of startup weirdness, keep t reasonable
next_D = d * k / t
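Here is a runnable Python sketch of the WWHM loop above, assuming `timestamps` holds the last N+1 block timestamps (oldest first) and `difficulties` the matching N difficulties. The weight `j` increases toward the most recent block, which is what makes it "weighted":

```python
def wwhm_next_difficulty(timestamps, difficulties, T=600, adjust=0.98):
    """Sketch of Degnr8's wt-144 with Zawy's modifications (WWHM).
    timestamps: N+1 entries, oldest first; difficulties: N entries."""
    N = len(difficulties)
    k = (N + 1) / 2 * adjust * T
    t = 0.0  # weighted sum of solvetimes; recent blocks weigh more
    d = 0.0  # sum of difficulties
    for j in range(1, N + 1):
        solvetime = timestamps[j] - timestamps[j - 1]
        solvetime = min(solvetime, 10 * T)   # clip forward timestamp jumps
        solvetime = max(solvetime, -9 * T)   # allow "negative" solvetimes
        t += solvetime * j
        d += difficulties[j - 1]
    t = max(t, T)  # keep t reasonable in case of startup weirdness
    return d * k / t
```

At perfectly on-target solvetimes this returns `adjust * D` (e.g. 98 for constant D = 100), which is the point of the 0.98 correction factor; blocks found twice as fast roughly double the next difficulty.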
And the following is apparently even better, and amazing in that there is not even a loop or any look at old data:


# Jacob Eliosoff  EMA (exponential moving average)
# ST = previous solvetime
# N=15 (Zawy-selected)
# MTP should not be used

ST = previous timestamp - timestamp before that
ST = max(T/50,min(T*10, ST))
next_D = previous_D * ( T/ST + e^(-ST/T/N) * (1-T/ST) )
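The same EMA in runnable form (a sketch; `previous_D` and `ST` would come from the chain tip):

```python
import math

def ema_next_difficulty(previous_D, ST, T=600, N=15):
    """Jacob Eliosoff's EMA difficulty sketch: no loop and no window of
    old data, just the previous difficulty and the previous solvetime ST."""
    ST = max(T / 50, min(T * 10, ST))  # clamp extreme solvetimes
    return previous_D * (T / ST + math.exp(-ST / T / N) * (1 - T / ST))
```

When ST equals the target T, the second term vanishes and the difficulty is unchanged; faster-than-target solvetimes push it up, slower ones pull it down.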

The following is older text. The important stuff is above.
# Zawy v6 difficulty algorithm 
# Newest version of Zawy v1b
# Based on next_diff=average(prev N diff) * TargetInterval / average(prev N solvetimes)
# Thanks to Karbowanec and Sumokoin for supporting, testing, and using.
# (1+0.67/N) keeps the avg solve time at TargetInterval.
# Low N has better response to short attacks, but wider variation in solvetimes. 
# Sudden large 5x on-off hashrate changes with N=12 sometimes cause 30x delays versus
# 20x delays with N=18. But N=12 may lose only 20 blocks in 5 attacks versus 30 with N=18.
# This allows timestamps to have any value, as long as > 50% of miners are
# approximately correct and as long as timestamps are ALLOWED to 
# be out of order to correct bad timestamps. 
# Miners with >50% can be prevented from driving difficulty down to 1 if
# nodes do like bitcoin and have a median time and forbid blocks to have a timestamp
# more than 2 hours ahead of that time. 
# For discussion and history of all the alternatives that failed: 
# D = difficulty, T=TargetInterval, TS=TimeStamp, ST=solveTime

N=16;  # Averaging window. Can conceivably be any N>6.  N=16 seems good for small coins.
X=6;  # Size of expected "hash attacks" as multiple of avg hashrate. X=6 for new small coins.

# An X too small is unresponsive. X too large is subject to timestamp manipulation.
# The following is how X is used.

limit=X^(2/N); # Protect against timestamp error. Limits avg_ST and thereby next_D.

# Instead of X and limit, there can be a limit on the individual TS's in relation 
# to previous block like this:
# R=6; # multiple of T that timestamp can be from expected time relative to previous TS.
# Then nodes enforce that the most recent block have a TS:
# TS = TS_previous_block +T+ R*T if TS > TS_previous_block +T+ R*T;
# TS = TS_previous_block +T-R*T if TS < TS_previous_block +T - R*T;
adjust = 1/(1+0.67/N); # Keeps correct avg solvetime.

# get next difficulty

ST=0; D=0; 
for ( i=height;  i > height-N;  i--) {  # go through N most recent blocks
   # Note: TS's mark the beginning of blocks, so each ST below is shifted back 1
   # block from the D for that ST, but this does not cause a problem.
   ST += TS[i] - TS[i-1];  # Note: ST != TS
   D += D[i];
}
ST = T*limit if ST > T*limit; 
ST = T/limit if ST < T/limit; 

next_D = D * T / ST * adjust;

# It is less accurate to use the following, even though it looks like the N's divide out:
# next_D = sum(last N Ds) * T / [max(last N TSs) - min(last N TSs)];

=============== post to Bitcoin Gold github: That was Digishield's reasoning. Reading the history of Digishield's development gives the impression the asymmetry caused problems, so they added the "tempering" to "fix" it, maybe not realizing this fix just made it so slow that the 32/16 became irrelevant. Either way, the main problem is the opposite: not returning to normal difficulty fast enough after a big hash miner leaves, causing long delays between blocks. Bitcoin Cash tried to solve this with the reverse asymmetry, dropping a LOT faster than it rises. This has caused oscillations, issuing coins too fast, and a few blocks every 2 cycles with really long delays. Asymmetry in the allowed rise and fall will at the least change how fast coins are issued, requiring an adjustment factor. Rising fast protects your constant miners, although if a large miner comes on and off at the right times and has a bigger coin to always return to for a base profit, he can always get 1/3 of the coins issued at "zero excess cost" in difficulty (the difficulty algo was not rising fast enough to adjust to the increase in hashrate). The only thing that can help is a shorter averaging window to respond faster, but it turns out this also allows more frequent accidental drops in difficulty, and if attackers simply attack more often for shorter periods, they can still get 1/3 of the blocks for "zero excess cost". Approximately, they just need to attack for 1/2 an averaging window and stay off for the next full averaging window, or just attack whenever difficulty happens to be a little low by accident. Dropping fast prevents a lot of long-delay blocks after an attack and prevents your constant miners from suffering a long period of high difficulty. By leaving in the +/- 16% limit, I am only trying to prevent catastrophic attacks on the timestamp.
For example, if the code keeps bitcoin's node-enforced 2-hour limit on how far forward miners assign timestamps, and if a pool has >50% hashrate, then after a few blocks they would "own" the MTP (median time past) and can set it 2 hours ahead of real time (12 blocks). Zcash will likely reduce this to 900 seconds, which is close to the 1000 seconds I recommended before they launched a year ago. Their current limit might be 3600 seconds. It appears BTCG copied Zcash's difficulty code. Keep in mind Zcash has 2.5-minute blocks, so if BTCG is using a stricter time limit than BTC, like Zcash does, it should not go below 3600 seconds. Zcash can do a 900-second limit because that is 6 blocks for them; the equivalent time in BTCG is 3600 seconds. With N=40 as I've proposed, the 2-hour limit would allow a miner with 10x the normal hashrate to make the difficulty think it needs to drop to 40/(40+12) = 77% of the correct difficulty when they begin to own the MTP. After 12 blocks, difficulty would be low by ```40^12 / [(40+12)*(40+11)*(40+10)*(40+9)*....] = 17%``` of the normal difficulty, which is only 1.7% of the correct difficulty if they have 10x the normal hashrate. By limiting the drop to 16% per block, difficulty will only get down to 43% instead of 17%. A tighter limit of +/- 12% instead of 16% may be good (69% would be the low). This is with bitcoin's 2-hour limit. I think BTCG has copied Zcash, so maybe it is reduced to 1 hour. The +/- 12% is stricter than a 1-hour limit, so changing from 2 hours to 1 hour will help at a limit like +/- 16%, but make no difference at +/- 12%. A 1-hour limit on time with no other limit would allow a timestamp attacker to get difficulty down to 61%, which is why I said the +/- 12%, allowing only a drop to 69%, is stricter (better). The two don't combine to help. Using the MTP, as Zcash and probably BTCG do, prevents < 50% miners from manipulating the timestamp, but it makes the difficulty 5 blocks slower in responding.
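The 17% figure above is easy to check numerically: each forward-stamped block stretches the apparent time window by one extra block of time, multiplying the difficulty by N/(N+k) at step k. A short sketch:

```python
def drop_after_forward_stamps(N=40, blocks=12):
    """Cumulative difficulty multiplier after `blocks` maximally
    forward-stamped blocks own the MTP: product of N/(N+k) for k=1..blocks,
    i.e. 40^12 / (41*42*...*52) for N=40, blocks=12."""
    ratio = 1.0
    for k in range(1, blocks + 1):
        ratio *= N / (N + k)
    return ratio

ratio = drop_after_forward_stamps()  # ~0.17: difficulty falls to ~17% of normal
```

The first step gives 40/41 ≈ 98% (a 2.4% drop, well inside a 16%-per-block limit), but the drops compound to roughly 17% of the normal difficulty after 12 such blocks.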
There is a fix for this that would require more code changes: see my Zawy v6. I'll show that the +/- 12% (or 16%) does not prevent the N=40 from responding as fast as it can. (I'm going to edit my previous post to recommend 12% instead of keeping the 16%.) Let's say an attack has 10x the normal hashrate. With N=40, the average time it takes the difficulty to completely respond to the challenge is 40 blocks. So it will rise, on average, this much per block: 10^(1/40). In my testing, a limit on the rise equal to 10^(2/40) = 12.2% was only reached about 10% of the time. I don't expect BTCG to experience a 10x "attack" very often, so 12% with N=40 seems correct. Another way to reduce the effect of timestamp manipulation is to limit how far the next timestamp can be from the previous timestamp. I've found a good choice to be +/- 6*T from where you expected the solve to occur, where T = 600 seconds for BTCG. You expect the solve to be 600 seconds after the previous timestamp, so you would limit timestamps to 600 +/- 3600 seconds from the previous timestamp. This allows timestamps to be out of order, which is important in Zawy v6. But if BTCG does like Zcash and uses the MTP protection/delay AND the nodes enforce the +3600 limit based on real time instead of comparing to the previous timestamp, then you can set the minimum to 1 second after the previous timestamp. Otherwise, without the nodes enforcing a real UTC time limit, a miner with >20% hashrate could drive difficulty to "0" in a few hours or days, if a "negative" timestamp relative to the previous one is not allowed, even when using MTP and a 3600-second forward time limit.

Without nodes enforcing real time, letting miners set the time, any >50% attacker can drive difficulty to zero with any algorithm. BTW, if you have a real time available to nodes, you do not need consensus (i.e., POW mining), because you could create a synchronous deterministic network, which does not have the Byzantine or FLP problems.

The +/- 6*T limit works out to be about the same as the 10^(2/N) limit. They overlap, so it is not an additive benefit.

I tried many different schemes for difficulty such as a dynamic averaging window, least squares fitting, and most-recent-block-more-heavily-weighted. Nothing worked better than simple:
```next_D=avg(past N D) * T / avg(past N solvetimes) / (1+0.67/N)```
with the two options for solvetime limits above (+/- 3600 on each solvetime, or X^(2/N) and X^(-2/N) on the average, where X is the expected max hash attack size as a multiple of baseline hashrate). The (1+0.67/N) keeps the average solvetime on target. Note that ```next_D= sum(N D's) * T / [max timestamp - min timestamp]```, as is usually used, is not as accurate if timestamps are being manipulated. The implied N's in the denominator of my averages will not cancel during a manipulation, as this alternative equation assumes.

Difficulty has a seductive illusion of being "improvable". Any "fix" that tries to predict attacker behavior without employing a symmetrical "fix" to counter him acting exactly the opposite (and everywhere in between) will leave an exploitable hole or cause an undesirable side effect. Any fix that is symmetrical is limited in scope before it has undesirable side effects. We want fast response to changes in hashrate and a smooth difficulty when hashrate is constant. My best theoretical approach was a dynamic averaging window in Zawy v2 that triggers on various measures detecting a change in hashrate. For complex reasons, this still does not do better than simple average.

post to Zcash github:

Any upper limit you apply to timestamps should be reflected in a lower limit. For example, you could follow the rule that the next timestamp is limited to +/- 750 seconds from the previous timestamp +150 seconds (+900 / -600). If you don't allow the "negative" timestamp (-600 from previous timestamp) AND if miners can assign timestamps without a real-time limit from nodes, then a miner or pool with > 20% of the network hashrate can drive the difficulty as low as he wants, letting everyone get blocks as fast as he wants, in less than a day.

A symmetrical limit on timestamps allows honest miner timestamps to completely erase the effect of bad timestamps. ( You do not need to wait 6 blocks for MTP like Zcash does in delaying the use of timestamps for difficulty, see footnote. ) If you allow the symmetrical "negative" timestamps, you do not need nodes to have the correct time with NTP or GPS unless miners collude with > 51% agreement on setting the timestamps further and further ahead of time to drive difficulty down. It's a real possibility if miners decide they do not like a certain fork due to not providing them with enough fees.

But if you do not allow the apparently negative solvetimes, you better do like ETH and depend on 3rd parties for your node times in order to limit how low a timestamp manipulator can drive your difficulty.

But if your nodes have an accurate time, you do not need mining. The only fundamental reason for mining is to act as a timestamp server to prevent double spending. If you have an accurate time on all nodes, then you can make it a synchronous network to eliminate the need for consensus to eliminate the need for byzantine protection via POW.

BTC and ETH depend on nodes to limit the future time assigned to blocks. Zooko was the only one here who seemed to know there is something wrong about strong reliance on nodes having the correct time. The extent to which BTC and ETH need those forward-time limits to be enforced by real time is the extent to which they do not need mining.

MTP does not stop a 25% attacker who can set timestamps > 4 blocks ahead if other miners are not allowed to assign a "negative" timestamp to eliminate the error in the next block. But if you allow the "negatives" then MTP is not needed. Putting your tempering aside, this assumes you use

next_D = avg(D's) * T / avg(solvetimes, allowing negative solvetime)
instead of

next_D=sum(D's) * T / [max(Timestamps) - min(Timestamps) ]
because the N's in the denominator and numerator of the first equation do not cancel as you would think and hope (in order to use the second equation) when there are bad timestamps at the beginning and end of the window. With the MTP, your difficulty is delayed 5 blocks in responding to the big ETH miners who jump on about twice a day. That's like a gift to them at the expense of your constant miners.
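A toy example (made-up numbers) makes the difference concrete: a single 2-hour-forward timestamp in mid-window is fully canceled by the following "negative" solvetime in the per-block average, because consecutive solvetimes telescope to last-minus-first, while max-minus-min keeps the bad stamp:

```python
def next_D_avg(D_list, timestamps, T=600):
    """avg(D) * T / avg(solvetimes), negative solvetimes allowed."""
    N = len(D_list)
    solvetimes = [timestamps[i + 1] - timestamps[i] for i in range(N)]
    return (sum(D_list) / N) * T / (sum(solvetimes) / N)

def next_D_maxmin(D_list, timestamps, T=600):
    """sum(D) * T / (max(TS) - min(TS)): the common shortcut."""
    return sum(D_list) * T / (max(timestamps) - min(timestamps))

D5 = [100.0] * 5
ts_bad = [0, 600, 1200, 7200, 2400, 3000]  # one 2-hour-forward stamp mid-window
# next_D_avg stays at 100; next_D_maxmin wrongly collapses toward 42
```

Here the honest block after the bad stamp has an apparent solvetime of -4800, which cancels the +6000 in the average; the max-min version sees a 7200-second window and cuts difficulty to under half.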

Also, your tempered N=17 gives almost the same results as a straight average N=63. I would use N=40 instead, without the tempering. It should reduce the cheap blocks the big ETH miners are getting.

Your 16% / 32% limits are rarely reached due to the N=63 slowness. This is good because the asymmetry is a problem, although it would not be as bad as BCH. Use "limit" and "1/limit", where limit = X^(2/N), N=63 for your current tempering, and X = the size of the larger ETH attackers as a multiple of your total hashrate, which is about 3. This allows the fastest response up or down at a given N and X with 80% probability. Change the 2 to 3 to get a higher probability of an adequately fast response. The benefit is that it is a really loose timestamp limit on individual values, as long as the aggregate is not too far from the expected range.

Monday, July 10, 2017

Doing better than the simple average in cryptocoin difficulty algorithms

I am still trying to find a better method than the simple average, but I have not found one yet. I am pretty sure one exists, because an estimate of hashrate based on avg(D1/T1 + D2/T2 + ...) should be better than avg(D)/avg(T) if the hashrate changes at all during the averaging period. This is because avg(D)/avg(T) throws out details that exist in the data measuring hashrate. We are not really interested in avg(D) or avg(T); we are interested in avg(D/T), and the avg(D/T) method does not throw out details. Statistical summaries discard detail, and you don't want to lose the details until the variable of interest has been directly measured. I learned this the hard way on an engineering project.

But avg(D/T) hardly works at all in this case. The problem is that the probability distribution of each data point D/T needs to be symmetrical about the mean (above and below it). I'm trying to "map" the measured D/T values based on their probability of occurrence so that they become symmetrical, then take the average, then un-map the average to get the correct avg(D/T). I've had some success, but it's not as good as the simple average, because I can't seem to get the mapping right. If I could, another improvement becomes possible: least-squares linear curve fitting could be applied to the mapped D/T values to predict where the next data point should be. All of this might yield a 20% improvement over the basic average.

Going further, sudden on-and-off hashing will not be detected very well by least squares. Least squares could be the default method, switching to a step-function curve fit when a step change is detected. I just wanted to say where I'm at and give an idea to those who might be able to go further than I have.
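A toy numerical illustration (all numbers invented) of the detail lost by avg(D)/avg(T) when the hashrate changes inside the window:

```python
def hashrate_estimates(D, t):
    """Compare avg(D/T) (keeps per-block detail) with avg(D)/avg(T)
    (a ratio of summaries). D = difficulties, t = solvetimes."""
    per_block = [d / s for d, s in zip(D, t)]
    avg_of_ratios = sum(per_block) / len(per_block)        # avg(D/T)
    ratio_of_avgs = (sum(D) / len(D)) / (sum(t) / len(t))  # avg(D)/avg(T)
    return avg_of_ratios, ratio_of_avgs
```

With D = [100, 100, 200, 200] and t = [600, 600, 300, 300] (hashrate doubling mid-window), avg(D/T) ≈ 0.417 while avg(D)/avg(T) ≈ 0.333: the two estimators disagree as soon as the hashrate moves.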

Numenta's CLA needs 6 layers to model objects

posted to numenta forum
Back when there were only 2 white papers and a few videos I became interested in the HTM, saw a video of a 2D helicopter being detected, and wondered about the relation between the layers they used and the ability to recognize objects. I remembered that 6 equations with 6 unknowns (the degrees of freedom) are required to solve the dynamics of 3D rotation and translation. The layers of the helicopter HTM matched what it was able to detect, as if they were unknowingly being used in a subtle 2-equations-with-2-unknowns methodology. Of course this begs the question: "Are the 6 layers in the cortex required to see the 3D world?" Numenta's view of the cortical column implies that the 6 layers have nothing to do with this, but I would like to question that view. Jeff has also warned against pursuing the reverse black-hole question no one has ever escaped: "Is the 3D world the result of a 6-layered brain?" But an understanding of the relation between mass and space-time prevents me from abandoning the reverse question. More importantly, physics has an elephant in the room that is rarely acknowledged and questioned: the only integers that appear in physics are the result of 3D spacetime, and Feynman states that no fundamental aspect of QED requires an extension beyond 1D. QED is sort of the core of all physics except for gravity and nuclear stuff. An expert in the area informed me that spin is what creates 3D space, so my line of questioning is suspect. But my view is that we may have invented spin to maintain the view that objects are independent of our perceptions. I admit I am immediately deep in a recursive black hole: the 6 layers are a mass of neurons that I'm proposing we can see only because we have the 6 layers. BTW, if we had 10 layers to support the perception of 4D objects in 4D space, then I believe all velocities would be static positions and all accelerations would be velocities.
Instead of E + mc^2 = 0 we would have E + mc^3 = 0. (Now really getting side-tracked on the physics: by keeping relativity units correct, there is a missing negative sign in some equations. Another example is F + ma = 0, where "F" is more correctly defined as the reactive force of the object, which is in the direction opposite the "a". This comes from meters = i*c*seconds, which comes from appendix 2 of Einstein's "Relativity", which he stated allows the use of Euclidean instead of Minkowski space-time, in keeping with the Occam's razor requirement.)

What I'm suggesting is falsifiable. Others posting here will know if it takes 6 layers to fully recognize objects in 4D space-time. The degrees of freedom are N translational plus N(N-1)/2 rotational. I tried testing the theory by observation and thought of ants. It seems to be supported there: their eyes, which need to detect only 2D "shadows and light" without rotation, have roughly two layers, and yet their feelers and front legs, which have to deal with 3D objects in 3D space, have 6 layers. There's a great extension to this observation: wasps are the closest cousins of the ants and have 6 layers for their eyes.

I posted this question nearly a decade ago in the old forum, but I'll ask again. Is a 6 layer HTM required for fully characterizing 3D objects in 4D space-time?
I think a single layer would require a lot more new training on every object. For example, it sees a circle moving about and learns its behavior; then the circle turns sideways and turns out to be a cylinder, and then it starts rotating, so training has to start over. I don't think it could conceive very well that "this is the same object", or generalize the lessons learned on past objects to future objects. It just seems like it would have difficulty understanding objects the way we do. I believe 6 layers would be able to perceive the laws of dynamics but 1 layer would not. These six layers are not an HTM but the foundation of a single cortical column. Each CLA layer of the HTM would require the 6 layers. So the CLA would need to be redone if you want it to think like mammals and see like wasps. The motor output of the 5th cortical layer may also serve part of this "inherent object modelling", not just motor control. The motor control part might be crucial to developing the concept of inertia (mass). Mass is another variable ("dimension"), which implies 7 layers should be present. To get out of that mathematical corner, I have to conjecture that mass is something special in the modelling, like "the higher dimensions that 6 layers can't model and that have permanence".

I do not mean to say that 6 layers are necessarily inherently needed in A.I. to be superior to humans, even in the realm of understanding physics, but that they are needed to think more directly like animals. But if 6 layers per HTM layer are actually needed for a higher intelligence, then 10 layers to handle 4D space should be even more powerful, and 15 layers would be needed for 5D. I do not accept the conjecture that objective reality, if there is one, depends on a specific integer of spatial dimensions like "3".
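The degrees-of-freedom count used throughout this argument (N translations plus N(N-1)/2 rotations) reproduces the layer counts claimed here:

```python
def dof(N):
    """Rigid-body degrees of freedom in N spatial dimensions:
    N translational + N*(N-1)//2 rotational."""
    return N + N * (N - 1) // 2

# dof(3) = 6, dof(4) = 10, dof(5) = 15
```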

The visual cortex by itself, with its 6 layers, does not seem to have any concept of objects, but I think the 6 layers are still needed to encode the information so that the concept of objects remains extractable by the higher levels in the "HTM" of the brain (e.g. the frontal lobes). But the concept of an object seems to be possible in the 6 layers just "behind" the eyes of flying insects: wasps certainly have a better concept of the object nature of people than ants do, judging by the way they identify and attack. Ants are virtually blind to what people are, except for detecting skin and biting.

Saturday, July 8, 2017

Stars as cryptocoin oracles: posts to HUSH cryptocoin slack

Note on Ethereum time: nodes (mining or not) must have an accurate time to sync with the network. Miners need an accurate time so that later blocks will build upon theirs. But there is no strict rule on timestamps in ETH except that each must be after the previous timestamp.

Pools with >51% can get all the coins they want from small alt coins in a few hours, dropping the difficulty at the rate next_D = previous avg_D * [1/(1+M/N)]^(2X-1), where X is their fraction of the hash power, N is the number of blocks in the rolling average, and M is the coin's limit on how far a timestamp can be set in the future. If GPS isn't good enough, the only solution I can think of is to tie miners and/or nodes to the stars with a smartphone app that makes a periodic observation of the stars to calibrate their clock. But then it begs the question (via the BTC white paper) of why mining would still be needed.
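A sketch of the attack's difficulty decay, assuming the formula quoted above; all parameter values in the example are invented.

```python
def decayed_D(avg_D, X, N, M, blocks):
    """Difficulty after `blocks` manipulated blocks: each block multiplies
    the previous average difficulty by (1/(1+M/N))^(2X-1), where
    X = attacker's fraction of hashrate, N = averaging window,
    M = limit on how far a timestamp can be pushed forward."""
    factor = (1 / (1 + M / N)) ** (2 * X - 1)
    return avg_D * factor ** blocks
```

For a 100% attacker (X = 1) with N = 17 and M = 6, each block cuts the difficulty to 17/23 (about 74%) of the previous average, so the drop compounds quickly.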
I think the point of mining was to solve the double-spending problem without relying on a 3rd-party timestamp; Satoshi seems to say this explicitly in the white paper. Mining also finances the growth of the network in a way that supports transactions, but I do not understand why non-mining nodes seem to be necessary to keep miners in check, or why mining often has the feel of a necessary evil, if the entire point of financing mining was to build a working network. With a valid clock on each peer, the double-spending problem seems solved without mining. That leaves the question of how to release the coins in a way that supports the network.

But if the timestamp problem is solved by each peer using the stars as his clock, is there any need for a behemoth network using might-is-right to determine the time and thereby the coin emission rate? It might be that peers with valid clocks who only want a wallet and to conduct transactions are all that is needed, reaching the ideal of no centralized miners or developers and a network distributed absolutely evenly among everyone. There might be a way to distribute the blockchain so that peers do not all need the entire chain. It would have a statistical chance of forking (fracturing, with all forks being valid but increasingly incompatible), which could be increased by hacking, but that would only happen as the need for the network grew (via more marketplace transactions). So the fracturing might even be beneficial by keeping the ideal of constant value. That is a requirement of all good currencies: constant value. Constant quantity is the ideal for an asset, not a currency; constant quantity was always a disaster for every currency that has ever used it, because it is a bonanza for the 1% such as us, the early adopters seeking to profit without working for it, extracting wealth from late adopters. In any event it would get rid of centralized developers and centralized mining.
It might be as simple as PGP, so that a requirement for a transaction to be valid is that the code never changes. Or maybe any code on any machine would be valid as long as other peers confirm your outputs are valid for your inputs, as specified by a non-changing protocol.
By "fracturing" I introduced vagueness to mean "something probably not unlike forking". I am speaking of big-picture ideas, as I have no knowledge of BTC details. I took a strong renewed interest in difficulty algorithms after two Cryptonote coins adopted my difficulty algorithm (a simple average over 17 blocks instead of medians, with appropriate timestamp limits) to gain protection against attacks. Cryptonote insanely is (or was) using 300 blocks as the averaging window, so sumokoin and karbowanek had to fork and start using mine. Zcash changed their Digishield v3 as a result of my pestering, but did not follow me as exactly as these other coins did. I posted too much and made a big mistake. I'm side-tracked: an unavoidable problem in the difficulty algorithm led me back to the Satoshi white paper and the idea that scientific observation of the stars could be the beginning of "real" cryptocurrencies, as it was for physics. The stars would be the first valid, provable, non-3rd-party oracle in cryptocoins.
With roughly degree-level accuracy I figure 10-minute blocks are OK. Stars 90 degrees from the north star move 1 degree every 4 minutes (360 degrees per day), so ±1 degree of observation error is ±4 minutes of clock error. Local peers then have to agree on the time to within ±4 minutes, with 1 minute to spare on each end. Russia also has its own GPS system, but I don't think the combination of the two solves anything.
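The degree-to-minutes conversion behind these numbers is just the Earth's rotation rate:

```python
def sky_error_minutes(degrees):
    """Earth turns 360 degrees in 24*60 minutes, i.e. 4 minutes of time
    per degree of sky, so d degrees of pointing error = 4*d minutes."""
    return degrees * (24 * 60 / 360)
```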
You are saying I'm missing the "might is right" aspect. But the idea is to replace "might is right" with an objective, verifiable truth that can be checked by any and all peers at all present and future times.
I think everyone could reject a transaction that does not have the correct timestamp. A sender can lie about it, but it will be rejected. He can send the same coin twice in the same 8-minute window, but everyone is supposed to reject both sends. I previously mentioned that maybe all the peers do not need the full chain, but that's probably a pretty wrong-headed idea.
Having 1 miner timestamp a block is a lot more important than having the correct time. But if a correct time is agreed upon, then every peer everywhere receives and validates every transaction independently. Because of the inaccuracy of the timestamps, they are rounded to the nearest minute ending in "0", and you have +/- 2 minutes around the next minute ending in "5" to send a transaction. But I must be missing something. It seems like star gazing, GPS, and timestamp servers are all unnecessary: you would just need to make sure your peer's computing device has approximately the correct system time.
I gave a solution that doesn't even need an app that calibrates with the stars: if everyone manually makes sure their clock is correct to +/- 2 minutes, and if transactions can propagate to everyone in 2 minutes, then let's say the blockchain is updated every minute that ends in "0". The blockchain would be updated by EVERYONE. There are no nodes or miners needed or wanted in this design, especially since we want it nuclear-bomb proof, unlike the current Bitcoin with its concentrated miners and devs.

Everyone would send out their transactions, with their timestamp, at minutes ending in "4", so with clock error they may be sending them out right after "2" and up until "6". With a 0-to-2-minute propagation delay, everyone is going to receive each other's transactions between "2" and "8" by their own clock (between "4" and "6" by "star time", or by whatever clock each peer has decided on by himself; it must not be coded into the client as a default unless the client is watching the stars). At minute "8", every client closes his ears to every transaction. So nothing is happening on any client anywhere between "8" and "2" except verifying and adding transactions to the chain, which should work even if a clock is in error by +/- 2 minutes.

Clients with a -2-minute-error clock and those with a +2-minute-error clock should see the exact same set of transactions, or else someone is giving a mixed message to clients, by accident or on purpose, by going outside his own allowed window. By accident would mean some transactions were missed on some clients; on purpose would be someone trying to spend on -2-minute clients the same coin he is also trying to spend on +2-minute clients. In both cases, it seems like clients could check each other and decide to throw both erring transactions out. So that's my proposal. If it's possible to implement, then as far as I know it's only 1 of 3 known ways.
The first is a traditional database that has a single reference location for its core data, so there are no conflicting double updates on the same record. In the case of more than one core location plus backups, I believe they have advanced methods of checking for conflicts and then undoing "transactions" to correct the problem. The second is Satoshi's method.
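The windowed schedule proposed above (send at minutes ending in "4", listen from "2" to "8", then seal) can be sketched as a simple state function. Everything here is hypothetical, a pseudocode-level sketch rather than a real client:

```python
def window_state(minute):
    """What an honest client is doing at a given wall-clock minute, per the
    schedule above (only the minute's last digit matters)."""
    m = minute % 10
    if m == 4:
        return "broadcast"   # send own timestamped transactions (while still listening)
    if 2 <= m < 8:
        return "listen"      # accept others' transactions
    return "seal"            # ears closed: verify and extend the chain
```

A client whose clock is off by up to ±2 minutes still broadcasts somewhere inside every other client's listen window, which is the tolerance the scheme is built around.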