Thursday, February 28, 2019

Consistent value in various contexts is the source of money's properties

An ideal money has the same value in all relevant contexts or "dimensions". The numerous properties ascribed to money just refer to those contexts.

The purpose of a thing is more fundamental to defining it than its properties. For example, ask a person where the chair is in a picture of a forest and he'll know it's the log or stump, but an A.I. won't be able to find legs or a back. Consider the purposes of money authors have mentioned:
  • Medium of exchange
  • Unit of account
  • Store of value
Less frequently mentioned:
  • Deferred payment (a unit of debit or credit)
  • Legal tender (e.g. a unit of account in contracts)
Value is inherent to all of these, and stability in value is obviously also important. Adding "consistent" or "stable" before each of them still makes sense, and sounds idealistic or even redundant.

Here are 15 properties I was able to find, taking the liberty of adding the word "value":
  • Stable value in time
  • Stable value in different locations
  • Divisible value
  • Fungible value (aka "Uniform")
  • Portable value
  • Durable value
  • Acceptable value (aka "Convenient")
  • Trustworthy value (aka "Confidence")
  • Liquid value (this is vague and encompasses most of the others)
"Consistent value in every way" seems to be an accurate summary. I found two properties which are kind of oblique or re-enforce the others..
  • Limited in Supply  (re-enforces stable value and trustworthy value)
  • Long history of acceptable value (re-enforces trustworthy / confidence in value)
There is another property:
  • Has value in itself
This might be a circular reference, or it breaks money out of a different circular reference: "money has value because we agree it has value". This property is saying it should have value because we can use it for something besides exchange. It refers to something like copper, silver, food, or vodka (a unit of exchange when the USSR was falling apart). Coins have had this property off and on. For maybe 2 or 3 decades, the copper in a penny was worth about a penny. Then there are silver and gold coins. So the trades in these types of money are also barter.

Barter, energy, and cryptocurrencies
Continuing on about this final property: it has always taken a lot of energy to get silver and gold. Similarly, POW cryptocoins waste energy to "prove their worth". But the worth in metals is also like stored energy (literally, metals can be burned to get a lot of energy out, but being able to use them instead saves energy). Especially silver: its biggest use right now is in solar cells. Buying silver is akin to buying potential energy. The "inherent" value in a barter-type money is the amount of economic "energy" (possibly literally) it can produce or save, but all the other properties only demand that the "value" is the amount of energy it can control through mutual agreement. If you could bottle up electrical energy in different quantities that could be easily extracted by anyone and transferred over the internet, that would probably be the perfect money.

Importance of stable value to contracts
Contracts (including wages and prices) are just agreements between economic players. For an economic system to be intelligent, keeping the currency's value constant seems as important as keeping the definition of a kg of wheat constant.

Currency quantity should track GDP
If the "real" GDP of the currency being used increases, then the amount of currency in circulation must increase in order to maintain stable value. This is if the GDP is increasing from the economy getting more efficient, or if production increases, or if the currency is being demanded by previously "external" economic actors like the rest of the world increasingly using your currency. GDP increases from simply printing more currency (inflation) has to be subtracted from the "real" GDP.  If the real GDP is trying to grow and the currency is not increased with it, it slows the growth rate by strangling trade. Increasing the amount of currency ahead of time can help the GDP to grow, but if too much currency is produced, inefficient decisions are made with the excess currency, leading to a future reduction in GDP.  For example asset prices can artificially rise while inflation is kept low so it can seem like everything is fine, but this leads to a boom-bust cycle in assets.

The "real" GDP can be viewed as a net energy that is acquired and used over time. It is used to sustain (maintain) and increase itself (the economy). But the net energy is not necessarily physical joules (or how efficiently they are used, hence "net"). We may place higher value on things that can't be measured with physical energy. For example, we may print more money to increase the apparent GDP (since the money quantity is higher) that actually reduces "real" (joule-based) GDP. An example of this is wanting an even distribution of joule-based wealth more than total joule-based wealth. In other words "efficient" use of the joules may not be a physical conversion efficiency. But I will assume "real" GDP refers to net work energy in joules.

To keep constant value, the quantity of the currency needs to be in proportion to the amount of power (energy per time) the infrastructure can produce, provided the currency's velocity (turnover rate) is constant. So the quantity of money divided by the time it takes the money to "turn over" (which is 1/velocity) should remain proportional to the productive power of the infrastructure, which indicates the currency is in units of joules. That is, (money qty)*(velocity) = (net work energy in joules) / (time). But since constant value depends only on the product (money qty)*(velocity), this does not strictly tie the money itself to constant value the way coins with inherent value are tied. The solution is to make money proportional to the infrastructure that creates the GDP. That infrastructure is an engine that has a net work output per time. It took energy to create the infrastructure, so it's like a potential energy. So money can retain units of joules like the infrastructure and yet be directly connected to a joules/time.
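
Here's a toy sketch of that proportionality (the function and variable names are mine, not a standard formula): the money supply is set so that (money qty)*(velocity) tracks the infrastructure's productive power.

def target_money_supply(infrastructure_power_watts, velocity_per_year, k=1.0):
    # Money supply M such that M * velocity = k * (net work energy per year),
    # with the power in joules/second converted to joules/year to match an
    # annual velocity. k is an arbitrary units constant.
    seconds_per_year = 365.25 * 24 * 3600
    energy_per_year = infrastructure_power_watts * seconds_per_year
    return k * energy_per_year / velocity_per_year

# Example: 1 GW of productive power, money turning over 5 times per year.
M = target_money_supply(1e9, 5)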

The amount of currency in circulation should "lead" that power. For example, if a new discovery is going to increase efficiency and needs a large capital investment, an amount of currency needs to be created immediately in proportion to the expected benefits of the discovery and loaned to those who will profit from the discovery. If the discovery increases real GDP as expected and the loaned (created) money is thereby repaid, the issuing authority (like a government) can spend it without inflation. If the venture fails and it's not repaid, there is inflation. Doing it this way pulls marginally unemployed infrastructure into action and/or causes slight temporary inflation that "steals" relative power from other sectors to get the discovery up and going quickly.

Intellectual property, culture, and resource depletion affect the efficiency of the infrastructure's production and the efficiency of its use, so knowing the changes in the "power" for the purpose of increasing or decreasing the currency to keep constant value is not easy. We can make an initial error in estimating the true watts of production for the purpose of determining the amount of coin to issue, but it's OK if we are consistent in that error (initial accuracy can be bad, but long-term precision should be good). We only need to know that the amount of coin is staying proportional to the power of production, provided the velocity has not changed.

"Net work energy" is clearly defined in physics, but we may not want to turn the net work output of our GDP infrastructure into fun heat energy. Evolution indicates we "want" to create more infrastructure that will capture more energy in the future to build more sustainable infrastructure, more quickly. The currency-issuing authority that guides its market in that direction the best is the one that will have the dominant currency. We might want more fun heat energy, but in the end the infrastructure that seeks to expand itself will dominate, pushing for a currency-issuing authority that assists it in controlling assets (including people) to this end, eliminating liabilities (including people) along the way. China's rise and strict control of trade and currency is not an accident. The Soviet bloc's collapse starting in 1989 was a wake-up call that economics is important, prompting China to intelligently guide macroeconomics, and Tiananmen Square caused its government to fear its people, which is the opposite of the U.S. government, which acts with ignorant impunity as a result of the wealth that came from winning the currency war. We've printed an excess of currency for effectively free foreign labor as fast as the increasing world GDP could absorb it, greatly slowing inflation but reducing our own infrastructure.
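
Returning to the created-and-loaned currency idea at the start of this section, here's a simplified sketch of the accounting (my own toy model, not a policy prescription): the issuance is sized to the expected GDP gain, so on success the money-to-GDP ratio is unchanged, and on failure the unrepaid issuance shows up as inflation.

def issue_for_discovery(money_supply, real_gdp, expected_gdp_gain, succeeded):
    # create new currency in proportion to the expected benefit and loan it out
    loan = money_supply * (expected_gdp_gain / real_gdp)
    money_supply += loan
    if succeeded:
        real_gdp += expected_gdp_gain   # production grew as expected, loan is repayable
    # if not repaid, money_supply rose without the GDP gain -> inflation
    return money_supply, real_gdp

# Success keeps money/GDP at 0.10; failure raises it (inflation).
print(issue_for_discovery(1_000_000, 10_000_000, 500_000, succeeded=True))
print(issue_for_discovery(1_000_000, 10_000_000, 500_000, succeeded=False))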

A lot of currency is created as banks, following rules set out by governments, create it out of thin air, using the asset and the credit-worthy borrower's promise to repay as assets on the bank's books that offset the thin-air money.
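
A minimal sketch of the double-entry mechanics (textbook bank bookkeeping, not any specific country's rules): the borrower's promise to repay is booked as the bank's asset, and the newly created deposit is the offsetting liability, i.e. the thin-air money.

class Bank:
    def __init__(self):
        self.assets = {}        # loans owed to the bank
        self.liabilities = {}   # customer deposits

    def make_loan(self, borrower, amount):
        # asset: the borrower's promise to repay
        self.assets[borrower] = self.assets.get(borrower, 0) + amount
        # liability: a brand-new deposit the borrower can spend
        self.liabilities[borrower] = self.liabilities.get(borrower, 0) + amount

bank = Bank()
bank.make_loan("alice", 100_000)   # the books balance, but the money supply grew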

Economics as an A.I.
Economic systems economize limited resources with competing (evolving) agents. Part of programming interacting A.I. agents is to create a currency that gives access to CPU time and memory space (I'll assume CPU time is the primary concern). The currency's turnover per unit time (quantity times velocity) must be proportional to CPU calculations per unit time. Each calculation requires energy, and expansion of the A.I. system would mean gaining access to (creating or stealing) more CPUs (infrastructure). So a perfect parallel can be made between a specific type of A.I. and economics.
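
A toy sketch of that parallel (the class and names are mine, purely hypothetical): the currency is redeemable for CPU operations, and its supply is kept proportional to the operations per second the hardware can actually perform, mirroring the money/power proportionality above.

class ComputeEconomy:
    def __init__(self, ops_per_second, velocity_per_second, k=1.0):
        self.ops_per_second = ops_per_second    # the "infrastructure's" power
        self.velocity = velocity_per_second     # currency turnover rate
        self.k = k                              # coins per operation
        self.supply = k * ops_per_second / velocity_per_second

    def add_cpus(self, extra_ops_per_second):
        # expanding infrastructure -> expand the currency to keep value constant
        self.ops_per_second += extra_ops_per_second
        self.supply = self.k * self.ops_per_second / self.velocity

    def buy_compute(self, coins):
        # coins entitle the holder to a proportional number of operations
        return coins / self.k

econ = ComputeEconomy(ops_per_second=1e9, velocity_per_second=10)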

Slow inflation may be practical, violating constant value
How to increase and decrease the quantity of currency to assist the survival and expansion of the infrastructure is not obvious. It may be necessary to violate constant value. For example, there's a long history of erasing past debts as a way to keep the "1%" from having too much power (see Michael Hudson's "The Lost Tradition of Biblical Debt Cancellations"). A 2% annual inflation puts pressure on large holders of the currency to invest the capital in the economy directly or via loans, or lose value if they don't.

Monday, February 18, 2019

The Problem with Avalanche (BCH & Ava)

[

Update #3. Here's my rant in a comment to their Sept 26, 2019 dev meeting:

Avalanche is not a consensus mechanism for two related reasons: it does not quantify the voting population or detect network partitions. Not having Sybil or eclipse protection is not as big of a problem. It proves consensus only among its peers without knowing what the wider network thinks, even if it has Sybil & eclipse protection. It does not meet the "agreement" requirement mentioned in Wikipedia to be called a consensus mechanism. See Leslie Lamport's requirements for consensus, and Coda Hale's "You Can't Sacrifice Partition Tolerance" as an example of a researcher getting exasperated with people calling algorithms like Avalanche a consensus mechanism. Nakamoto consensus was Earth-shattering in its ability to get consensus in a distributed permissionless setting with Sybil, eclipse, and partition resolution (not just detection via slow solvetimes). VDF-POS is the only alternative (POS alone requires more excessive bandwidth as centralization & permission are increased). If you find something better like centralized staking for post-consensus, then you do not need Nakamoto consensus because you're unconsciously doing POS: Avalanche gets fast "consensus" at the cost of ruining partition detection, which means you must let a real consensus mechanism override it.

I bugged deadalnix & Emin about this 9 months ago and their position is "partitions are rare". That's true, but even with Sybil protection you can still have an eclipse problem, and more importantly it means Avalanche must not be the final say in consensus. Even if you let POW override it, when you get close to something working you'll realize a simpler semi-centralized technique using classical consensus will work better, because Avalanche is only useful for quickly resolving the opinion of a large set of voters. If you have a large set, your Sybil protection is going to require a lot of communication. A Sybil solution may also contain partition & eclipse protection, but keep in mind this conjecture: you can't carry Sybil (etc.) protection over to the speed of Avalanche in a way that maintains a combination of protection, speed, and decentralization that exceeds POS + classical methods. Maybe there is a reason the Avalanche researchers want to be anonymous. They are clearly well-published, so why hide when publishing this?

If you use a smaller set of voters, I think you'll find a better solution such as semi-centralized mempools using classical consensus to prevent double spends. Merchants would then trust the mempools for small-valued txns but realize they are not a guarantee like the actual blocks. If all nodes are used to agree on individual txns, then Avalanche can be used, but only as a suggestion for merchants and miners. To avoid full-blown POS that makes the consensus part of POW pointless, you would just not worry about Sybil protection, so POW would retain the right to overrule the centralized mempool or node-based Avalanche. The potential for preventing 51% attacks can only be achieved if you are basically, subtly, switching to a POS coin, not using the POW as consensus. It makes no sense to keep Nakamoto consensus if you're going to overrule it with pre- and post-consensus. You could just use POW in self-hashing txs to generate and distribute coin and throw Nakamoto consensus out the window. VDF-POS as I've described is the only other option.

If you want fast consensus that maintains Nakamoto consensus, use a DAG.  See the issue in my github for how to do a DAG.


]

[
Update #2. I recently learned BCH may allow the past 100 block winners to be a committee that participates in Avalanche to confirm txs. This is to provide Sybil protection, which was my main complaint below. However, Avalanche's main benefit is speed with little communication, achieved by sampling only your peers and hoping they are connected to a much larger network. With only 100 blocks (and maybe only 20 actual distinct miners or pools in the committee), there seems to be little to no advantage over classical consensus methods, which have the advantage of proving consensus instead of hoping for it. Avalanche is not a consensus mechanism because it does not prove agreement between all non-faulty nodes (see Wikipedia). It can't know if a majority consensus has been reached because it does not quantify membership participation. It does not know if the network is split (as in a DoS or eclipse attack that can be combined with a double spend) with each side giving different results. All it knows is whether your immediate peers agree. See this tweet thread for more of my recent comments on Avalanche and 0-conf:
https://twitter.com/zawy3/status/1174006755925417986

]



[ Update #1: I believe BCH and Ava are using Avalanche advantageously in this way: if the recipient is confident there is not a 51% attack or network partition in progress, then he can be sure a double spend will not be allowed. But it invites 33% Sybil attacks on nodes (either locally or globally) to trick nodes into pre-approving txns that POW will have to overturn. The difference between a DAG and Avalanche is that a DAG measures network hashrate integrity by having lots of blocks solved quickly. Neither avoids POW's 51% problem. ]

The problem with Avalanche is that it assumes a high level of node participation (the "membership" must not change too much, section 3.7). So there's no protection against network partitions. It assumes the network remains largely intact and does not say what happens when the minority side comes to a different conclusion. There's no mechanism to tell which is the larger side. The authors said they would address this in a later paper, which is harder to do than Avalanche itself. It achieves Consistency and Availability but assumes there is no network Partition. Ava and BCH said partitions are not part of the real world, but if partitions were not a big issue, Nakamoto (POW) consensus would not have needed inventing.
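
To make the limitation concrete, here is a sketch of the Snowball-style query loop at the heart of Avalanche, simplified from the whitepaper (k, alpha, and beta are the paper's parameter names; the Peer class and the fixed peer colors are my own stand-ins for a real network). A node only ever samples its own peers, so it learns nothing about overall membership or whether it is on the minority side of a partition.

import random

class Peer:
    def __init__(self, color):
        self.color = color
    def query(self):
        return self.color          # a real peer would also update its own state

def snowball(my_color, peers, k=10, alpha=8, beta=20):
    confidence = {}                # per-color confidence counters
    preference = my_color
    consecutive = 0                # consecutive alpha-majorities for the preference
    while consecutive < beta:
        votes = {}
        for p in random.sample(peers, k):   # samples *your own* peers only
            c = p.query()
            votes[c] = votes.get(c, 0) + 1
        winner = max(votes, key=votes.get)
        if votes[winner] < alpha:
            consecutive = 0        # no alpha-majority this round
            continue
        confidence[winner] = confidence.get(winner, 0) + 1
        if confidence[winner] > confidence.get(preference, 0):
            preference = winner
        consecutive = consecutive + 1 if winner == preference else 1
    return preference              # "decided", but only relative to the peers sampled

# 100 peers that already prefer "red": the node decides "red" quickly, with no
# idea whether those peers are a small partitioned or eclipsed slice of the network.
peers = [Peer("red") for _ in range(100)]
print(snowball("blue", peers))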

POW's magic is in selecting the chain history with the least sum of partitions (via highest cumulative work), with only one voting (hashrate) member (miner) per election (block) needing to communicate that he won, and everyone immediately agreeing without even communicating an acknowledgment. The next vote begins without any other communication. The size of the voting membership (hashrate) is also determined from those single winning announcements, which set the difficulty for the next election to keep an accurate average block time. It's an amazing achievement. An enormous amount of communication overhead is avoided by making the voters work. No membership list is needed because POW does not prove there was no partition. It only proves the chain had the route of least partitions, assuming a 51% attack has not occurred and will not occur.
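
For contrast with the sketch above, here is the fork-choice rule this paragraph describes, in miniature (not any particular client's code): nodes simply follow the tip with the greatest cumulative work, so no membership list or acknowledgment traffic is needed, only the winning blocks themselves.

def cumulative_work(chain):
    # each block's expected work is proportional to its difficulty
    return sum(block["difficulty"] for block in chain)

def best_chain(known_chains):
    return max(known_chains, key=cumulative_work)

chain_a = [{"difficulty": 10}, {"difficulty": 12}]                     # fewer, heavier blocks
chain_b = [{"difficulty": 10}, {"difficulty": 5}, {"difficulty": 6}]   # longer but less work
assert best_chain([chain_a, chain_b]) is chain_a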

If there is a network partition with Avalanche that coincides with conflicting spends on each side of the partition, the network is permanently forked. There's no mechanism to tell nodes which fork is correct unless it defaults back to POW. But if it defaults back to POW, a hidden chain's double-spends will overwrite the Avalanche-approved txns. BCH said their implementation will allow miners to include txns that Avalanche has not voted on (and will not require them to include Avalanche-approved txns...both as a way to claim POS is not superseding POW). This means there is no protection against a double spend, because the attacker only needs to get one block on the public chain to include txns that did not receive Avalanche approval, paving the way for double-spends on the hidden chain.

I've tried to come up with ways to "repair" Avalanche with membership metrics that will enable it to detect network partitions. Fast finality, if not all basic POS, requires proof of sufficient network integrity. If centralization is to be avoided, the nodes must independently conclude the necessary percentage of voting members are known to be participating. This is not trivial. I assume Casper and Dfinity are solving this problem in complicated (suspect) ways.  I'm attempting my own design in a future post.