Tuesday, March 29, 2016

Entropy: photons, gravity field, black hole entropy, life, universe, economics (Quora answer)

It is often said by experts that the entropy of the universe is not a well-defined concept, so it's hard to discuss and is speculative (see Wikipedia "heat death of the universe"). But I have included uncontested references as support for most of the following statements.


The basis of all standard models of the universe that begin with a big bang is that the entropy of the universe is constant per large-scale expanding volume of the universe. See Steven Weinberg's 1977 popular book "The First Three Minutes", where he says this entropy constant is more fundamental than the energy+mass constant of the expanding volume. Entropy is thereby said to be conserved on a "comoving" volume basis. This means the average entropy of very large fixed volumes of the universe (where you do not let the meters expand with time due to Hubble's constant) is decreasing. Entropy in "empty" space per fixed volume is decreasing. Gravitational systems where mass is concentrated (like solar systems, black holes, and galaxies) are always emitting radiation and thereby lowering their local entropy (I think of this emitted entropy as "allowing" the universe to expand).
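As a minimal numerical sketch of "conserved per comoving volume" (my illustration, not from the references above): assume a radiation-dominated toy universe where the photon temperature scales as 1/a for scale factor a, and use the standard photon-gas entropy density s = (4/3)*a_rad*T^3.

a_rad = 7.5657e-16  # radiation constant, J/(m^3 K^4)
T0 = 2.725          # photon (CMB) temperature today, K

for a in [0.5, 1.0, 2.0, 4.0]:        # scale factor; a = 1 is today
    T = T0 / a                         # photon temperature at that epoch
    s = (4.0 / 3.0) * a_rad * T**3     # entropy per FIXED cubic meter
    print(f"a={a:4.1f}  s_fixed={s:.3e} J/K/m^3  s_comoving={s * a**3:.3e} J/K")

The fixed-volume entropy density falls as the universe expands while s*a^3 holds constant, which is the sense in which entropy per fixed volume decreases under comoving conservation.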
"Entropy always increases" only applies to isolated systems that do not let energy or mass to be exchanged with the surrounding environment. But there is no such thing as an ideally isolated system in nature (see Wikipedia on "isolated system"). "Entropy always increases" is not an exact statement of the second law (reference: search "Feynman entropy"). The exact statement is "W=Q1-Q2" where W is the maximum work that can be extracted from heat reservoirs at T1 and T2 delivering heat Q1 and Q2 which are at different temperatures. For an isolated system  that includes the reservoirs, 0=Q1-Q2-W. But any fixed walls surrounding it will have a temperature that will emit radiation (heat) energy and thereby lower their entropy.  An isolated system with mass inside may have a lower-than universal average entropy and may internally increase in entropy as well as release entropy. Gravity therefore must cause a decrease in local fixed-volume entropy as mass concentrates, and mass concentrating is a direct lowering of entropy. But not only is the mass coming together, but the space-time 4D volume is decreasing from a gravitational effect, further reducing entropy compared to outside meter and time sticks. The number of x*(m*v) states decreases as x decreases, increases as mass increases, and the m/s of v cancels, so gravity space-time warping does not change entropy. I am thinking of the atoms only in terms of a gaseous entropy since the solid state is more complicated, and prior to complete black hole collapse, but these assumptions should lead towards the limits in the right direction.

Why should the entropy of a black hole begin with the same quantity as the entropy of the star that collapsed? (As I think is the common view.) Particles closer together means less entropy, and I just showed the gravity-field change does not have an effect. Imagine two types of particles of a monoatomic gas (no rotational energy, to keep things simple), one with positive charge and the other with negative, and they begin randomly distributed. As time goes on, let's say they attract and stick, creating a solid which has about 1/10 as much entropy. Apparently, a potential-energy gradient decreasing with distance, causing an attractive force, is negative entropy (since entropy is conserved): -d(P.E.) = T*dS. This agrees with the two equations I have for force: F = ma = -d(P.E.) and F = +T*dS. The energy in a gravity field decreases as particles come together, which apparently is a direct cause of increasing entropy (potential energy is converted to entropy without causing heat). This makes any view of the gravitational field affecting entropy by supposedly affecting space-time very problematic, since charges produce the same attraction without warping space-time.
 
The Earth's surface is an open system receiving sunlight energy, frictional heat energy from the moon's gravity (tidal) forces, and nuclear decay energy from the core, while releasing a lot of entropy to the universe in the form of photons. Earth's surface might be increasing, decreasing, or constant in its entropy (I can't find an answer; I delve into the effect of life in the next paragraph). A geologically dead planet receiving sunlight has a constant amount of entropy, but it is increasing the universe's entropy by turning directed high-energy sunlight photons into a much larger number of undirected low-energy photons as it is heated by the Sun. A photon of lower energy has the same amount of entropy as a high-energy photon because its lower momentum is offset by it being in a larger region of space (# of states = x*p). So being undirected and more numerous is what makes the re-radiated photons (of equal total energy to the incoming photons) have more entropy. See Wikipedia "photon gas", where S = 4/3*U/T; since T is randomly directed kinetic energy per particle (see the "ideal gas" temperature derivation) and U is total energy, the entropy of randomly directed photons is proportional to their number (equivalently, for fixed total energy, inversely proportional to their temperature), not to their individual energy.
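Putting rough numbers on the re-radiation claim (standard approximate temperatures, my illustration): with S = (4/3)*U/T, the same energy absorbed at the Sun's radiation temperature and re-emitted at Earth's temperature carries roughly T_sun/T_earth times the entropy, and since the mean photon energy is ~2.7*k*T, the photon count grows by about the same factor.

T_sun = 5800.0    # effective solar radiation temperature, K
T_earth = 255.0   # Earth's effective radiating temperature, K
U = 1.0           # one unit of energy in = one unit out (steady state)

S_in = (4.0 / 3.0) * U / T_sun     # entropy of the incoming photon gas
S_out = (4.0 / 3.0) * U / T_earth  # entropy of the re-radiated photon gas
print(f"entropy (and photon-count) multiplication: {S_out / S_in:.0f}x")  # ~23x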
 
Life is the result of an influx of Gibbs free energy from the Sun, moon, core, and pre-existing geology, which produces spontaneous chemical reactions. "Genes" have no physical force in and of themselves, but are durable physical-matter catalysts that are the result of the past dynamics of the available matter under the physical-force influence of the pre-existing and continuing influx of Gibbs free energy. Genes have a continuing effect on the remaining non-gene matter because of the continuing influx of Gibbs free energy. "Life creates" (more correctly: Gibbs free energy produces) solid structures which have lower entropy, but it also emits gases (such as O2 and CO2) that have high entropy, which approximately cancel the effect (according to my summing of the specific entropy (entropy/mole) of the products and reactants). But the products of these reactions that we call life and find useful in economics are the parts that have lower entropy per mass compared to the original ores or gases that we used as inputs.
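Here is that entropy summing for one such reaction, photosynthesis (6 CO2 + 6 H2O(l) -> C6H12O6(s) + 6 O2), using approximate textbook standard molar entropies; this is my worked example, so treat the exact figures as ballpark:

# standard molar entropies, J/(mol*K), approximate textbook values
S = {"CO2(g)": 213.8, "H2O(l)": 70.0, "glucose(s)": 212.0, "O2(g)": 205.2}

S_react = 6 * S["CO2(g)"] + 6 * S["H2O(l)"]
S_prod = S["glucose(s)"] + 6 * S["O2(g)"]
print(f"reactants: {S_react:.0f}  products: {S_prod:.0f}  "
      f"net dS = {S_prod - S_react:+.0f} J/(mol K)")

The six moles of O2 released carry back most of the gas entropy lost with the CO2, so the net change (about -260 J/(mol K) per mole of glucose) is small next to the ~1700 J/(mol K) flowing through the reaction, which is the approximate cancellation claimed above.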
 
Structures under higher and higher economic (thinking, cooperative) control have lower and lower entropy, because having lower entropy means they are increasingly in "known" states, which allows more control, giving rise to steel, motors, solar cells, carbon fiber, nanotubes, and computer chips, in concert with pre-existing DNA crystals, bones, shells, and teeth. Stronger bonds almost always mean lower entropy because the atoms can't jiggle as far away into different states (states ~ momentum times position = x*p). Entropy per mole of a material = a*ln(x*p) + b, where a and b are constants for a given phase (gas, liquid, solid) of a given set of molecules or atoms. x*p depends on volume and temperature, and "a" and "b" depend on mass per particle, the ratio of internal energy to temperature (random kinetic energy) of each molecule or atom, and the degree of dependency of the x*p states of each molecule or atom on each other (see "phonons").
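The a*ln(x*p) + b form can be seen in the Sackur-Tetrode equation for a monatomic ideal gas, where the log's argument is essentially (x*p/h)^3. A minimal sketch (standard physics; argon at 298 K and 1 atm is my choice of example):

import math

k = 1.380649e-23     # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s
NA = 6.02214076e23   # Avogadro's number

def sackur_tetrode(m, T, V, N):
    """Entropy (J/K) of N monatomic ideal-gas atoms of mass m in volume V at temperature T."""
    lam = h / math.sqrt(2 * math.pi * m * k * T)        # thermal de Broglie wavelength ~ h/p
    return N * k * (math.log(V / (N * lam**3)) + 2.5)   # ln of ~(x*p/h)^3 plus a constant

m_ar = 39.95e-3 / NA   # mass of one argon atom, kg
V_molar = 0.0244       # molar volume at 298 K and 1 atm, m^3
print(f"S(argon) = {sackur_tetrode(m_ar, 298.0, V_molar, NA):.1f} J/(mol K)")  # ~154.8, matching tables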

Saturday, March 19, 2016

entropy as a force, evolution, economics, dynamics

He does not explain the variables clearly enough for me to understand or use. Your summary of it, which he states in a similar way, is a common tactic in competition against others or nature when the future is less predictable. But in well-defined problems that you can model completely, there is a specific path chosen that can greatly violate this rule (as viewed from the perspective of someone with less knowledge). But from an omniscient view, knowing the best path is like never having an option except to choose that path. Not knowing or being able to control the future is intrinsic to wanting to keep options open. "Options" depends on amount of knowledge, like "order" and entropy. "Entropy" is a measurable thing not subject to opinion, but it is a willful blinding to most of the data: it throws out the details of the exact momentum and position of each particle and relies only on the bulk measurements of T, P, N, and V.
I believe "keeping options open" is equal to not spending Gibbs free energy which a combination of potential energy and pre-existing order.  He shows a weight on a stick that automatically balances. I do not know how he got from his equation to that, but maintaining potential energy achieves the same thing. It is also the lowest-entropy state: we can see it is straight up which is a defined position, but it may fall in many different directions. It is keeping options open, and yet it is preventing entropy from being increased.  As you've listed the equation, the standard entropy force, it should immediately fall. This is not exactly the equation he uses. His subscripts are saying max entropy gradient over a certain time, or something like that. It seems his equation is maintaining the gradient at a max which is right before it "decides" which way to fall. 
We keep assets (potential energy in a low-entropy state) that we believe will maintain value (be convertible to work energy with minimal entropy production, aka efficient) in times of turmoil. It could be dollars, gold, bitcoin, skill, or social contacts. But if we know the future, we spend it on other things that will result in the most long-term profit (acquisition of other low-entropy, potential-energy assets, like a stock that may fluctuate wildly in value). The end goal is supposedly to transfer the potential-energy gains to our offspring so they can survive in the future, where "offspring" could be children, ideas, machines, or the greater community.
You have cast this physics idea as under the control of intelligence, but I believe the physics of matter causes intelligence to occur. I believe the "meaning of life" (the goal of godless, mindless dynamics) is to emit entropy to the universe while decreasing it locally and increasing local potential energy. For example, biology seems to be replacing itself with more efficient machines that depend on very strong bonds in order to acquire sunlight energy and to use that energy to move matter, make structures, and think about how to do it most efficiently and profitably (more potential energy and lower entropy in the end). These very strong bonds in today's society depend on oxygen atoms being removed from carbon (nanotubes, graphite, carbon fiber), metal ores, and silicon (CPUs and solar cells). I do not know the chemistry of cement well enough to include it. Imagine society if the silicon, metals, and carbons did not exist (or they still had their natural oxygens attached). Look at the exponential trends of these increasing as biology decreases in relevance to the economic (economizing) machine. Economizing means spending energy and order (ore in a vein is pre-existing low entropy) efficiently, but what is economics efficiently trying to do? The will of people? No. That assumes mind controls matter, but mind is not separable from body, and mind has no pre-existing force that dynamics did not give it. Believing mind controls us is like believing in a soul. It's a handy but false model, like a "selfish" gene. Mind and genes are implementing physical dynamics. Genes are the physical memory of past dynamics. Mind is a chemical reaction, not a pre-existing or fundamental force.
Evolution is the result of the dynamics of matter. Matter moves as a result of energy potentials (F = -dU) and low entropy (F = T*dS). We think mind controls, but it is the expression of past dynamics under these two forces. Schrodinger initially said "negative entropy" was key to life, but in an update he corrected himself to say it was Gibbs free energy, dG = dU + p*dV - T*dS. For Earth's surface, p, V, and T are roughly constant, and the remaining dU and dS are the two forces I've just mentioned. The minus sign works out because here we have F = T*dS and all other forces are the result of F = -dU. F = ma implies an intelligence behind the F to move the m, but the F is ultimately sourced from -dU and T*dS. -dU + T*dS = F = ma, not Mind + genes = F = ma.
I view evolution as dynamics creating memory/computation systems that catalyze matter into forming more memory/computation systems. Structures and bodies are the support structures for minds to create more minds, so it is easy to see why we think of mind as a primary force. But the primary source is dynamics. "We" (dynamics) cash in on the environment's dU and -dS to replenish and add more of them to our own systems (energy would be released if we put the oxygen back into our silicon, metals, and carbon, and entropy would increase back to where it was). The dynamics trend is to create local order and increased potential energy, cashing in on energy from the sun. Fossil fuels are being depleted, but there's potential energy in the new bonds we are creating, and they are harder to ignite. Entropy does not increase on Earth because it is an open system.
When oxygen is removed, the refined solids have roughly 1/5 as much entropy. The waste heat and entropy emitted in the refining are released to the universe, leaving a much lower entropy in the solids that make up society. They also acquire a large amount of potential energy, like fossil fuels compared to the previous ores. Denser materials result in lower entropy because less flexibility means fewer position*momentum states that can be occupied at a given temperature. This also means the materials are more predictable and controllable in a deep sense that makes them more efficient in structures, thinking machines, energy acquisition, and movement of matter.
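A rough entropy budget for one such reduction, Fe2O3 -> 2 Fe + 3/2 O2, with approximate textbook standard molar entropies (my worked example; the exact factor varies by material):

S = {"Fe2O3": 87.4, "Fe": 27.3, "O2": 205.2}   # J/(mol*K), approximate

S_ore = S["Fe2O3"]        # entropy held in the solid ore
S_metal = 2 * S["Fe"]     # entropy left in the refined solid
S_gas = 1.5 * S["O2"]     # entropy carried off by the released oxygen
print(f"solid: {S_ore:.1f} -> {S_metal:.1f} J/(mol K)")
print(f"exported as O2 gas: {S_gas:.1f} J/(mol K)")
print(f"net reaction dS = {S_metal + S_gas - S_ore:+.1f} J/(mol K)")

The solid left behind holds less entropy than the ore, while the reaction as a whole exports a large amount of entropy in the gas, consistent with the picture above.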

Monday, March 7, 2016

Drawing Homer Simpson with a Fourier transform

I was able to use the pixels directly as a complex number x + i*y in the 3rd equation on Wikipedia's discrete Fourier transform article. This means in Excel I had a table for the real part of each Xk that was x*cos(-2*pi*k*n/N) - y*sin(-2*pi*k*n/N) for each x pixel value as n pixels (rows) went from 0 to N-1 and as k columns also varied from 0 to N-1, so it was a table with N rows and N columns. Then another table for the Xk imaginary part was y*cos() + x*sin(). Each column was summed for each table, so there was a real and imaginary part for each k, which is the goal, the Fourier transform. To generate the image back again from these complex numbers, the 4th equation on the wiki page is used. So the R of each circle in the Homer Simpson epicycle video is each |Xk|/N. The rotation rate for each circle is 2*pi*k. n/N is the "time", the fraction of the circle that has been completed. k=0 gives the offset of the image away from the origin. k=1 is the largest circle with the slowest rate. The angle of Xk is an offset.
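The same spreadsheet procedure in a few lines of Python with numpy (a sketch; the little square path below stands in for a real pixel trace):

import numpy as np

# stand-in for pixel coordinates traced along the outline, in drawing order
pts = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
z = np.array([x + 1j * y for x, y in pts])   # each pixel as x + i*y
N = len(z)
idx = np.arange(N)

# forward DFT (the wiki's 3rd equation): X_k = sum_n z_n * e^(-2*pi*i*k*n/N)
X = np.array([(z * np.exp(-2j * np.pi * k * idx / N)).sum() for k in range(N)])

# epicycle parameters: circle k has radius |X_k|/N and starting angle angle(X_k)
radius, phase = np.abs(X) / N, np.angle(X)

# inverse DFT (the 4th equation) recovers the traced pixels
z_back = np.array([(X * np.exp(2j * np.pi * idx * m / N)).sum() / N for m in range(N)])
print(np.allclose(z, z_back))   # True: the circles redraw the figure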

The pixels do not need to be equally spaced along the path of the line trace, and you can even reverse course in collecting pixels on the line if an end-of-line is reached. Notice this occurs on the back hair in this video: the course was traced back over the hairs. My version of Excel can only do 243 points due to column limitations. It's a lot easier to program it than to use a spreadsheet. The mobilefish site allowed me to extract x,y coordinates from an uploaded image, but I could not really get them equally spaced, so my results were not very good. The more points you have, the more precision you need in making sure they are equally spaced. Also, you have to end one position next to the pixel you began on.
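One way to get that equal spacing (my addition, not something the post did): resample the extracted coordinates at uniform arc length before taking the DFT.

import numpy as np

def resample_equal_arclength(x, y, n_out):
    """Return n_out points spaced evenly along the polyline (x, y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    seg = np.hypot(np.diff(x), np.diff(y))          # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])     # cumulative arc length
    s_new = np.linspace(0.0, s[-1], n_out)          # equally spaced stations
    return np.interp(s_new, s, x), np.interp(s_new, s, y)

# usage: xs, ys = resample_equal_arclength(raw_x, raw_y, 243)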

Here's the video of drawing Homer Simpson as a sum of sinusoids

https://youtu.be/QVuU2YCwHjw

Here are my results on Homer Simpson and the Batman symbol:



YouTube comment giving details on how to do it:
This turned out very easy: Let the pixels equal a complex number x + i*y in the 3rd equation on Wikipedia's discrete Fourier transform article. This gave me a table for the real part of each Xk that was x*cos(-2*pi*k*n/N) - y*sin(-2*pi*k*n/N) for each x pixel value as n pixels (rows) went from 0 to N-1 and as k columns also varied from 0 to N-1, so it was a table with N rows and N columns. Then another table for the Xk imaginary part was y*cos() + x*sin(). Each column was summed for each table, so there was a real and imaginary part for each k, which is the goal, the Fourier transform. To generate the image back from these complex numbers, the 4th equation on the wiki page is used (each circle in this video is from the 2nd equation and is 1/N*Xk*e^(i*z) = R*e^(i*(z+Wk)), where Xk = A + i*B = Rk*e^(i*Wk) from Euler's formula and z = 2*pi*n*k/N). So the R of each circle is each |Xk|/N. You can reverse course in collecting pixels on the line if an end-of-line is reached and you're drawing a line between points. Notice this occurs on the back hair in this video. My version of Excel can only do 243 points due to column limitations. It's a lot easier to program it than to use a spreadsheet, but I can't program the plotting. The mobilefish site allowed me to extract x,y coordinates from an uploaded image.
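The comment's epicycle form, 1/N*Xk*e^(i*z) = R*e^(i*(z+Wk)), as a short reconstruction function (a sketch assuming X is the DFT array from the code above):

import numpy as np

def pen_position(X, t):
    """Tip of the chain of epicycles at fraction t (0..1) of one full trace."""
    N = len(X)
    R = np.abs(X) / N          # radius of circle k is |X_k| / N
    W = np.angle(X)            # starting angle of circle k
    ks = np.arange(N)          # circle k turns k times per trace (z = 2*pi*k*t)
    return (R * np.exp(1j * (2 * np.pi * ks * t + W))).sum()

# sweeping t from 0 to 1 moves the pen once around the whole drawing:
# path = [pen_position(X, t) for t in np.linspace(0.0, 1.0, 500)]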

Thursday, March 3, 2016

Currency should "reproduce" as a result of and/or to allow infrastructure to reproduce in the same proportion

If the sole objective of an economy is to survive and reproduce, then the currency (that gives agents permission to act using the system's resources) should be increased to match a recent or expected increase in the infrastructure's ability to survive and reproduce. Deciding how to inject the new currency is part of the intelligence of the governing of the economy and currency. Evolution indicates that "happiness per person" is not the "infrastructure" goal that is being increased. The goal appears to be "harder" memory systems. It is not only longer-lasting "genes" (in addition to the ability to reproduce accurately* and plentifully**) in the face of assaults that benefit from "hardness" (strength of bonds); hardness also results in fewer possible states (lower entropy), which seems to give rise to better communication within the system (i.e., better control, cooperation, and modeling, even in competitive interactions seeking better end results). The hard structures support highly controllable and flexible movement of the much smaller "thought" elements (molecules, ions, electrons, and/or photons) that are needed for modeling possibilities or algorithms, and catalysts implementing known actions working towards the hard support structures. Harder structures result from stronger bonds, which are denser, not counting gaps or air pockets that might be needed for lightness, insulation, or encapsulation. Stronger bonds mean lower entropy because fewer states per molecule are possible (for a given temperature, S = k*ln(1.46*p*x/h) for a single harmonic oscillator (atom or phonon) constrained to 1D).
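The direction of that last claim can be checked with the standard classical 1D harmonic-oscillator entropy S = k*(1 + ln(kT/(hbar*omega))), which, like the k*ln(1.46*p*x/h) form above, grows as the log of the occupied x*p phase space (the bond frequencies below are illustrative, not measured values):

import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s
T = 300.0                # temperature, K

for name, omega in [("soft bond ", 1e13), ("stiff bond", 1e14)]:  # rad/s, illustrative
    S = k_B * (1 + math.log(k_B * T / (hbar * omega)))
    print(f"{name}: omega = {omega:.0e} rad/s -> S = {S / k_B:.2f} k_B")

Stiffening the bond by 10x cuts the oscillator's entropy by k_B*ln(10): fewer accessible states, as claimed.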

Happiness in a majority of individuals is a side effect of a succeeding system, not an apparent goal of evolution. Infrastructure is a mass of matter that includes algorithms, people, and other machines. It appears the primary purpose these days is to remove oxygen from metals, metalloids, and carbon, which creates an infrastructure that has lower entropy, and to discover and tap into sunlight or more ores that are low-entropy concentrations that can be used to incorporate more mass into hard structures within a cooperative system. It emits oxygen gas, which means there is no net entropy reduction, but the mass within the system itself has lower entropy. More efficient algorithms have lower real entropy due to Landauer's limit.
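For scale, Landauer's limit as a number (standard constants; room temperature is my choice):

import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# erasing one bit dissipates at least k*T*ln(2) of energy,
# i.e. produces at least k*ln(2) of entropy
print(f"minimum energy per erased bit: {k_B * T * math.log(2):.2e} J")   # ~2.9e-21 J
print(f"minimum entropy per erased bit: {k_B * math.log(2):.2e} J/K")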

Is lowest entropy/mole the most survivable and expanding system? Should a governing system make laws and expand currency where it projects the greatest entropy/mole reduction will occur? Does this follow from and/or assist the least action principle? Is lower entropy/mole = happiness/mole? Is happiness = simplicity, a single state, or the progression towards it? More likely (more traditionally), evolutionary success = d(Gibbs free energy) = dU - T*(dS/mole)/dt? Is the efficiency sought the ability to use external Gibbs free energy to increase controllable Gibbs free energy?

*Or modified, if it's necessary to hide from viruses or exploit nearby solutions. **Also, the entire idea of faithful reproduction is not a "goal" of "selfish" genes but is just the best solution found so far that potential-energy gradients are pushing into existence.

Still weak on my practical entropy equations, but here goes:
If each independent agent (or mole of matter) has lower kinetic energy or entropy, then less heat (random kinetic energy) is present via Q = S*T. If incoming Sun energy is not simply being reflected back to space (the albedo factor has not changed), then a lower Q means more potential energy is being stored. This lowers the average kinetic energy minus average potential energy, which is a lower value for the action, as I have been expecting (evolution towards lower entropy is simply an expected result of dynamics). If there is more oxygen in the atmosphere instead of CO2, T will be lowered, which may decrease local Q without decreasing S. Solar cells increase T, which increases Q without necessarily lowering S.