Someday I want to learn this material ( http://www.schwartz-home.com/RIOTS/ and http://www.sbsi-sol-optimize.com/index.htm and https://en.wikipedia.org/wiki/Bellman_equation ) and see how it might be combined with compression/pattern-recognition techniques to make A.I. A smart machine seems to need the following:
1) desire (setpoint)
2) environment
3) modeling/predicting/compressing 2)
4) identifying which inputs (or levers inside the model) can be controlled for maximizing 1)
5) controlling the identified inputs or levers for maximizing 1)
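As a toy sketch of how the five pieces fit together (everything here is hypothetical and invented for illustration, not from the links above): a one-dimensional "environment" x' = A*x + B*u, a setpoint for 1), a least-squares fit of A and B for 3), and then 4) and 5) amount to recognizing that u is the controllable lever and choosing it so the model's prediction lands on the setpoint.

```python
import random

random.seed(0)
SETPOINT = 10.0              # 1) desire (setpoint)
A_TRUE, B_TRUE = 0.9, 0.5    # 2) environment: x' = A*x + B*u + noise (unknown to the agent)

def env_step(x, u):
    return A_TRUE * x + B_TRUE * u + random.gauss(0.0, 0.01)

# 3) modeling: observe random transitions, then fit A and B by least squares
xs, us, ys = [], [], []
x = 0.0
for _ in range(200):
    u = random.uniform(-1.0, 1.0)
    y = env_step(x, u)
    xs.append(x); us.append(u); ys.append(y)
    x = y

# normal equations for [A, B] in y = A*x + B*u (2x2 solve, no libraries)
sxx = sum(v * v for v in xs); suu = sum(v * v for v in us)
sxu = sum(p * q for p, q in zip(xs, us))
sxy = sum(p * q for p, q in zip(xs, ys))
suy = sum(p * q for p, q in zip(us, ys))
det = sxx * suu - sxu * sxu
A_hat = (sxy * suu - suy * sxu) / det
B_hat = (suy * sxx - sxy * sxu) / det

# 4) the identified lever is u; 5) control: pick u so the model predicts the setpoint
x = 0.0
for _ in range(50):
    u = (SETPOINT - A_hat * x) / B_hat
    x = env_step(x, u)
```

The hard part the post points at is exactly what this sketch assumes away: here the lever u is given in advance, whereas 4) asks the machine to discover which inputs are levers at all.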
Item 5) is called "optimal control", the goal of the links above. Item 4) needs to be solved before 5) can be applied. This seems to be what's missing in automated A.I. that seeks to maximize profit in a general environment. Items 3) and 5) have a lot of research behind them. Maybe evolution (competition for the best energy sources) has preprogrammed animals to possess 4), but humans seem good at taking the generalization a step higher. The modeling part, if HTM/CLA methods are correct, needs nested, hierarchical prediction competitors. It's interesting that SNOPT (the 2nd link above) uses sparse techniques to solve non-linear optimization problems, that the Bellman method solves the problem by working backwards from the endpoint, and that the cost function to be minimized is a Lagrangian, with Hamilton's name also attached (Bellman's equation is the discrete-time counterpart of the Hamilton–Jacobi approach).