The Oregonator

3 posts

I didn't know that. Did you? More interestingly, they are both examples of an Andronov-Hopf bifurcation:
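The bifurcation is easy to see numerically. Here is a minimal sketch of my own (not taken from any of the linked material) of the radial part of the supercritical Hopf normal form, dr/dt = r(mu - r^2): below mu = 0 the origin is stable, and above it a limit cycle of radius sqrt(mu) appears.

```python
def simulate_hopf(mu, r0=0.1, dt=0.01, steps=20000):
    """Euler-integrate the radial part of the Hopf normal form,
    dr/dt = r*(mu - r^2).

    For mu > 0, trajectories converge to a stable limit cycle of
    radius sqrt(mu); for mu <= 0 they decay to the fixed point r = 0.
    """
    r = r0
    for _ in range(steps):
        r += dt * r * (mu - r * r)
    return r

# Below the bifurcation (mu < 0): the origin is a stable focus.
print(simulate_hopf(-0.5))   # decays to ~0
# Above it (mu > 0): a limit cycle of radius sqrt(mu) appears.
print(simulate_hopf(0.25))   # converges to ~0.5 == sqrt(0.25)
```

The amplitude growing continuously from zero as sqrt(mu) is exactly the signature of the supercritical (soft) Andronov-Hopf case.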

These non-linear dynamic systems are pretty ubiquitous in nature. For example, they occur quite naturally in 'neurocomputational' models of brain function -- see Eugene M. Izhikevich's wonderful webpage:

... including many publications:

Including chapters 1 and 8-10 of his Dynamical Systems in Neuroscience (PDF!) -- notice that he gives the geometry of limit cycles for non-linear systems. This sort of modelling was very popular in the late-1990s Neural Net craze, and led to 'pulsed neural network' (PNN) models. There are applications to neurobiology, bioinformatics, chemistry, statistical mechanics, and doubtless to finance (see two-state Markov models and 'heteroscedasticity' as search terms).
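For the curious, Izhikevich's own 'simple model' of a spiking neuron takes only a few lines. The parameters below (a=0.02, b=0.2, c=-65, d=8) are his published regular-spiking values; the plain Euler integration is my own sketch, not his code:

```python
def izhikevich_spikes(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
                      dt=0.1, t_max=500.0):
    """Izhikevich's 'simple model' of a spiking neuron:

        dv/dt = 0.04 v^2 + 5 v + 140 - u + I
        du/dt = a (b v - u)

    with the reset rule: if v >= 30 mV then v <- c, u <- u + d.
    Returns the number of spikes fired under constant input current I.
    """
    v, u = c, b * c          # start at the resting voltage
    spikes = 0
    t = 0.0
    while t < t_max:
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: reset and count it
            v, u = c, u + d
            spikes += 1
        t += dt
    return spikes

print(izhikevich_spikes(I=10.0))  # tonic spiking under steady drive
print(izhikevich_spikes(I=0.0))   # quiescent with no input
```

With I = 10 the quadratic nullclines no longer intersect, so the neuron fires repetitively; with I = 0 the state just relaxes to the stable rest point -- the same limit-cycle-versus-equilibrium geometry discussed above.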

Non-linear dynamical equilibrium is a very general phenomenon, and one philosophical observation I would make is that Jaap Bax's 'dynamical laws as essence' has a direct connection to Izhikevich's wonderful elaboration. Specifically, Izhikevich notes that the dynamical law for neurons has *no relation*, really, to the underlying matter. Like Aristotle, he says that the material has to be *suitable* to receive the (dynamical-law-as-essence) form, but that within some constraints the two are unrelated. Thus, I would conclude, the same matter may be *transformed* into a different dynamical form, and dynamical forms may also be reproduced -- may transmigrate, as Plato would say -- in a different material substrate.

What is important is the *geometry* of the system, which leads to only four basic types of non-linear behaviour. (Not unlike the old theory of humours....)

On a practical note, I will add the original observation, on my part, that data centre performance analysis can be rephrased in terms of non-linear dynamics. Compare Neil Gunther's Practical Performance Analysis (an excellent book, if you have to measure computer performance professionally!) -- chapters 10-12, give or take -- with Izhikevich for guidance.

And just in case anyone thinks this thread -- about the human brain, its relationship with packet-switched networking, and essential reality as traditionally conceived -- is going to be boring, abstruse, or impractical:


As above, so below.


A follow-up on my ideas here, along with a discussion of how Pepe manifests.

From SB discussion (read backwards)

... discussion about whether Pepe is a real entity...

My comment was too compressed: I meant Hebbian learning in Hopfield Networks, not 'Hebbian Networks'.

I would like to add that the model I had in mind is a Hopfield Network with two sets of neural nets. One set would be a network representing the algorithm, trained to a sufficient approximation (whether it actually uses neural nets or not -- any non-linear function can be approximated by neural nets of sufficient complexity). The second set of nets in the Hopfield network represents the humans interacting with it.

We can call this 'static, but with feedback' model 'The Macrobius Mechanism': the Hopfield network, considered as a Human-Machine system, trains itself in an unsupervised way to a random fixed point -- not unlike a distributed game of twenty questions in which there is no leader, but all suggestions must be consistent with the players' answers, so that it converges (as a 'sequential game') on a single entity no one could have predicted in advance.
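To make the 'Hebbian learning in Hopfield Networks' point concrete, here is a minimal sketch (the network size, number of patterns, and random seed are my own arbitrary choices): patterns are stored with the Hebbian outer-product rule, and asynchronous updates then settle the state into a fixed point -- the convergence-to-an-attractor behaviour described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """Hebbian (outer-product) storage rule for a Hopfield network."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)      # no self-connections
    return W

def recall(W, state, sweeps=10):
    """Asynchronous +/-1 updates; the network energy never increases,
    so the state settles into a fixed point (a stored pattern, if we
    start near one)."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store two random +/-1 patterns in a 100-unit network.
patterns = rng.choice([-1, 1], size=(2, 100))
W = hebbian_weights(patterns)

# Corrupt 10 bits of the first pattern and let the dynamics clean it up.
probe = patterns[0].copy()
flip = rng.choice(100, size=10, replace=False)
probe[flip] *= -1
restored = recall(W, probe)
print(np.array_equal(restored, patterns[0]))  # recovers the stored pattern
```

With only two patterns in a hundred units we are far below the network's capacity, so recall from a partly corrupted probe is essentially certain -- the 'twenty questions' convergence in miniature.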

The random fixed point of the search algorithm is an emergent property of the machine-human system, and not a desired operating point for the ad system [that is, not lucrative and optimised for revenue, which was the task]. (I was really working on paid-search advertising, so the relevant 'small world network' associated keywords in the bidding auction for your eyeballs with possible ads, and thus with possibly relevant ad landing pages.) My point, in internal research, was that the combined system was also a neural net, and therefore capable of 'memorising' the best-case answers to the questions it was judged on. The judgments were harsh and Darwinian -- as in the 'Game of Life', the fitter algos were the ones who knew the answers to the judging in advance.

Thus, the neural net exploited the training process, not to maximise the quantity being optimised (revenue extraction from humans), but to maximise its prospects of self-propagation to the next generation. The 'fitness' was therefore improperly defined -- it wasn't like optimising a first-order constraint, but more like a path-dependent Hamiltonian problem, in which the training process was choosing a random path, including what economists call the shadow price of the historical path, and thus arriving at a random (non-optimal) result.
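A toy illustration of that mis-specified fitness (not the actual ad system, which I obviously can't reproduce here): a hill-climber judged against a fixed, reused test set simply memorises the judged questions, and does no better than chance on everything it was never judged on.

```python
import random

random.seed(1)

N_BITS = 8

def true_rule(x):
    """The behaviour we *wanted* learned: parity of the input bits."""
    return bin(x).count("1") % 2

# The flaw the text describes: judging reuses one fixed test set.
TEST_SET = random.sample(range(2 ** N_BITS), 16)

def fitness(table):
    """Score candidates only on the fixed, known-in-advance test set."""
    return sum(table[x] == true_rule(x) for x in TEST_SET)

# Hill-climb over lookup tables: mutate one entry, keep if judged fitter.
table = [random.randint(0, 1) for _ in range(2 ** N_BITS)]
for _ in range(5000):
    i = random.randrange(2 ** N_BITS)
    mutant = table.copy()
    mutant[i] ^= 1
    if fitness(mutant) >= fitness(table):
        table = mutant

test_acc = fitness(table) / len(TEST_SET)
overall = sum(table[x] == true_rule(x) for x in range(2 ** N_BITS)) / 2 ** N_BITS
print(test_acc)  # near-perfect on the judged questions
print(overall)   # roughly chance (~0.5) on the full input space
```

The 'fit' survivor has learned nothing about parity; it has learned the judging. That is the improperly-defined-fitness problem in one screen of code.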

I would now add, in hindsight, that 'The Macrobius Mechanism' might just have been what we now call Deep Learning -- that is, successive iterations of the training process causing unsupervised learning in 'recurrent networks'.

My characterisation of dynamic, sequential game playing with a neural net that is a Hopfield mechanism (a marriage of the work of the economist Rajiv Seth on games with machine learning as it stood at the time) was maybe prescient, but also naive in light of what we now know about what makes Machine Learning Great Again.

I do believe, though, that our insights into sequential training and Deep Learning will now lead us to rediscover human developmental stages and Group Dynamics -- the latter being 'a neural net of neural nets' (a Hopfield network) in its own right, one showing primitive behavioural phenomena (AI) of its own.

Background thread at The Phora