Relaxation networks
31 Jan 2019

...Back to page 1

The simplest kind of computational network is one that does no "work". Its nodes perform no calculation other than to average the values of their inputs and send the result on to other nodes, which do the same. The weight assigned to each connection is 1, so there is neither amplification nor attenuation of any signal. This kind of network is "unpowered": it simply decays from its initial state to some final state. The initial state of the network is a set of random, but unequal, values (real numbers, complex numbers, or vectors of complex numbers), one assigned to each node. In general, when started, this kind of network converges to a single value at all nodes that is not one of the initial values, but is near the average of all initial values. Starting the network from a different initial state, with a permuted set of values assigned to the nodes, results in convergence to a different final value:
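The averaging dynamics can be sketched in a few lines (a hypothetical simulation, assuming each node replaces its value with the plain mean of its inputs; the node count, connection probability, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20

# random connectivity (self-inputs allowed); True means "j is an input of i"
C = rng.random((n, n)) < 0.5
assert C.sum(axis=1).min() > 0          # every node has at least one input

W = C / C.sum(axis=1, keepdims=True)    # averaging weights: each row sums to 1

x0 = rng.random(n)                      # random, unequal initial values
x = x0.copy()
for _ in range(200):
    x = W @ x                           # each node averages its inputs

# all nodes settle on one value that lies within the range of the
# initial values, near (but not exactly at) their average
print(np.ptp(x) < 1e-9, x0.min() <= x[0] <= x0.max())
```

Because each step is a convex combination of the previous values, the final common value is guaranteed to lie between the smallest and largest initial values.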


There are some caveats to this general behavior:

  1. We assume the network is well-connected in the sense that every node has several inputs to average. These inputs can come from any nodes in the network (including self-input), irrespective of location. If the value of a node and each of its inputs is the same, the average will also be the same, and there will be no change to the output. Thus every real value is a fixed point of a node's averaging (an "eigenvector" with eigenvalue 1), and the eigenvectors of the overall network are vectors whose components are all equal (to any real value).
  2. If a node has no inputs, it will become zero on the first step and propagate that zero to the rest of the network, eventually forcing all nodes to converge on zero.
  3. If a node has itself as the only input, its value will remain the same, and that value will be propagated to the rest of the network, eventually forcing all nodes to converge on that value.
  4. If a network has some nodes with zero inputs, and some with only themselves as inputs, these fixed values compete, and the final state is no longer a single common value (although there may be a network eigenvector with unequal values).

The networks shown above are globally connected with a connection probability of 0.5 per pair of nodes (including self-inputs). As the connection probability is decreased, some nodes will have no inputs, or only inputs from themselves:
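Caveat 2 is easy to reproduce by hand. A minimal sketch (the 4-node input lists and initial values are hypothetical; a node with an empty input list is set to zero, per the caveat, and replacing the empty list with `[0]` would give caveat 3 instead):

```python
import numpy as np

# inputs[i] lists the nodes that feed node i; node 0 has no inputs
inputs = [[], [0, 1], [1, 2], [2, 3]]

x = np.array([0.9, 0.4, 0.7, 0.2])
for _ in range(500):
    x = np.array([x[ins].mean() if ins else 0.0 for ins in inputs])

# node 0's zero propagates until the whole network converges on zero
print(np.abs(x).max())
```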


A similar situation occurs in networks with complex values. Sometimes these networks converge on a single complex value at all nodes, and sometimes they converge on a set of different, but stationary, values, one at each node. In the complex case, connection weights can all be equal to 1, or they can be distributed on the unit circle of the complex plane (magnitude 1, but phases from -pi to pi), either randomly or based on the distance between nodes, so that inputs experience a phase delay which increases with distance. When complex inputs are averaged, they combine as 2d points in the plane, so the phase differences between them become very important:
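With all weights equal to 1, the complex case reduces to the same averaging applied to 2d points (a sketch of that simplest variant only; the phase-delay variants would replace the entries of W with unit-magnitude complex weights):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
C = rng.random((n, n)) < 0.5
assert C.sum(axis=1).min() > 0          # every node has at least one input
W = C / C.sum(axis=1, keepdims=True)    # weights of 1, averaged per node

# random initial values on the unit circle of the complex plane
z = np.exp(1j * rng.uniform(-np.pi, np.pi, n))
for _ in range(300):
    z = W @ z                           # complex inputs average as 2d points

# every node converges on the same single complex value
print(np.abs(z - z.mean()).max())
```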




As the connectivity of the network is made more local (the probability distribution of input vs. distance favors closer connections) and more sparse (fewer inputs per node), eventually the network ceases to be stationary: some nodes no longer converge on constant values. This is due to the emergence of loops in the connectome which keep cycling the same sequence of values through a corresponding set of nodes. These patterns are easy to see, and quite interesting, in 2d:
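The loop mechanism itself can be isolated in a bare directed ring, where averaging a single input just passes the value along (a minimal sketch of the cycling behavior, not the 2d networks shown here):

```python
import numpy as np

n = 5
x = np.arange(n, dtype=float)           # five distinct node values
x0 = x.copy()

for _ in range(n):
    x = np.roll(x, 1)                   # each node copies its one input

# the same values cycle around the loop forever: after n steps the
# initial pattern returns, so the nodes never converge
print(np.array_equal(x, x0))
```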


Networks containing two complex numbers per node represent 1-bit elliptical states on the computational sphere. In general, well-connected global networks of this type also converge on a single state, or on a small number of states:
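The same linear argument applies component-wise, so a well-connected network of two-component complex states also relaxes to consensus (a sketch; the states here are not renormalized after averaging, which is an assumption of this illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
C = rng.random((n, n)) < 0.5
assert C.sum(axis=1).min() > 0
W = C / C.sum(axis=1, keepdims=True)

# each node holds a 1-bit state: a vector of two complex numbers
psi = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))

for _ in range(300):
    psi = W @ psi                       # average the input states

# the whole network converges on a single state at all nodes
print(np.abs(psi - psi[0]).max())
```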





As these networks become more local and more sparse, they also become more dynamic. Again, these are "unpowered" networks (no amplification or attenuation) which do no computational "work" other than to average a set of input states and send the result on to other nodes:


Here are some additional interesting state networks:






The topic of loops in computational networks is interesting and worthwhile in its own right. For example, using loops it is possible to create networks that have a constant overall state, but a constantly changing sequence of node values:


This phenomenon might be important in real-world networks that have to "keep going" at all times due to some physical or biological constraints, but which need to maintain a constant, or slowly varying, state. Using phase delays, even the simplest loop can exhibit complex node behavior, but simple state dynamics:
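One way to see this with the simplest loop (a hypothetical 4-node ring in which every connection applies a phase delay of pi/2; the delay value and initial magnitudes are arbitrary):

```python
import numpy as np

n = 4
delay = np.exp(1j * np.pi / 2)          # each connection rotates by pi/2

z = np.array([1.0, 0.5, 0.25, 0.125], dtype=complex)
z0 = z.copy()

for _ in range(8):
    z = delay * np.roll(z, 1)           # pass values along, with a phase delay
    # the set of magnitudes never changes, even though every node value does
    assert np.allclose(np.sort(np.abs(z)), np.sort(np.abs(z0)))

# after 8 steps (two circuits, total phase 2*pi) the pattern recurs exactly
print(np.allclose(z, z0))
```

Every node value changes at every step, but the set of magnitudes is merely permuted, so the overall state is constant in that sense.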


By making connections "birefringent" (i.e. having separate phase delays for each component of a state), the state dynamics can be made more complex in a systematic way without increasing the complexity of the network:
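A birefringent version of the same idea (a sketch, assuming "birefringent" means one phase delay per state component on each connection; the delays 2*pi/3 and 2*pi/5 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3                                    # a bare 3-node loop

# a separate phase delay for each of the two state components
d = np.exp(1j * np.array([2 * np.pi / 3, 2 * np.pi / 5]))

psi = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
psi0 = psi.copy()

for _ in range(15):
    psi = d * np.roll(psi, 1, axis=0)    # each component delays at its own rate

# the loop positions recur every 3 steps, but the two components drift in
# relative phase; the joint pattern only recurs after lcm(3, 3, 5) = 15 steps
print(np.allclose(psi, psi0))
```

The network itself is unchanged (still a plain 3-node loop); only the per-component delays make the state dynamics richer, which is the point of the birefringent construction.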



At present, I am concerned with an even simpler looping network of just two nodes. In this network, each node performs the simplest kind of computation on a 1-bit state: a Hermitian operator. This is the topic of the next two pages.


On to page 3...
©Sky Coyote 2018-2019