
Wednesday, 21 July 2021

A New Matrix Mathematics for Deep Learning: Random Matrix Theory of Deep Learning

 Preamble 

    Figure: Definition of Randomness
 (Compagner 1991, Delft University)
The development of deep learning systems (DLs) has raised our hopes of building more autonomous systems. Based on hierarchical learning of representations, deep learning defies basic learning theory and begs the question of rethinking generalisation. DLs in their vanilla form still lack the ability to reason with causal inference. Despite this limitation, however, they have given rise to rich new mathematical concepts introduced in recent work. Here, we briefly review a couple of these concepts and draw attention to the relevance of Random Matrix Theory in DLs and its applications to brain networks. In isolation these concepts belong to applied mathematics, but their interpretation and use in deep learning architectures has been demonstrated recently. In this post we provide a glossary of new concepts that are not only theoretically interesting but also directly practical, from measuring architecture complexity to establishing ensemble equivalence.

Random matrices can simulate deep learning architectures with spectral ergodicity

Random Matrix Theory (RMT) has its origins in the foundations of mathematical statistics and mathematical physics, pioneered by the Wishart distribution and Dyson's circular ensembles. The primary ingredients of a trained deep learning model are its sets of weights, i.e., the learned parameters; these manifest as matrices produced by the learning dynamics and are then used at so-called inference time. A natural consequence is that such learned matrices can be simulated by random matrices with spectral radius close to unity; a minimal sketch of such a surrogate ensemble follows the list below. This gives us the ability to make generic statements about deep learning systems independent of:
  1. Network architecture (topology).
  2. Learning algorithm. 
  3. Data sizes and type.
  4. Training procedure.
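
As a minimal sketch of this idea (an illustration, not code from the papers listed below), one can draw Gaussian random matrices and rescale them so that their spectral radius is close to unity, treating them as surrogates for learned weight matrices:

import numpy as np

np.random.seed(7)

def surrogate_weight_matrix(n, target_radius=1.0):
    # Gaussian random matrix rescaled so its spectral radius (largest
    # |eigenvalue|) is close to target_radius; a surrogate for a learned layer.
    w = np.random.normal(size=(n, n))
    radius = np.max(np.abs(np.linalg.eigvals(w)))
    return w * (target_radius / radius)

# A surrogate "architecture": an ensemble of layers, independent of topology,
# learning algorithm, data or training procedure.
ensemble = [surrogate_weight_matrix(64) for _ in range(10)]
print([round(float(np.max(np.abs(np.linalg.eigvals(w)))), 3) for w in ensemble])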

Why not the Hessian or the loss landscape, but weight matrices?

There are studies that take the Hessian matrix as the primary object, i.e., the second derivative of the loss with respect to the parameters, and relate it to random matrices. However, this approach only covers properties of the learning algorithm rather than the architecture's inference or learning capacity. For this reason, the weight matrices should be taken as the primary object in any study of random matrix theory in deep learning, as they encode the depth of the network. Similarly, the loss landscape alone cannot capture the capacity of a deep learning architecture.

Conclusion and outlook

In this short exposition, we have tried to stimulate the reader's interest in an exciting set of tools from RMT for deep learning theory and practice. This is still a subject of active research with direct practical relevance. We provide a glossary and a reading list as well.

Further Reading

Papers introducing new mathematical concepts in deep learning are listed here; they come with associated Python code for reproducing the concepts.

Earlier relevant blog posts 

Citing this post

Mehmet Süzen, "A New Matrix Mathematics of Deep Learning: Random Matrix Theory of Deep Learning", 2021, https://science-memo.blogspot.com/2021/07/random-matrix-theory-deep-learning.html

Glossary of New Mathematical Concepts of Deep Learning

Summaries of the definitions of the new mathematical concepts behind this new matrix mathematics.

Spectral Ergodicity: A measure of ergodicity in the spectra of a random matrix ensemble of a given size. Given a set of equal-size matrices drawn from the same ensemble, it is the average deviation of the spectral density of each individual eigenvalue from the ensemble-averaged eigenvalue density. This mimics standard ergodicity, but instead of averaging an observable over states, it measures ergodicity over eigenvalue densities. Written $\Omega_{k}^{N}$ for the $k$-th eigenvalue and matrix size $N$.
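
A rough sketch of the idea (the binning and averaging conventions here are illustrative assumptions, not the exact definition used in the papers below):

import numpy as np

def spectral_ergodicity(matrices, bins=50):
    # Average squared deviation of each matrix's eigenvalue density from the
    # ensemble-averaged density: an Omega-like quantity over the spectrum.
    spectra = [np.abs(np.linalg.eigvals(m)) for m in matrices]
    lo = min(s.min() for s in spectra)
    hi = max(s.max() for s in spectra)
    densities = np.array([np.histogram(s, bins=bins, range=(lo, hi), density=True)[0]
                          for s in spectra])
    mean_density = densities.mean(axis=0)
    return float(np.mean((densities - mean_density) ** 2))

ensemble = [np.random.normal(size=(64, 64)) for _ in range(20)]
print(spectral_ergodicity(ensemble))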

Spectral Ergodicity Distance: A symmetrised distance built from two Kullback-Leibler divergences between two matrix ensembles of different sizes, taken in both directions: $D = KL(N_{a} \| N_{b}) + KL(N_{b} \| N_{a})$.
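
A sketch of the symmetrised distance over eigenvalue histograms of two ensembles (the histogram-based densities and the smoothing constant are illustrative assumptions):

import numpy as np

def kl(p, q, eps=1e-12):
    # Discrete Kullback-Leibler divergence between two normalised histograms.
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

def symmetric_spectral_distance(eigs_a, eigs_b, bins=50):
    # D = KL(N_a || N_b) + KL(N_b || N_a) over shared-range eigenvalue histograms.
    lo, hi = min(eigs_a.min(), eigs_b.min()), max(eigs_a.max(), eigs_b.max())
    pa, _ = np.histogram(eigs_a, bins=bins, range=(lo, hi))
    pb, _ = np.histogram(eigs_b, bins=bins, range=(lo, hi))
    return kl(pa, pb) + kl(pb, pa)

eigs_a = np.abs(np.linalg.eigvals(np.random.normal(size=(64, 64))))
eigs_b = np.abs(np.linalg.eigvals(np.random.normal(size=(128, 128))))
print(symmetric_spectral_distance(eigs_a, eigs_b))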

Mixed Random Matrix Ensemble (MME): A set of matrices drawn from a random ensemble but with different matrix sizes, ranging from $N$ down to 2, where the sizes are determined randomly according to a coefficient of mixture.
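
A sketch of constructing an MME (interpreting the coefficient of mixture as the probability of drawing a smaller-than-$N$ size; that interpretation is an assumption for illustration):

import numpy as np

def mixed_matrix_ensemble(n_max, n_matrices, mixture_coeff=0.5, seed=0):
    # Gaussian matrices whose sizes are either n_max or a random size in
    # [2, n_max), with smaller sizes drawn with probability mixture_coeff.
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n_matrices):
        n = rng.integers(2, n_max) if rng.random() < mixture_coeff else n_max
        ensemble.append(rng.normal(size=(n, n)))
    return ensemble

mme = mixed_matrix_ensemble(n_max=64, n_matrices=10)
print([m.shape[0] for m in mme])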

Periodic Spectral Ergodicity (PSE): A measure of spectral ergodicity for MMEs in which the spectrum of a smaller matrix is placed under periodic boundary conditions, i.e., treated as a cyclic list of eigenvalues that is simply repeated up to $N$ eigenvalues.
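
A sketch of the periodic-boundary trick on a small spectrum (illustrative only):

import numpy as np

def periodic_spectrum(eigs, n_target):
    # Treat the eigenvalue list as cyclic and repeat it up to n_target values,
    # i.e. periodic boundary conditions on the spectrum of a smaller matrix.
    eigs = np.sort(np.abs(eigs))
    reps = int(np.ceil(n_target / len(eigs)))
    return np.tile(eigs, reps)[:n_target]

small = np.linalg.eigvals(np.random.normal(size=(5, 5)))
print(periodic_spectrum(small, 12))  # 12 values built by cycling 5 eigenvalues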

Layer Matrices: The set of learned weight matrices up to a given layer in a deep learning architecture. Convolutional layers are mapped into a matrix, i.e., stacked up.

Cascading Periodic Spectral Ergodicity (cPSE): PSE measured in a feedforward manner over a deep neural network; the ensemble at a given layer is taken to be the layer matrices up to that layer.

Circular Spectral Deviation (CSD): A measure of the fluctuations in spectral density between two ensembles.

Matrix Ensemble Equivalence: If the CSDs vanish for conjugate MMEs, the ensembles are said to be equivalent.

Appendix: Practical Python Example

Complexity measure for deep architectures and random matrix ensembles: cPSE.cpse_measure_vanilla. The Python package Bristol (>= v0.2.12) now supports computing cPSE from a list of matrices; there is no need to put things into a torch model format by default.


!pip install bristol==0.2.12


An example case:


from bristol import cPSE
import numpy as np

np.random.seed(42)
matrices = [np.random.normal(size=(64, 64)) for _ in range(10)]
(d_layers, cpse) = cPSE.cpse_measure_vanilla(matrices)


d_layers is a decreasing vector; it will saturate at some point, and that point is where adding more layers won't improve performance. This is a data-, learning- and architecture-independent measure.

Only a French word can express the excitement here: Voilà!





Sunday, 27 December 2020

Statistical Physics Origins of Connectionist Learning:
Cooperative Phenomenon to Ising-Lenz Architectures

This is an informal essay aiming at raising awareness that statistical physics played a foundational role in deep learning and neural networks in general, going beyond a mere analogy to being their origin.

An article version of this post is available here: doi, and on HAL Open Science.

Preamble

A short, informal account of the origins of the mathematical formalism of neural networks is presented for physicists and computer scientists in a basic discrete-mathematical setting. The mathematical formalisms for the dynamics of lattice models in statistical physics and for learning internal representations in neural networks as discrete architectures have evolved as quantitative tools in two almost distinct fields for more than half a century, with limited overlap. We aim to bridge this gap by claiming that the analogy between the two approaches is not artificial but arises naturally from how the modelling of cooperative phenomena is constructed. We define the Lenz-Ising architectures (ILAs) for this purpose.

Introduction


Figure: Tartan Ising Model
(Linas Viptas, Wikipedia)
Understanding natural or artificial phenomena in the language of discrete mathematics is probably one of the most powerful toolboxes scientists use [1]. A large portion of computer science and statistical physics deals with such finite structures. Among the most prominent and successful uses of this approach are Lenz and Ising's work on modelling ferromagnetic materials [2–5] and neural networks as models of biological neuronal structures [6–8].

The analogy between these two distinct areas of research has been pointed out by many researchers [9–13]. However, the discourse and evolution of the two approaches were kept as two distinct research fields, and many innovative approaches were rediscovered under different names.

Cooperative Phenomenon

The statistical definition of cooperative phenomena was pioneered by Wannier and Kramers [14–16]. Even though their technical work focused on the extension of the Ising model to 2D with cyclic boundary conditions and the introduction of exact solutions via matrix algebra, they were the first to document how the Lenz-Ising model actually represents a system far more generic than a mere model of ferromagnets: anything that falls under cooperative phenomena can be addressed with a Lenz-Ising type model, as summarised in Definition 1.

Definition 1: Cooperative phenomenon of Wannier type [14]: A set of $N$ discrete units, $\mathscr{U}$, each identified with a function $s_{i}$, $i = 1, \ldots, N$, forms a collection or assembly. The function that identifies the units is a mapping $s_{i}: \mathbb{R} \rightarrow \mathbb{R}$. A statistic $\mathscr{S}$ applied on $\mathscr{U}$ is called a cooperative phenomenon of Wannier type, $\mathscr{W}$.

A statistic $\mathscr{S}$ can be any mapping or set of operations on the assembly of units $\mathscr{U}$. For example, inducing an ordering on the assembly of units and summing over the $s_{i}$ values would correspond to a non-interacting magnetic system in a unit external field, or to a non-connected set of neurons with the capacity for inhibition or excitation; a minimal sketch of such a statistic is given below. Remarkably, Definition 1 is so generic that Rosenblatt's perceptron [17], current deep learning systems [18] and complex networks [19] fall into this category as well.
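
A minimal sketch of Definition 1 (an illustration, not from Wannier's paper): an assembly of binary units and a simple summation statistic, i.e., the non-interacting magnetisation mentioned above.

import numpy as np

rng = np.random.default_rng(0)
N = 100
units = rng.choice([-1, 1], size=N)  # the assembly U, with unit functions s_i

def statistic(assembly):
    # The statistic S: here simply the sum over s_i values (a "magnetisation").
    return int(assembly.sum())

print(statistic(units))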

The originality of the cooperative phenomenon of Wannier type rests on a secondary concept, so-called event propagation, given in Definition 2.

Definition 2: Event propagation [14]: An event is defined as a snapshot of a cooperative phenomenon of Wannier type $\mathscr{W}$. If an event takes place in one unit of the assembly $\mathscr{U}$, the same event will be favoured by the other units. This is expressed as event propagation between two disjoint sets of units, $\mathscr{E}(u_{1}, u_{2})$, with $u_{1} \cap u_{2} = \varnothing$ and $u_{1}, u_{2} \subset \mathscr{U}$, together with an additional statistic $\mathscr{S}$.

The parallels between Wannier's event propagation and the neural network formalism defined by McCulloch-Pitts-Kleene [6,7] are remarkable: not only is the concept the same, but the mathematical treatment is identical, and it originates from the Lenz-Ising model's treatment of discrete units. As mentioned, this is beyond doubt not a simple analogy but forms a generic framework, as envisioned by Wannier. The similarity between ferromagnetic systems and neural networks was probably first documented directly by Little [8]: the spin states of magnetic spins correspond to the firing states of a neuron. Unfortunately, Little saw this only as a simple analogy, and missed the opportunity provided by Wannier's view of cooperation as a generic natural phenomenon.

The conceptual similarity and the inference embodied in Wannier's event propagation appear to be quite close to Hebbian learning [20], and give a natural justification for backpropagation in multilayered networks. The history of backpropagation is exhaustively studied elsewhere [18].

Lenz-Ising Architectures (ILAs): Ferromagnets to Nerve Nets


Figure: Ernst Ising (image owner: APS - Physics Today, Obituary)
Having established two basic definitions of cooperative phenomena, we can now define a generic setting of the Lenz-Ising model that captures both the physics literature, which uses it extensively in so-called spin-glass research, and neural networks. The guiding principle is Wannier's definition of a cooperative phenomenon.

Definition: Lenz-Ising Architectures (ILAs)
Given a Wannier-type cooperative phenomenon $\mathscr{W}$, impose the constraint on the discrete units, $\mathscr{U}^{c}$, that they be spatially correlated on the edges $E$ of an arbitrary graph $\mathscr{G}(E, V)$ with an ordering, with the vertices $V$ of the graph carrying coupling weights between connected pairs of units, together with biases. A set of event propagations $\mathscr{E}^{c}$ defined on the cooperative phenomenon can induce dynamics that define the weights, or vice versa. ILAs are defined as a statistic $\mathscr{S}$ applied to $\mathscr{U}^{c}$ with propagations $\mathscr{E}^{c}$.

Lenz-Ising Architectures (ILAs) should not be confused with graph neural networks, as they do not model data structures. They could be seen as a subset of graph dynamical systems in some sense, but formal connections should be established elsewhere. The primary characteristic of ILAs is that they are a conceptual and mathematical representation of spin-glass systems (including Lenz-Ising, Anderson, Sherrington-Kirkpatrick and Potts systems) and neural networks (including recurrent and convolutional networks) under the same umbrella. A minimal sketch of an ILA-style energy follows.
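
A minimal sketch of an Ising-Lenz energy on an arbitrary set of connected units (an illustration under assumed conventions: symmetric couplings on connections, biases on units, spins in {-1, +1}); the same object, with spins read as activations, underlies Hopfield networks and Boltzmann machines.

import numpy as np

rng = np.random.default_rng(1)
n_units = 6
J = rng.normal(size=(n_units, n_units))
J = (J + J.T) / 2.0           # symmetric coupling weights between connected units
np.fill_diagonal(J, 0.0)      # no self-coupling
h = rng.normal(size=n_units)  # biases (external fields) on the units
s = rng.choice([-1, 1], size=n_units)  # discrete units s_i

def energy(s, J, h):
    # E(s) = -1/2 * sum_ij J_ij s_i s_j - sum_i h_i s_i
    return float(-0.5 * s @ J @ s - h @ s)

print(energy(s, J, h))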

 Learning representations inherent in Metropolis-Glauber dynamics

The primary originality in most neural network research papers lies in so-called learning of representations from data and generalisation. However, it is not obvious to that community that spin-glasses are actually capable of learning representations inherently, by construction, through induced dynamics such as Metropolis or Glauber dynamics, posed as an inverse problem.

In the physics literature this appears as the problem of how to express the free energy and minimise it with respect to the weights, or coupling coefficients. This is nothing but learning representations. Usually a simulation approach is taken, for example Monte Carlo techniques [5, 21, 22] via Metropolis or Glauber dynamics; a minimal single-spin-flip sketch is given below. The intimate connection between the concepts of ergodicity and learning in deep learning has recently been shown [13, 23, 24] in this context.
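
A minimal single-spin-flip Metropolis sketch on the same kind of system (assumed conventions: symmetric couplings, biases, unit temperature; illustrative, not taken from the cited papers):

import numpy as np

rng = np.random.default_rng(2)

def metropolis_step(s, J, h, beta=1.0):
    # Propose flipping one randomly chosen spin; accept with the Metropolis rule.
    i = rng.integers(len(s))
    delta_e = 2.0 * s[i] * (J[i] @ s + h[i])  # energy change of flipping spin i
    if delta_e <= 0 or rng.random() < np.exp(-beta * delta_e):
        s[i] = -s[i]
    return s

n = 16
J = rng.normal(size=(n, n)); J = (J + J.T) / 2.0; np.fill_diagonal(J, 0.0)
h = rng.normal(size=n)
s = rng.choice([-1, 1], size=n)
for _ in range(1000):
    s = metropolis_step(s, J, h)
print(s)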

Figure: Roy J. Glauber (Wikipedia); Glauber dynamics

As we argued earlier regarding Wannier's generic definition of cooperative phenomena and the ILAs, there is an intimate connection between learning and the so-called solving of spin-glasses, which usually boils down to computing free energies, as mentioned. As a link between the two fields, computing backpropagation and computing free energies are natural candidates for establishing equivalence relations.

Conclusions and Outlook

Apart from honouring the physicists Lenz and Ising, and based on our understanding of the origins of cooperative phenomena, naming the research output of spin-glasses and neural networks under the umbrella term Lenz-Ising architectures (ILAs) is historically accurate and technically a reasonable naming scheme, given the overwhelming evidence in the literature. This is akin to naming current computers as von Neumann architectures. This constitutes the statistical-physics origin of connectionist learning, an approach currently enjoying vast engineering success.

The rich connection between the two areas in computer science and statistical physics should be celebrated. For more fruitful collaborations, the two literatures, embracing the large statistics literature as well, should converge much more closely. This would help the communities avoid the awkward situation of reinventing the wheel and of hindering recognition of work done by physicists decades earlier, i.e., Ising and Lenz.

 Notes

No competing or other kind of conflict of interest exists. This work is produced solely as scholarly work and is not of a personal nature. This essay is dedicated to the memory of Ernst Ising for his contribution to the physics of ferromagnetic materials, which now appears to have far wider implications.

References

[1] Kenneth H Rosen. Handbook of Discrete and Combinatorial Mathematics. CRC Press, 1999. 

[2] W. Lenz. Beitrag zum Verständnis der magnetischen Erscheinungen in festen Körpern. Phys. Z., 21:613, 1920.

[3] Ernst Ising. Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik, 31(1):253–258, 1925.

[4] Thomas Ising, Reinhard Folk, Ralph Kenna, Bertrand Berche, and Yurij Holovatch. The fate of Ernst Ising and the fate of his model. arXiv preprint arXiv:1706.01764, 2017.

[5] David P Landau and Kurt Binder. A guide to Monte Carlo Simulations in Statistical Physics. Cambridge University Press, 2014.

[6] W.S. McCulloch and W.H. Pitts. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. Math. Biophys., 5:115–133, 1943.

[7] Stephen Cole Kleene. Representation of Events in Nerve Nets and Finite Automata. Technical report, RAND Project, Santa Monica, 1951.

[8] W. A. Little. The Existence of Persistent States in the Brain. Mathematical Biosciences, 19(1-2):101–120, 1974.

[9] P Peretto. Collective Properties of Neural Networks: a Statistical Physics Approach. Biological Cybernetics, 50(1):51–62, 1984.

[10] Jan L van Hemmen. Spin-glass Models of a Neural Network. Physical Review A, 34(4):3435, 1986.

[11] Haim Sompolinsky. Statistical Mechanics of Neural Networks. Physics Today, 41(21):70–80, 1988.

[12] David Sherrington. Neural Networks: the Spin Glass Approach. In North-Holland Mathematical Library, volume 51, pages 261–291. Elsevier, 1993.

[13] Yasaman Bahri, Jonathan Kadmon, Jeffrey Pennington, Sam S Schoenholz, Jascha Sohl-Dickstein, and Surya Ganguli. Statistical Mechanics of Deep Learning. Annual Review of Condensed Matter Physics, 2020.

[14] Gregory H Wannier. The Statistical Problem in Cooperative Phenomena. Reviews of Modern Physics, 17(1):50, 1945.

[15] Hendrik A Kramers and Gregory H Wannier. Statistics of the two-dimensional ferromagnet. Part I. Physical Review, 60(3):252, 1941.

[16] Hendrik A Kramers and Gregory H Wannier. Statistics of the two-dimensional ferromagnet. Part II. Physical Review, 60(3):263, 1941.

[17] C van der Malsburg. Frank Rosenblatt: principles of neurodynamics: perceptrons and the theory of brain mechanisms. In Brain theory, pages 245–248. Springer, 1986.

[18] J. Schmidhuber. Deep learning in Neural Networks: An overview. Neural networks, 61:85–117, 2015. & Yoshua Bengio, Yann Lecun, Geoffrey Hinton, Communications of the ACM, July 2021, Vol. 64 No. 7, Pages 58-65 (2021) link

[19] Duncan J Watts and Steven H Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440, 1998.

[20] Donald Olding Hebb. The Organization of Behavior: a Neuropsychological Theory. J. Wiley; Chapman & Hall, 1949.

[21] Mehmet Suezen. Effective ergodicity in single-spin-flip dynamics. Physical Review E, 90(3):032141, 2014.

[22] Mehmet Suezen. Anomalous diffusion in convergence to effective ergodicity. arXiv preprint arXiv:1606.08693, 2016.

[23] Mehmet Suezen, Cornelius Weber, and Joan J Cerda. Spectral ergodicity in deep learning architectures via surrogate random matrices. arXiv preprint arXiv:1704.08303, 2017.

[24] Mehmet Suezen, JJ Cerda, and Cornelius Weber. Periodic Spectral Ergodicity: A Complexity Measure for Deep Neural Networks and Neural Architecture Search. arXiv preprint arXiv:1911.07831, 2019.


Postscript 1:

(Deep) Machine learning as a subfield of statistical physics

Researchers often place certain machine learning methods under umbrella terms different from their established statistical physics counterparts. However, beyond being a mere analogy, the correspondence between these methods is quite striking; see the mapping below and the sketch that follows it. Consequently, there is a strong tradition of machine learning practice being a sub-field of statistical physics, with an explicit classification within PACS.

Hopfield Networks <- Ising-Lenz model
Boltzmann Machines <- Sherrington-Kirkpatrick model
Diffusion Models <- Langevin Dynamics, Fokker-Planck Dynamics
Softmax <- Boltzmann-Gibbs connection to partition function 
Energy Based Models <- Spin-glasses, Hamiltonian dynamics
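
As a concrete illustration of the "Softmax <- Boltzmann-Gibbs" entry (treating temperature as a free parameter is an assumption here): softmax over scores is a Boltzmann distribution over energies E_i = -score_i, normalised by the partition function Z.

import numpy as np

def softmax(scores, temperature=1.0):
    # Boltzmann factors exp(-E_i / T) with E_i = -score_i, divided by the
    # partition function Z = sum_i exp(-E_i / T).
    energies = -np.asarray(scores, dtype=float)
    weights = np.exp(-energies / temperature)
    return weights / weights.sum()

print(softmax([2.0, 1.0, 0.1]))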

For this reason, we provide semi-formal mathematical definitions in the recent article, establishing that deep learning architectures should be called Ising-Lenz Architectures (ILAs), akin to calling current computers von Neumann architectures.

(c) Copyright 2008-2024 Mehmet Suzen (suzen at acm dot org)

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.