Monday, 30 November 2020

Re-discovery of Inverse problems: What is underspecification for machine learning models?

Johann Radon, founder of inverse problems (Wikipedia)

This has been a very well-known concept in communities from geophysics to image reconstruction for many decades. Underspecification stems from Hadamard's definition of a well-posed problem: a solution should exist, be unique, and depend continuously on the data, and underspecification is essentially a failure of uniqueness. It isn't a new problem. If you do research on underspecification for machine learning, please make sure the relevant literature on ill-posed problems is studied well before making strong statements. It would be helpful and would prevent reinventing the wheel.
  

One technique everyone is aware of is L2 regularisation; it reduces the ill-posedness of machine learning models. In the context of why a deployed model's performance degrades over time, ill-posedness plays a role, but it isn't the sole reason. There is a large literature on inverse problems dedicated to solving these issues, and if underspecification were the sole issue behind deployed machine learning systems degrading over time, we would have reduced the performance degradation by applying strong L1 regularisation to reduce "the feature selection bias", hence lowering the effect of underspecification. Especially in deep learning models, underspecification shouldn't be an issue, due to the representation learning that deep learning models bring naturally, given the inputs cover the basic learning space.
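As a side note (not in the original post), here is a minimal numpy sketch of the point above: L2 (Tikhonov/ridge) regularisation stabilises an ill-conditioned linear inverse problem where plain least squares blows up. All names and values here are illustrative.

    import numpy as np

    rng = np.random.default_rng(42)

    # An ill-conditioned forward problem: two nearly collinear columns make
    # the unregularised solution unstable (ill-posed in Hadamard's sense).
    n = 50
    x = rng.normal(size=n)
    A = np.column_stack([x, x + 1e-6 * rng.normal(size=n)])
    w_true = np.array([1.0, 1.0])
    y = A @ w_true + 0.01 * rng.normal(size=n)

    # Unregularised least squares: coefficients typically explode with
    # opposite signs, as noise is amplified along the near-null direction.
    w_ls = np.linalg.lstsq(A, y, rcond=None)[0]

    # L2 (Tikhonov/ridge) regularisation: solve (A^T A + lam * I) w = A^T y.
    lam = 1e-3
    w_ridge = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ y)

    print("least squares:", w_ls)     # unstable, very large coefficients
    print("ridge        :", w_ridge)  # close to the well-behaved [1, 1]

This mechanism is why L1 or L2 penalties are the standard first remedy against ill-posedness; the point of the paragraph above is that they are not sufficient to explain, or fix, deployment-time degradation.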





Saturday, 14 November 2020

Shannon's Entropy: Why is it called Entropy?

 

Ludwig Boltzmann
The story (legend) goes like this: Shannon asked von Neumann what he thought of his new measure, and von Neumann suggested that Shannon call it entropy.

Shannon's entropy is actually a toy version of Boltzmann's entropy. It is a toy version because it only considers the configurational entropy of discrete objects, without actually describing microstates. The more interesting connection, which almost no one knows, is that Birkhoff's ergodic theorem legitimised Shannon's entropy, as his version of ergodicity is the toy version of Boltzmann's. Gibbs's contribution comes from a different angle, and why von Neumann omitted it is an interesting question.
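To make the "toy version" statement concrete (a short derivation added here, not part of the original post): over W equally probable discrete configurations, Shannon's entropy reduces to Boltzmann's counting formula up to the constant k_B,

    S_B = k_B \ln W, \qquad H(p) = -\sum_{i=1}^{W} p_i \log p_i,

and for the uniform, purely configurational case p_i = 1/W,

    H(p) = -\sum_{i=1}^{W} \frac{1}{W} \log \frac{1}{W} = \log W = \frac{S_B}{k_B}.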

Shannon's entropy should be called the von Neumann-Boltzmann-Shannon entropy, not only Shannon's, maybe with Birkhoff added to the team.


Cite as 

 @misc{suezen20sew, 
     title = {Shannon's Entropy: Why is it called Entropy?}, 
     howpublished = {\url{https://science-memo.blogspot.com/2020/11/shannons-entropy-why-it-is-called.html}}, 
     author = {Mehmet Süzen},
     year = {2020}
}


Postscripts

  • Ergodicity is an intricate subject: Boltzmann's and Birkhoff's approaches differ.
  • Jaynes studied the connection extensively, and his interpretation was similar: he called the von Neumann-Shannon expression "a more primitive concept", using statistical mechanical ideas as a mathematical tool for statistical inference. See his papers I and II.


(c) Copyright 2008-2024 Mehmet Suzen (suzen at acm dot org)

This work is licensed under a Creative Commons Attribution 4.0 International License.