Showing posts with label evolutionary computation. Show all posts

Sunday, 16 November 2025

Why the simplest explanation is always the best


Preamble

The simplest explanation is always the best among the explanations that are representative. This is the principle of parsimony, Occam's razor, and it is the bedrock of scientific enlightenment. Recently, people have begun to make a category error and discard this principle, based on a setting that leads to misunderstanding: if the simplest model explains the data in one representation but another representation requires a more complex model, then we choose the more complex model. This is obviously wrong. The simplest explanation is to be chosen from among the models that capture the complexity of the given representation, not the simplest over both representations. Filter the explanations first based on the representations; selection follows. Here we show the core idea via an illustrative example.

Figure: A circle has zero Pearson correlation. (Wikipedia)

Revisiting Occam’s Razor: A case of correlation and geometry


In order to understand this category error, we will work through a concrete example. Say we have a dataset with a circular shape, $\mathscr{D}(x, y)$, as in the figure. We have three models:

$$\mathscr{M}_{1} : y = a x + b $$
$$\mathscr{M}_{2} : y = \sqrt{1-x^{2}} $$
$$\mathscr{M}_{3} : y \sim \mathrm{NN}(x)  $$

And let $\mathscr{U}$ be a utility function, the Pearson correlation $C(x, y)$. We use the utility both as a performance measure for ranking and as a measure of representation. $\mathscr{M}_{3}$ is a neural network with a large number of parameters.
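As a minimal numerical sketch of the utility $\mathscr{U}$, we can sample the circular dataset and compute the Pearson correlation $C(x, y)$ with NumPy (the sampling scheme and sample size here are illustrative assumptions, not part of the original setting):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points uniformly on the unit circle: the dataset D(x, y).
theta = rng.uniform(0.0, 2.0 * np.pi, size=1000)
x, y = np.cos(theta), np.sin(theta)

# Pearson correlation C(x, y) as the utility U: near zero for a circle.
r = np.corrcoef(x, y)[0, 1]
print(f"Pearson correlation on the circle: {r:.3f}")
```

A flat line fitted by $\mathscr{M}_{1}$ reproduces this near-zero correlation just as well as the other models, which is why, under this utility alone, all three models look equally good.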

There is no error in choosing $\mathscr{M}_{1}$ based on the similar correlations these models produce; the principle of parsimony is not violated at all. This is correct if we are only considering the numerical representation: Pearson correlation works well as a utility for a purely numerical representation. What about a geometric representation? Then we need to change our utility function (representation measure or performance function).

Say we instead use curvature as a utility, $\kappa(x, y)$. In this case $\mathscr{M}_{1}$ fails to capture curvature and is filtered out before Occam's razor can be applied. We are then left with $\mathscr{M}_{2}$ and $\mathscr{M}_{3}$.
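The filtering step can be sketched numerically: a straight line has zero curvature everywhere, while the unit circle has constant curvature $\kappa = 1/r = 1$, so $\mathscr{M}_{1}$ cannot match the geometric utility. A minimal sketch, assuming a parametric curvature formula and illustrative line coefficients:

```python
import numpy as np

def curvature(x, y, t):
    """Curvature of a parametric curve (x(t), y(t)) via finite differences."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

t = np.linspace(0.0, 2.0 * np.pi, 2000)

# M1: a straight line y = a x + b (a, b chosen arbitrarily) is flat everywhere.
k_line = curvature(t, 0.5 * t + 1.0, t)

# The unit circle has constant curvature 1/r = 1.
k_circle = curvature(np.cos(t), np.sin(t), t)

print(f"line curvature (max): {k_line.max():.4f}")
print(f"circle curvature (mean): {k_circle.mean():.4f}")
```

Under the utility $\kappa$, the line's curvature of zero cannot be reconciled with the data's constant curvature, so it is removed from the candidate set before parsimony is invoked.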

Correlation and geometric explanations are two different things. Two vastly different geometries can produce the same correlation, and a model can be quite good at explaining correlation yet fail to capture geometric complexity. In this setting, it does not mean that Occam's razor is wrong. We need to apply Occam's razor per representation: numeric, geometric, algebraic, or symbolic, depending on the purpose. Always keep the purpose or utility of the model in mind when invoking Occam's razor. The simplest explanation over the relevant representations is the best.

On the utility, performance and representations measure

Performance functions, representation measures, and utility functions can all be different in real life; here, for illustration purposes, we use them interchangeably.

Conclusion

Nature minimises cost over complexity, but under utility constraints. A minimal-cost model that does not satisfy the utility or representation measure won't be chosen, despite being the simplest. We need to filter first, based on the utility or representation measure, before applying Occam's razor. The simplest explanation is always the best among the explanations that are representative.



 Cite as 

 @misc{suzen25occam, 
     title = { Why the simplest explanation is always the best}, 
     howpublished = {\url{https://science-memo.blogspot.com/2025/11/simplest-explanation-always-best.html}}, 
     author = {Mehmet Süzen},
     year = {2025}
}  



Saturday, 20 March 2021

Computable function analogs of natural learning and intelligence may not exist


Optimal learning : Meta-optimization

Many papers directly equate the "machine" learning problem, i.e., algorithmic learning as opposed to human or animal learning, with an optimisation problem. Unfortunately, contrary to common belief, machine learning is not an optimisation problem. For example, take an optimal learning strategy: if we replace learning with optimisation, at some point we end up with the absurd term "optimal optimisation strategy".

Turing machine (Wikipedia)
It sounds like machine learning, as practiced, is a meta-optimisation problem rather than learning as humans do.

Computable functions to learning

Fundamentally, we do not know how human learning can be mapped into an algorithm, whether there are computable-function analogs of human learning, or whether human intelligence and its artificial analogs can be represented in a Turing-computable manner.

Sunday, 27 November 2011

Biologically Inspired Algorithms for Financial Modelling

Biologically inspired algorithms (BIAs) have a special place in heuristic modelling across many fields of computational science, for example in materials simulations, AI, and operational research, among others. A recent book covers the area of financial modelling using BIAs. The chapters on corporate failure prediction using grammatical evolution and ant colony optimisation are quite interesting. I think the topic of failure prediction is closely related to the resilience of a financial institution; see my earlier blog entry.
(c) Copyright 2008-2024 Mehmet Suzen (suzen at acm dot org)

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.