Sunday 7 March 2021

A critical look at why deployed machine learning model performance degrades quickly

Illustration of William of Ockham (Wikipedia)
One of the major problems in using a machine learning model, usually a supervised model, in deployment, meaning it serves new data points that were not in the training or test set, is that modellers or data scientists observe, with great astonishment, that the model's performance degrades quickly, or that it does not perform as well as it did on the test set. We earlier argued that underspecification is not the main cause. Here we propose that the primary reason for such performance degradation lies in relying solely on the hold-out method to judge generalisation performance.
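As a toy illustration of this gap (my own sketch, not from the original argument; it assumes numpy and scikit-learn are installed), the snippet below trains a classifier that scores well on an i.i.d. hold-out split, and then evaluates it on simulated deployment data whose labelling rule has drifted. The drifted threshold of 1.0 is an arbitrary choice made purely for illustration.

```python
# Sketch: good hold-out score, degraded score on drifted "deployment" data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Historical data: label is 1 when x0 + x1 exceeds 0.
X = rng.normal(size=(5000, 2))
y = (X[:, 0] + X[:, 1] > 0.0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Simulated deployment data: the decision threshold has drifted to 1.0, so the
# relationship the model learned is no longer the one generating the labels.
X_new = rng.normal(size=(2000, 2))
y_new = (X_new[:, 0] + X_new[:, 1] > 1.0).astype(int)

print("hold-out accuracy  :", accuracy_score(y_test, model.predict(X_test)))
print("deployment accuracy:", accuracy_score(y_new, model.predict(X_new)))
```

The hold-out score stays high because the test split follows the training distribution; the deployment score drops because it does not.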

Why does model test performance not carry over to deployment? Understanding overfitting

A major contributing factor is the inaccurate meme of overfitting, which actually means overtraining, and the erroneous habit of tying overtraining solely to generalisation. This was discussed earlier here as understanding overfitting. Overfitting is not about how well the function approximation of the same "model" works on one subset of the dataset compared to another. Hence, the hold-out method (train/test split) of measuring performance does not provide sufficient and necessary conditions for judging a model's generalisation ability: with this approach we can detect neither overfitting (in the Occam's razor sense) nor the deployment performance.
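A small sketch of this point (again my own illustration, assuming scikit-learn): a simple model and a far more complex one can reach comparable train/test scores on the same data, so the hold-out comparison alone gives no signal that the complex one uses more capacity than the problem requires, which is overfitting in the Occam's razor sense.

```python
# Sketch: hold-out scores cannot distinguish a parsimonious model from an
# unnecessarily complex one on a (nearly) linearly separable problem.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Nearly linearly separable data with a little label noise.
X = rng.normal(size=(4000, 5))
signal = X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0])
y = (signal + 0.3 * rng.normal(size=4000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
complex_model = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_train, y_train)

for name, m in [("logistic regression", simple), ("500-tree random forest", complex_model)]:
    print(f"{name:24s} test accuracy: {accuracy_score(y_test, m.predict(X_test)):.3f}")
# Both scores come out comparable; the train/test split says nothing about which
# model is the more economical description of the data.
```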

How to mimic deployment performance?

This depends on the use case, but the most promising approaches lie in adaptive analysis and in detecting distribution shifts and building models accordingly. However, the answer to this question is still an open research problem.
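One concrete ingredient of such an approach is monitoring incoming features for drift relative to the training data. The sketch below (my own, assuming numpy and scipy; the significance level ALPHA is an illustrative choice, not a recommendation) applies a per-feature two-sample Kolmogorov-Smirnov test between training features and live features.

```python
# Sketch: flag features whose deployment distribution has drifted away from
# the training distribution, using a two-sample KS test per feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

X_train = rng.normal(loc=0.0, scale=1.0, size=(5000, 3))              # reference (training) features
X_live = rng.normal(loc=[0.0, 0.8, 0.0], scale=1.0, size=(1000, 3))   # feature 1 has drifted

ALPHA = 0.01  # significance level for flagging drift (illustrative)
for j in range(X_train.shape[1]):
    stat, p_value = ks_2samp(X_train[:, j], X_live[:, j])
    flag = "DRIFT" if p_value < ALPHA else "ok"
    print(f"feature {j}: KS statistic = {stat:.3f}, p-value = {p_value:.3g} -> {flag}")
# A flagged feature would trigger retraining or an adapted model, in the spirit
# of detecting distribution shifts and building models accordingly.
```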


(c) Copyright 2008-2024 Mehmet Suzen (suzen at acm dot org)

This work is licensed under a Creative Commons Attribution 4.0 International License.