We show how any dataset of any modality (time series, images, sound...) can be approximated by a well-behaved (continuous, differentiable...) scalar function with a single real-valued parameter. Building on elementary concepts from chaos theory, we take a pedagogical approach and demonstrate how to adjust this parameter so as to fit every sample of the data to arbitrary precision. Targeting an audience of data scientists with a taste for the curious and unusual, the results presented here extend previous observations about the expressive power and generalization of machine learning models.
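The core idea behind such single-parameter fits, as in Piantadosi's "One parameter is always enough" (AIP Advances, 2018), is that a chaotic map such as the dyadic map x → 2x mod 1 shifts the binary expansion of its argument one bit to the left, so an entire dataset can be packed into the bits of one real number and read back by iterating the map. The sketch below illustrates the bit-packing principle only; it is a hypothetical, finite-precision stand-in (quantizing each sample to `k` bits and using exact rational arithmetic), not the paper's actual smooth construction.

```python
from fractions import Fraction

def encode(samples, k=16):
    """Pack the first k bits of each sample (each in [0, 1)) into a
    single rational parameter alpha in [0, 1)."""
    bits = 0
    for y in samples:
        bits = (bits << k) | int(y * 2**k)  # quantize to k bits, append
    return Fraction(bits, 2 ** (k * len(samples)))

def decode(alpha, i, k=16):
    """Recover sample i: multiplying by 2**((i+1)*k) applies the dyadic
    map (i+1)*k times, shifting sample i's bits into the integer part;
    the low k bits of that integer part are the quantized sample."""
    shifted = alpha * 2 ** ((i + 1) * k)
    return (int(shifted) % 2**k) / 2**k

data = [0.25, 0.7071, 0.125]
alpha = encode(data)                       # one number encodes the whole dataset
recovered = [decode(alpha, i) for i in range(len(data))]
# each recovered value matches its original to within 2**-16
```

Increasing `k` tightens the fit without changing the parameter count, which is exactly why a one-parameter model can "memorize" anything: the information has simply moved into the precision of the parameter rather than its dimensionality.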