For a problem at that level there is no need to go all the way to machine learning; least squares, a completely basic statistical method that has been around for roughly two hundred years, can handle it quite well.
```python
>>> import statsmodels.api as sm
>>> import numpy as np
>>> x = np.linspace(-5, 5, 100)
>>> y1 = 3*x + 10 + np.random.normal(size=100)
>>> y2 = 10*x**2 - 5 + np.random.normal(size=100)
>>> y3 = 2*x**3 - 15 + np.random.normal(size=100)
>>> X = np.vstack([x, x**2, x**3]).T
>>> X = sm.add_constant(X)
>>> res = sm.OLS(y1, X).fit()
>>> print(res.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       0.985
Model:                            OLS   Adj. R-squared:                  0.984
Method:                 Least Squares   F-statistic:                     2077.
Date:                Wed, 18 Dec 2019   Prob (F-statistic):           3.87e-87
Time:                        19:13:24   Log-Likelihood:                -150.03
No. Observations:                 100   AIC:                             308.1
Df Residuals:                      96   BIC:                             318.5
Df Model:                           3
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          9.8967      0.166     59.586      0.000       9.567      10.226
x1             2.9309      0.095     30.864      0.000       2.742       3.119
x2             0.0047      0.015      0.321      0.749      -0.024       0.034
x3             0.0043      0.006      0.759      0.450      -0.007       0.016
==============================================================================
Omnibus:                        1.790   Durbin-Watson:                   2.196
Prob(Omnibus):                  0.409   Jarque-Bera (JB):                1.692
Skew:                          -0.314   Prob(JB):                        0.429
Kurtosis:                       2.886   Cond. No.                         73.3
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
>>> res = sm.OLS(y2, X).fit()
>>> print(res.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       1.000
Model:                            OLS   Adj. R-squared:                  1.000
Method:                 Least Squares   F-statistic:                 1.524e+05
Date:                Wed, 18 Dec 2019   Prob (F-statistic):          2.28e-176
Time:                        19:15:05   Log-Likelihood:                -151.74
No. Observations:                 100   AIC:                             311.5
Df Residuals:                      96   BIC:                             321.9
Df Model:                           3
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const         -5.0743      0.169    -30.034      0.000      -5.410      -4.739
x1             0.0078      0.097      0.081      0.936      -0.184       0.200
x2            10.0157      0.015    676.141      0.000       9.986      10.045
x3            -0.0015      0.006     -0.264      0.793      -0.013       0.010
==============================================================================
Omnibus:                        1.737   Durbin-Watson:                   1.905
Prob(Omnibus):                  0.420   Jarque-Bera (JB):                1.769
Skew:                           0.270   Prob(JB):                        0.413
Kurtosis:                       2.634   Cond. No.                         73.3
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
>>> res = sm.OLS(y3, X).fit()
>>> print(res.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       1.000
Model:                            OLS   Adj. R-squared:                  1.000
Method:                 Least Squares   F-statistic:                 3.375e+05
Date:                Wed, 18 Dec 2019   Prob (F-statistic):          6.13e-193
Time:                        19:15:25   Log-Likelihood:                -136.43
No. Observations:                 100   AIC:                             280.9
Df Residuals:                      96   BIC:                             291.3
Df Model:                           3
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const        -15.2334      0.145   -105.081      0.000     -15.521     -14.946
x1             0.0036      0.083      0.044      0.965      -0.161       0.168
x2             0.0209      0.013      1.642      0.104      -0.004       0.046
x3             1.9974      0.005    402.283      0.000       1.988       2.007
==============================================================================
Omnibus:                        0.071   Durbin-Watson:                   2.259
Prob(Omnibus):                  0.965   Jarque-Bera (JB):                0.117
Skew:                          -0.060   Prob(JB):                        0.943
Kurtosis:                       2.883   Cond. No.                         73.3
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
```
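In each fit, OLS recovers the true coefficients almost exactly (9.90 for the intercept 10, 2.93 for the slope 3, 10.02 for the quadratic coefficient 10, and so on), while the terms absent from the true model get coefficients near zero with large p-values. You also don't have to read these off the summary by eye; a minimal sketch, continuing the session above with the y3 fit:

```python
>>> res.params   # fitted coefficients (const, x1, x2, x3): about -15.23, 0.004, 0.021, 2.00
>>> res.pvalues  # per-term p-values; the x and x**2 terms are nowhere near significant
```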
If you know the data are "one of a linear, quadratic, or cubic function," or at most "a sum of those plus noise," then an optimal method falls out of the assumptions on the distributions of the variables (though sometimes the problem is too complex or too ill-conditioned for that to work), so machine learning probably has no part to play. Machine learning is what barges in when you can't set up such assumptions cleanly.
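To make that contrast concrete, here is a minimal sketch of the assumption-free route (my addition; the original session doesn't use scikit-learn): a random forest fits the cubic data reasonably well without ever being told a functional form, at the cost of the interpretable coefficients the OLS fit gave us.

```python
# Sketch: regression with no functional-form assumption, via scikit-learn
# (illustrative addition; not part of the original session).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

x = np.linspace(-5, 5, 100)
y3 = 2*x**3 - 15 + np.random.normal(size=100)

# The forest sees only raw x: no x**2 or x**3 columns, no model assumptions.
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(x.reshape(-1, 1), y3)

print(forest.predict([[2.0]]))  # true value is 2*2**3 - 15 = 1; should land nearby
```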
If you want to try learning machine learning casually, this is the book. It gives you a rough grasp of the overall picture, so it's well suited to beginners.
[第2版]Python機械学習プログラミング 達人データサイエンティストによる理論と実践 - インプレスブックス (the Japanese edition of Sebastian Raschka's Python Machine Learning, 2nd ed., published by Impress)
That said, finishing it within winter break is probably a stretch.