TensorFlow
TensorFlow Basics 16 - Predicting car fuel efficiency with the AutoMPG dataset (standardization)
코딩탕탕
2022. 12. 2. 11:20
# Predicting car fuel efficiency (mpg) with the AutoMPG dataset
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from keras import layers
dataset = pd.read_csv('../testdata/auto-mpg.csv', na_values='?') # na_values='?' treats '?' entries as NaN
print(dataset.head(2))
del dataset['car name']
print(dataset.corr())
dataset.drop(['acceleration', 'model year', 'origin'], axis='columns', inplace=True)
print(dataset.info())
print(dataset.corr())
# print(dataset.isna().sum()) # horsepower has 6 NaN values
dataset = dataset.dropna()
# print(dataset.isna().sum()) # NaNs successfully removed
# Visualization
# sns.pairplot(dataset[['mpg','displacement', 'horsepower', 'weight', 'acceleration']], diag_kind='kde')
# plt.show()
# train / test split
train_dataset = dataset.sample(frac=0.7, random_state=123)
test_dataset = dataset.drop(train_dataset.index)
print(train_dataset.shape, test_dataset.shape) # (274, 5) (118, 5)
# Standardization
train_stat = train_dataset.describe()
# print(train_stat)
train_stat.pop('mpg') # mpg will be used as the label, so exclude it from the stats
train_stat = train_stat.transpose()
print(train_stat)
train_label = train_dataset.pop('mpg') # extract mpg as the train label
print(train_label[:2])
test_label = test_dataset.pop('mpg') # extract mpg as the test label
print(test_label[:2])
def st_func(x): # standardization function: z = (x - mean) / std
    return (x - train_stat['mean']) / train_stat['std']
# print(st_func(10))
# print(train_dataset[:2])
# print(st_func(train_dataset[:2]))
st_train_data = st_func(train_dataset) # feature
st_test_data = st_func(test_dataset) # feature
print(st_train_data.columns)
print()
# model
from keras.models import Sequential
from keras.layers import Dense
def build_model():
    network = Sequential([
        Dense(units=64, activation='relu', input_shape=[4]),
        Dense(units=64, activation='relu'),
        Dense(units=1, activation='linear')
    ])
    opti = tf.keras.optimizers.RMSprop(0.001)
    network.compile(optimizer=opti, loss='mean_squared_error',
                    metrics=['mean_absolute_error', 'mean_squared_error'])
    return network
model = build_model()
print(model.summary())
# The model can be run before fit(), but no real performance should be expected.
# print(model.predict(st_train_data[:1]))
epochs = 10000
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='auto', patience=5)
history = model.fit(st_train_data, train_label, batch_size=32, epochs=epochs,
                    validation_split=0.2, verbose=1, callbacks=[early_stop])
df = pd.DataFrame(history.history)
print(df)
def plot_history(history):
    hist = pd.DataFrame(history.history)
    hist['epoch'] = history.epoch

    plt.figure(figsize=(8, 12))
    plt.subplot(2, 1, 1)
    plt.xlabel('Epoch')
    plt.ylabel('Mean Abs Error [MPG]')
    plt.plot(hist['epoch'], hist['mean_absolute_error'], label='Train Error')
    plt.plot(hist['epoch'], hist['val_mean_absolute_error'], label='Val Error')
    plt.legend()

    plt.subplot(2, 1, 2)
    plt.xlabel('Epoch')
    plt.ylabel('Mean Square Error [$MPG^2$]')
    plt.plot(hist['epoch'], hist['mean_squared_error'], label='Train Error')
    plt.plot(hist['epoch'], hist['val_mean_squared_error'], label='Val Error')
    plt.legend()
    plt.show()
plot_history(history)
# Model evaluation
loss, mae, mse = model.evaluate(st_test_data, test_label)
print('loss : {:5.3f}'.format(loss))
print('mae : {:5.3f}'.format(mae))
print('mse : {:5.3f}'.format(mse))
from sklearn.metrics import r2_score
print('설명력 : ', r2_score(test_label, model.predict(st_test_data))) # R^2; ≈ 0.72 in the run below
<console>
mpg cylinders displacement ... model year origin car name
0 18.0 8 307.0 ... 70 1 chevrolet chevelle malibu
1 15.0 8 350.0 ... 70 1 buick skylark 320
[2 rows x 9 columns]
mpg cylinders ... model year origin
mpg 1.000000 -0.775396 ... 0.579267 0.563450
cylinders -0.775396 1.000000 ... -0.348746 -0.562543
displacement -0.804203 0.950721 ... -0.370164 -0.609409
horsepower -0.778427 0.842983 ... -0.416361 -0.455171
weight -0.831741 0.896017 ... -0.306564 -0.581024
acceleration 0.420289 -0.505419 ... 0.288137 0.205873
model year 0.579267 -0.348746 ... 1.000000 0.180662
origin 0.563450 -0.562543 ... 0.180662 1.000000
[8 rows x 8 columns]
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 398 entries, 0 to 397
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 398 non-null float64
1 cylinders 398 non-null int64
2 displacement 398 non-null float64
3 horsepower 392 non-null float64
4 weight 398 non-null int64
dtypes: float64(3), int64(2)
memory usage: 15.7 KB
None
mpg cylinders displacement horsepower weight
mpg 1.000000 -0.775396 -0.804203 -0.778427 -0.831741
cylinders -0.775396 1.000000 0.950721 0.842983 0.896017
displacement -0.804203 0.950721 1.000000 0.897257 0.932824
horsepower -0.778427 0.842983 0.897257 1.000000 0.864538
weight -0.831741 0.896017 0.932824 0.864538 1.000000
(274, 5) (118, 5)
count mean std ... 50% 75% max
cylinders 274.0 5.503650 1.720908 ... 4.0 8.00 8.0
displacement 274.0 196.131387 106.618440 ... 151.0 302.00 455.0
horsepower 274.0 104.755474 39.416747 ... 94.0 129.00 230.0
weight 274.0 2981.941606 863.904789 ... 2831.5 3641.75 4997.0
[4 rows x 8 columns]
222 17.0
247 39.4
Name: mpg, dtype: float64
1 15.0
2 18.0
Name: mpg, dtype: float64
Index(['cylinders', 'displacement', 'horsepower', 'weight'], dtype='object')
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 64) 320
dense_1 (Dense) (None, 64) 4160
dense_2 (Dense) (None, 1) 65
=================================================================
Total params: 4,545
Trainable params: 4,545
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/10000
7/7 [==============================] - 0s 21ms/step - loss: 601.4277 - mean_absolute_error: 23.1848 - mean_squared_error: 601.4277 - val_loss: 591.8640 - val_mean_absolute_error: 23.0248 - val_mean_squared_error: 591.8640
Epoch 2/10000
7/7 [==============================] - 0s 3ms/step - loss: 568.0007 - mean_absolute_error: 22.4028 - mean_squared_error: 568.0007 - val_loss: 560.7090 - val_mean_absolute_error: 22.3425 - val_mean_squared_error: 560.7090
(epochs 3-55 omitted: val_loss falls steadily, bottoming out at epoch 56)
Epoch 56/10000
7/7 [==============================] - 0s 3ms/step - loss: 15.0543 - mean_absolute_error: 2.8025 - mean_squared_error: 15.0543 - val_loss: 19.4893 - val_mean_absolute_error: 2.9645 - val_mean_squared_error: 19.4893
(epochs 57-60 omitted: val_loss does not improve again)
Epoch 61/10000
7/7 [==============================] - 0s 3ms/step - loss: 15.0976 - mean_absolute_error: 2.8221 - mean_squared_error: 15.0976 - val_loss: 19.9646 - val_mean_absolute_error: 2.9790 - val_mean_squared_error: 19.9646
(EarlyStopping halts training here: val_loss has not improved for 5 epochs since epoch 56)
loss ... val_mean_squared_error
0 601.427673 ... 591.864014
1 568.000671 ... 560.709045
2 534.218140 ... 524.877808
3 495.085724 ... 483.712219
4 450.442322 ... 437.216309
.. ... ... ...
56 15.048346 ... 20.215349
57 15.387167 ... 19.654648
58 15.340160 ... 19.538551
59 15.064593 ... 19.687771
60 15.097616 ... 19.964642
[61 rows x 6 columns]
4/4 [==============================] - 0s 1ms/step - loss: 14.7585 - mean_absolute_error: 3.0017 - mean_squared_error: 14.7585
loss : 14.758
mae : 3.002
mse : 14.758
4/4 [==============================] - 0s 835us/step
설명력 : 0.7201497337303957
In EarlyStopping(), patience means: if val_loss does not improve within n epochs, training is stopped early.
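To make the patience semantics concrete, here is a framework-free sketch of the counting logic (the function `early_stop_epoch` is my own illustration, not a Keras API; the real callback additionally supports `min_delta` and `restore_best_weights`):

```python
def early_stop_epoch(val_losses, patience):
    """Return the 0-based epoch at which EarlyStopping-style monitoring
    would halt training: the first epoch where val_loss has failed to
    improve for `patience` consecutive epochs. Returns None if training
    would run to completion."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:        # improvement: remember it, reset the counter
            best = loss
            wait = 0
        else:                  # no improvement: count this epoch
            wait += 1
            if wait >= patience:
                return epoch
    return None

# A loss curve that bottoms out at epoch 1: stops 3 non-improving epochs later.
print(early_stop_epoch([5.0, 4.0, 4.2, 4.1, 4.3], patience=3))  # -> 4
```

This mirrors what happened in the run above: the best val_loss came at epoch 56, and training stopped 5 non-improving epochs later, at epoch 61.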
The train/test split was written out by hand instead of using a helper function, and the standardization was likewise computed directly from the formula.
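The same split-then-standardize pattern can be sketched on synthetic data (the column names feat/label are illustrative, not from the post). The key point is that the mean and std are computed from the train set only and then applied to both sets, so no test-set information leaks into the scaling:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(123)
df = pd.DataFrame({'feat': rng.normal(100, 15, 50),
                   'label': rng.normal(25, 5, 50)})

# split: sample 70% of rows for train, the rest form the test set
train = df.sample(frac=0.7, random_state=123)
test = df.drop(train.index)

# statistics from the TRAIN set only
train_stat = train.describe()
train_stat.pop('label')              # the label is not standardized
train_stat = train_stat.transpose()

train_label = train.pop('label')
test_label = test.pop('label')

def st_func(x):                      # z = (x - mean) / std
    return (x - train_stat['mean']) / train_stat['std']

st_train = st_func(train)
st_test = st_func(test)              # test uses the train set's stats
print(len(st_train), len(st_test))   # -> 35 15
```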
The model definition was wrapped in a function, which makes it easy to reuse.
The model can be run before fit(), but no real performance should be expected.
Early stopping can help prevent overfitting.
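The final explained-variance figure is scikit-learn's r2_score. As a sanity check, R² can also be computed by hand from its standard definition, R² = 1 − SS_res/SS_tot (the toy numbers below are purely illustrative, not from this model):

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = ((y_true - y_pred) ** 2).sum()         # residual sum of squares
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Perfect predictions give R^2 = 1; always predicting the mean gives 0.
print(r2([3.0, -0.5, 2.0, 7.0], [2.5, 0.0, 2.0, 8.0]))  # ≈ 0.9486
```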