TensorFlow
TensorFlow Basics 22 - Classifying animal types into 7 classes with the zoo animal dataset (multiclass classification)
코딩탕탕
2022. 12. 6. 11:14
# Classify animal types using the zoo animal dataset
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
from keras.utils import to_categorical
xy = np.loadtxt('https://raw.githubusercontent.com/pykwon/python/master/testdata_utf8/zoo.csv', delimiter=',')
print(xy[0], xy.shape)
x_data = xy[:, 0:-1]
y_data = xy[:, -1]
print(x_data[0])
print(y_data[0], ' ', set(y_data)) # {0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0}
# train / test split omitted
# For categorical_crossentropy, labels must be one-hot encoded.
# y_data = to_categorical(y_data)
# print(y_data[0])
model = Sequential()
model.add(Dense(32, input_dim=16, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(7, activation='softmax'))
print(model.summary())
# model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# With loss='sparse_categorical_crossentropy', the one-hot encoding is handled internally, so integer labels can be passed as-is.
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(x_data, y_data, epochs=100, batch_size=10, validation_split=0.3, verbose=0)
print('evaluate :', model.evaluate(x_data, y_data, batch_size=10, verbose=0))
# Visualize loss and accuracy
history_dict = history.history
loss = history_dict['loss']
val_loss = history_dict['val_loss']
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
import matplotlib.pyplot as plt
plt.plot(loss, 'b-', label='loss')
plt.plot(val_loss, 'r--', label='val_loss')
plt.xlabel('epochs')
plt.legend()
plt.show()
plt.plot(acc, 'b-', label='acc')
plt.plot(val_acc, 'r--', label='val_acc')
plt.xlabel('epochs')
plt.legend()
plt.show()
print()
# Prediction
pred_data = x_data[:1]
print('Predicted :', np.argmax(model.predict(pred_data))) # predict() returns softmax probabilities; np.argmax picks the index of the largest one
print()
# Multiple predictions
pred_datas = x_data[:5]
preds = [np.argmax(i) for i in model.predict(pred_datas)] # class index with the highest probability for each sample
print('Predicted :', preds)
print('Actual :', y_data[:5])
<console>
[1. 0. 0. 1. 0. 0. 1. 1. 1. 1. 0. 0. 4. 0. 0. 1. 0.] (101, 17)
[1. 0. 0. 1. 0. 0. 1. 1. 1. 1. 0. 0. 4. 0. 0. 1.]
0.0 {0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0}
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 32) 544
dense_1 (Dense) (None, 32) 1056
dense_2 (Dense) (None, 7) 231
=================================================================
Total params: 1,831
Trainable params: 1,831
Non-trainable params: 0
_________________________________________________________________
None
evaluate : [0.20276065170764923, 0.9405940771102905]
1/1 [==============================] - 0s 40ms/step
Predicted : 0
1/1 [==============================] - 0s 10ms/step
Predicted : [0, 0, 3, 0, 0]
Actual : [0. 0. 3. 0. 0.]
For multiclass classification, the labels must be one-hot encoded.
If you skip encoding them yourself and set loss='sparse_categorical_crossentropy', the one-hot conversion is handled internally.
model.predict() outputs softmax probabilities; applying np.argmax to them yields the predicted class index, not a probability.
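The two points above can be sketched with plain NumPy. Here np.eye is used to mimic what keras.utils.to_categorical does, and the label values and probability rows are made-up illustrations, not output from the trained model:

```python
import numpy as np

# A few integer type labels like the last column of zoo.csv (values 0-6).
y = np.array([0, 2, 6, 1])

# One-hot encoding: equivalent to keras.utils.to_categorical(y, num_classes=7).
onehot = np.eye(7)[y]
print(onehot.shape)  # (4, 7)
print(onehot[1])     # label 2 -> [0. 0. 1. 0. 0. 0. 0.]

# sparse_categorical_crossentropy accepts the integer labels in y directly,
# so the conversion above can be skipped when that loss is used.

# model.predict() returns one softmax probability row per sample;
# np.argmax picks the index of the largest probability = predicted class.
probs = np.array([[0.05, 0.70, 0.05, 0.05, 0.05, 0.05, 0.05],
                  [0.80, 0.05, 0.03, 0.03, 0.03, 0.03, 0.03]])
preds = np.argmax(probs, axis=1)
print(preds)  # [1 0]
```

This is why the script passes y_data to fit() without calling to_categorical: the sparse loss compares each integer label against the corresponding softmax row directly.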