TigerCow.Door

4_Overfitting_and_Underfitting


Hello, this is 문범우.

In this post we will cover the fourth TensorFlow tutorial, Overfitting and Underfitting.


In [41]:
# TensorFlow and tf.keras
# Import TensorFlow and Keras; tensorflow will be used under the alias tf.
import tensorflow as tf
from tensorflow import keras

# Helper libraries
# numpy for arrays, matplotlib for plotting.
import numpy as np
import matplotlib.pyplot as plt
# Magic command to render matplotlib figures inline in the Jupyter notebook.
%matplotlib inline

print("사용되는 tensorflow의 버전:",tf.__version__)
사용되는 tensorflow의 버전: 1.9.0

A. Preparing the data

The data used this time is the IMDB movie review data that we used in the previous text classification tutorial.

In [43]:
NUM_WORDS = 10000

(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
In [44]:
# Inspect the values of the 0th sample
print(train_data[0][0:5],". . .",train_data[0][-5:])
[1, 14, 22, 16, 43] . . . [16, 5345, 19, 178, 32]
In [45]:
def multi_hot_sequences(sequences, dimension):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    # i.e. one row per review and one column per word index (dimension columns).
    results = np.zeros((len(sequences), dimension))
    # For each review (row i), mark the words it contains.
    for i, word_indices in enumerate(sequences):
        # Set the positions given by this review's word indices to 1.
        results[i, word_indices] = 1.0  # set specific indices of results[i] to 1s
    return results


train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)

This time we are going to study overfitting.

The multi_hot_sequences function above encodes each review as a fixed-length vector of 0s and 1s, which makes the model overfit the training set more quickly.
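
As a quick sanity check, here is what multi_hot_sequences produces on a tiny made-up input (the toy indices below are illustrative, not taken from the IMDB data):

# Hypothetical toy input, just to see the encoding
toy = multi_hot_sequences([[1, 3], [0, 2, 3]], dimension=5)
print(toy)
# [[0. 1. 0. 1. 0.]
#  [1. 0. 1. 1. 0.]]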

In [46]:
# Inspect the 0th sample again after encoding
print(train_data[0][0:5],". . .",train_data[0][-5:])
[0. 1. 1. 0. 1.] . . . [0. 0. 0. 0. 0.]
In [48]:
# Plot the 0th sample
plt.plot(train_data[0])
Out[48]:
[<matplotlib.lines.Line2D at 0xb28f28550>]

B. Building a baseline model

To observe overfitting, we first build a baseline model.

In [49]:
baseline_model = keras.Sequential([
    # `input_shape` is only required here so that `.summary` works. 
    keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

baseline_model.compile(optimizer='adam',
                       loss='binary_crossentropy',
                       metrics=['accuracy', 'binary_crossentropy'])

baseline_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 16)                160016    
_________________________________________________________________
dense_1 (Dense)              (None, 16)                272       
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 17        
=================================================================
Total params: 160,305
Trainable params: 160,305
Non-trainable params: 0
_________________________________________________________________
In [50]:
# Train the baseline model and record its metrics.
baseline_history = baseline_model.fit(train_data,
                                      train_labels,
                                      epochs=20,
                                      batch_size=512,
                                      validation_data=(test_data, test_labels),
                                      verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
 - 9s - loss: 0.4707 - acc: 0.8128 - binary_crossentropy: 0.4707 - val_loss: 0.3284 - val_acc: 0.8775 - val_binary_crossentropy: 0.3284
Epoch 2/20
 - 7s - loss: 0.2428 - acc: 0.9134 - binary_crossentropy: 0.2428 - val_loss: 0.2843 - val_acc: 0.8873 - val_binary_crossentropy: 0.2843
Epoch 3/20
 - 6s - loss: 0.1789 - acc: 0.9366 - binary_crossentropy: 0.1789 - val_loss: 0.2911 - val_acc: 0.8858 - val_binary_crossentropy: 0.2911
Epoch 4/20
 - 6s - loss: 0.1431 - acc: 0.9512 - binary_crossentropy: 0.1431 - val_loss: 0.3177 - val_acc: 0.8778 - val_binary_crossentropy: 0.3177
Epoch 5/20
 - 7s - loss: 0.1188 - acc: 0.9606 - binary_crossentropy: 0.1188 - val_loss: 0.3433 - val_acc: 0.8732 - val_binary_crossentropy: 0.3433
Epoch 6/20
 - 8s - loss: 0.0975 - acc: 0.9696 - binary_crossentropy: 0.0975 - val_loss: 0.3757 - val_acc: 0.8680 - val_binary_crossentropy: 0.3757
Epoch 7/20
 - 7s - loss: 0.0786 - acc: 0.9776 - binary_crossentropy: 0.0786 - val_loss: 0.4244 - val_acc: 0.8616 - val_binary_crossentropy: 0.4244
Epoch 8/20
 - 8s - loss: 0.0623 - acc: 0.9832 - binary_crossentropy: 0.0623 - val_loss: 0.4540 - val_acc: 0.8631 - val_binary_crossentropy: 0.4540
Epoch 9/20
 - 7s - loss: 0.0478 - acc: 0.9895 - binary_crossentropy: 0.0478 - val_loss: 0.4929 - val_acc: 0.8604 - val_binary_crossentropy: 0.4929
Epoch 10/20
 - 8s - loss: 0.0356 - acc: 0.9938 - binary_crossentropy: 0.0356 - val_loss: 0.5390 - val_acc: 0.8580 - val_binary_crossentropy: 0.5390
Epoch 11/20
 - 6s - loss: 0.0261 - acc: 0.9962 - binary_crossentropy: 0.0261 - val_loss: 0.5758 - val_acc: 0.8578 - val_binary_crossentropy: 0.5758
Epoch 12/20
 - 4s - loss: 0.0186 - acc: 0.9983 - binary_crossentropy: 0.0186 - val_loss: 0.6208 - val_acc: 0.8558 - val_binary_crossentropy: 0.6208
Epoch 13/20
 - 6s - loss: 0.0127 - acc: 0.9989 - binary_crossentropy: 0.0127 - val_loss: 0.6513 - val_acc: 0.8558 - val_binary_crossentropy: 0.6513
Epoch 14/20
 - 7s - loss: 0.0090 - acc: 0.9997 - binary_crossentropy: 0.0090 - val_loss: 0.6821 - val_acc: 0.8548 - val_binary_crossentropy: 0.6821
Epoch 15/20
 - 7s - loss: 0.0065 - acc: 1.0000 - binary_crossentropy: 0.0065 - val_loss: 0.7090 - val_acc: 0.8548 - val_binary_crossentropy: 0.7090
Epoch 16/20
 - 6s - loss: 0.0050 - acc: 1.0000 - binary_crossentropy: 0.0050 - val_loss: 0.7358 - val_acc: 0.8552 - val_binary_crossentropy: 0.7358
Epoch 17/20
 - 7s - loss: 0.0040 - acc: 1.0000 - binary_crossentropy: 0.0040 - val_loss: 0.7585 - val_acc: 0.8551 - val_binary_crossentropy: 0.7585
Epoch 18/20
 - 4s - loss: 0.0032 - acc: 1.0000 - binary_crossentropy: 0.0032 - val_loss: 0.7811 - val_acc: 0.8550 - val_binary_crossentropy: 0.7811
Epoch 19/20
 - 4s - loss: 0.0026 - acc: 1.0000 - binary_crossentropy: 0.0026 - val_loss: 0.8007 - val_acc: 0.8552 - val_binary_crossentropy: 0.8007
Epoch 20/20
 - 4s - loss: 0.0022 - acc: 1.0000 - binary_crossentropy: 0.0022 - val_loss: 0.8192 - val_acc: 0.8548 - val_binary_crossentropy: 0.8192

Next, we build a model with fewer hidden units than the baseline.

In [52]:
smaller_model = keras.Sequential([
    keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(4, activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

smaller_model.compile(optimizer='adam',
                loss='binary_crossentropy',
                metrics=['accuracy', 'binary_crossentropy'])

smaller_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_6 (Dense)              (None, 4)                 40004     
_________________________________________________________________
dense_7 (Dense)              (None, 4)                 20        
_________________________________________________________________
dense_8 (Dense)              (None, 1)                 5         
=================================================================
Total params: 40,029
Trainable params: 40,029
Non-trainable params: 0
_________________________________________________________________
In [53]:
smaller_history = smaller_model.fit(train_data,
                                    train_labels,
                                    epochs=20,
                                    batch_size=512,
                                    validation_data=(test_data, test_labels),
                                    verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
 - 5s - loss: 0.6317 - acc: 0.6360 - binary_crossentropy: 0.6317 - val_loss: 0.5714 - val_acc: 0.7279 - val_binary_crossentropy: 0.5714
Epoch 2/20
 - 6s - loss: 0.5187 - acc: 0.8149 - binary_crossentropy: 0.5187 - val_loss: 0.5109 - val_acc: 0.8128 - val_binary_crossentropy: 0.5109
Epoch 3/20
 - 6s - loss: 0.4623 - acc: 0.8725 - binary_crossentropy: 0.4623 - val_loss: 0.4787 - val_acc: 0.8519 - val_binary_crossentropy: 0.4787
Epoch 4/20
 - 7s - loss: 0.4248 - acc: 0.9022 - binary_crossentropy: 0.4248 - val_loss: 0.4587 - val_acc: 0.8713 - val_binary_crossentropy: 0.4587
Epoch 5/20
 - 5s - loss: 0.3962 - acc: 0.9184 - binary_crossentropy: 0.3962 - val_loss: 0.4449 - val_acc: 0.8781 - val_binary_crossentropy: 0.4449
Epoch 6/20
 - 4s - loss: 0.3721 - acc: 0.9321 - binary_crossentropy: 0.3721 - val_loss: 0.4394 - val_acc: 0.8686 - val_binary_crossentropy: 0.4394
Epoch 7/20
 - 5s - loss: 0.3505 - acc: 0.9414 - binary_crossentropy: 0.3505 - val_loss: 0.4345 - val_acc: 0.8696 - val_binary_crossentropy: 0.4345
Epoch 8/20
 - 7s - loss: 0.3317 - acc: 0.9494 - binary_crossentropy: 0.3317 - val_loss: 0.4253 - val_acc: 0.8758 - val_binary_crossentropy: 0.4253
Epoch 9/20
 - 7s - loss: 0.3147 - acc: 0.9567 - binary_crossentropy: 0.3147 - val_loss: 0.4255 - val_acc: 0.8738 - val_binary_crossentropy: 0.4255
Epoch 10/20
 - 7s - loss: 0.2993 - acc: 0.9617 - binary_crossentropy: 0.2993 - val_loss: 0.4202 - val_acc: 0.8758 - val_binary_crossentropy: 0.4202
Epoch 11/20
 - 7s - loss: 0.2854 - acc: 0.9659 - binary_crossentropy: 0.2854 - val_loss: 0.4210 - val_acc: 0.8738 - val_binary_crossentropy: 0.4210
Epoch 12/20
 - 6s - loss: 0.2714 - acc: 0.9697 - binary_crossentropy: 0.2714 - val_loss: 0.4225 - val_acc: 0.8729 - val_binary_crossentropy: 0.4225
Epoch 13/20
 - 4s - loss: 0.2589 - acc: 0.9732 - binary_crossentropy: 0.2589 - val_loss: 0.4269 - val_acc: 0.8699 - val_binary_crossentropy: 0.4269
Epoch 14/20
 - 4s - loss: 0.2474 - acc: 0.9754 - binary_crossentropy: 0.2474 - val_loss: 0.4230 - val_acc: 0.8698 - val_binary_crossentropy: 0.4230
Epoch 15/20
 - 4s - loss: 0.2368 - acc: 0.9781 - binary_crossentropy: 0.2368 - val_loss: 0.4355 - val_acc: 0.8676 - val_binary_crossentropy: 0.4355
Epoch 16/20
 - 4s - loss: 0.2266 - acc: 0.9802 - binary_crossentropy: 0.2266 - val_loss: 0.4397 - val_acc: 0.8671 - val_binary_crossentropy: 0.4397
Epoch 17/20
 - 3s - loss: 0.2175 - acc: 0.9816 - binary_crossentropy: 0.2175 - val_loss: 0.4456 - val_acc: 0.8663 - val_binary_crossentropy: 0.4456
Epoch 18/20
 - 4s - loss: 0.2084 - acc: 0.9832 - binary_crossentropy: 0.2084 - val_loss: 0.4333 - val_acc: 0.8686 - val_binary_crossentropy: 0.4333
Epoch 19/20
 - 3s - loss: 0.2002 - acc: 0.9843 - binary_crossentropy: 0.2002 - val_loss: 0.4555 - val_acc: 0.8657 - val_binary_crossentropy: 0.4555
Epoch 20/20
 - 3s - loss: 0.1927 - acc: 0.9848 - binary_crossentropy: 0.1927 - val_loss: 0.4654 - val_acc: 0.8643 - val_binary_crossentropy: 0.4654

Now we build a model with more hidden units than the baseline.

In [54]:
bigger_model = keras.models.Sequential([
    keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(512, activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

bigger_model.compile(optimizer='adam',
                     loss='binary_crossentropy',
                     metrics=['accuracy','binary_crossentropy'])

bigger_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_9 (Dense)              (None, 512)               5120512   
_________________________________________________________________
dense_10 (Dense)             (None, 512)               262656    
_________________________________________________________________
dense_11 (Dense)             (None, 1)                 513       
=================================================================
Total params: 5,383,681
Trainable params: 5,383,681
Non-trainable params: 0
_________________________________________________________________
In [55]:
bigger_history = bigger_model.fit(train_data, train_labels,
                                  epochs=20,
                                  batch_size=512,
                                  validation_data=(test_data, test_labels),
                                  verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
 - 18s - loss: 0.3478 - acc: 0.8466 - binary_crossentropy: 0.3478 - val_loss: 0.2992 - val_acc: 0.8776 - val_binary_crossentropy: 0.2992
Epoch 2/20
 - 18s - loss: 0.1441 - acc: 0.9471 - binary_crossentropy: 0.1441 - val_loss: 0.3556 - val_acc: 0.8651 - val_binary_crossentropy: 0.3556
Epoch 3/20
 - 19s - loss: 0.0532 - acc: 0.9839 - binary_crossentropy: 0.0532 - val_loss: 0.4296 - val_acc: 0.8650 - val_binary_crossentropy: 0.4296
Epoch 4/20
 - 18s - loss: 0.0100 - acc: 0.9985 - binary_crossentropy: 0.0100 - val_loss: 0.5852 - val_acc: 0.8694 - val_binary_crossentropy: 0.5852
Epoch 5/20
 - 19s - loss: 0.0011 - acc: 1.0000 - binary_crossentropy: 0.0011 - val_loss: 0.6643 - val_acc: 0.8680 - val_binary_crossentropy: 0.6643
Epoch 6/20
 - 20s - loss: 2.8284e-04 - acc: 1.0000 - binary_crossentropy: 2.8284e-04 - val_loss: 0.7065 - val_acc: 0.8680 - val_binary_crossentropy: 0.7065
Epoch 7/20
 - 20s - loss: 1.6760e-04 - acc: 1.0000 - binary_crossentropy: 1.6760e-04 - val_loss: 0.7332 - val_acc: 0.8684 - val_binary_crossentropy: 0.7332
Epoch 8/20
 - 20s - loss: 1.1922e-04 - acc: 1.0000 - binary_crossentropy: 1.1922e-04 - val_loss: 0.7526 - val_acc: 0.8683 - val_binary_crossentropy: 0.7526
Epoch 9/20
 - 20s - loss: 9.0721e-05 - acc: 1.0000 - binary_crossentropy: 9.0721e-05 - val_loss: 0.7692 - val_acc: 0.8683 - val_binary_crossentropy: 0.7692
Epoch 10/20
 - 19s - loss: 7.1760e-05 - acc: 1.0000 - binary_crossentropy: 7.1760e-05 - val_loss: 0.7820 - val_acc: 0.8682 - val_binary_crossentropy: 0.7820
Epoch 11/20
 - 23s - loss: 5.8391e-05 - acc: 1.0000 - binary_crossentropy: 5.8391e-05 - val_loss: 0.7941 - val_acc: 0.8682 - val_binary_crossentropy: 0.7941
Epoch 12/20
 - 22s - loss: 4.8347e-05 - acc: 1.0000 - binary_crossentropy: 4.8347e-05 - val_loss: 0.8046 - val_acc: 0.8684 - val_binary_crossentropy: 0.8046
Epoch 13/20
 - 21s - loss: 4.0705e-05 - acc: 1.0000 - binary_crossentropy: 4.0705e-05 - val_loss: 0.8139 - val_acc: 0.8682 - val_binary_crossentropy: 0.8139
Epoch 14/20
 - 19s - loss: 3.4762e-05 - acc: 1.0000 - binary_crossentropy: 3.4762e-05 - val_loss: 0.8229 - val_acc: 0.8681 - val_binary_crossentropy: 0.8229
Epoch 15/20
 - 18s - loss: 2.9985e-05 - acc: 1.0000 - binary_crossentropy: 2.9985e-05 - val_loss: 0.8312 - val_acc: 0.8682 - val_binary_crossentropy: 0.8312
Epoch 16/20
 - 19s - loss: 2.6114e-05 - acc: 1.0000 - binary_crossentropy: 2.6114e-05 - val_loss: 0.8379 - val_acc: 0.8681 - val_binary_crossentropy: 0.8379
Epoch 17/20
 - 19s - loss: 2.2936e-05 - acc: 1.0000 - binary_crossentropy: 2.2936e-05 - val_loss: 0.8461 - val_acc: 0.8687 - val_binary_crossentropy: 0.8461
Epoch 18/20
 - 19s - loss: 2.0306e-05 - acc: 1.0000 - binary_crossentropy: 2.0306e-05 - val_loss: 0.8517 - val_acc: 0.8683 - val_binary_crossentropy: 0.8517
Epoch 19/20
 - 19s - loss: 1.8037e-05 - acc: 1.0000 - binary_crossentropy: 1.8037e-05 - val_loss: 0.8579 - val_acc: 0.8683 - val_binary_crossentropy: 0.8579
Epoch 20/20
 - 19s - loss: 1.6142e-05 - acc: 1.0000 - binary_crossentropy: 1.6142e-05 - val_loss: 0.8643 - val_acc: 0.8686 - val_binary_crossentropy: 0.8643

Up to this point we have built three models, baseline, smaller, and bigger, and trained and validated each of them on the same dataset.

Let's compare their losses on a graph.

In [56]:
def plot_history(histories, key='binary_crossentropy'):
  plt.figure(figsize=(16,10))
    
  for name, history in histories:
    val = plt.plot(history.epoch, history.history['val_'+key],
                   '--', label=name.title()+' Val')
    plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
             label=name.title()+' Train')

  plt.xlabel('Epochs')
  plt.ylabel(key.replace('_',' ').title())
  plt.legend()

  plt.xlim([0,max(history.epoch)])


plot_history([('baseline', baseline_history),
              ('smaller', smaller_history),
              ('bigger', bigger_history)])

Looking at the graph above, for every model the loss on the validation data is larger than the loss on the training data.

In particular, for the bigger and baseline models the validation loss increases sharply as the number of epochs grows.

This phenomenon, where a model fits the training dataset too closely and stops generalizing to new data, is called overfitting.

How can we deal with overfitting?

Overfitting strategy - Regularization

According to the tutorial, this technique pushes the model to learn smaller weight values.

That process is called 'weight regularization', and it comes in two flavors: L1 regularization and L2 regularization.

Each of them is described in the tutorial as follows.

L1 regularization, where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).

L2 regularization, where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.
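
To make the quoted definitions concrete, here is a rough sketch of the penalty each regularizer adds to the loss (the function names and the 0.001 coefficient are only for illustration):

import numpy as np

def l1_penalty(weights, lam=0.001):
    # proportional to the L1 norm: lam * sum(|w|)
    return lam * np.sum(np.abs(weights))

def l2_penalty(weights, lam=0.001):
    # proportional to the sum of squared weights: lam * sum(w^2)
    return lam * np.sum(np.square(weights))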

Let's build a model in Keras that uses this kind of regularization.

In [57]:
l2_model = keras.models.Sequential([
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

l2_model.compile(optimizer='adam',
                 loss='binary_crossentropy',
                 metrics=['accuracy', 'binary_crossentropy'])

l2_model_history = l2_model.fit(train_data, train_labels,
                                epochs=20,
                                batch_size=512,
                                validation_data=(test_data, test_labels),
                                verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
 - 10s - loss: 0.5400 - acc: 0.8033 - binary_crossentropy: 0.5021 - val_loss: 0.3943 - val_acc: 0.8726 - val_binary_crossentropy: 0.3546
Epoch 2/20
 - 7s - loss: 0.3134 - acc: 0.9049 - binary_crossentropy: 0.2687 - val_loss: 0.3354 - val_acc: 0.8869 - val_binary_crossentropy: 0.2869
Epoch 3/20
 - 5s - loss: 0.2578 - acc: 0.9270 - binary_crossentropy: 0.2066 - val_loss: 0.3366 - val_acc: 0.8860 - val_binary_crossentropy: 0.2833
Epoch 4/20
 - 4s - loss: 0.2316 - acc: 0.9386 - binary_crossentropy: 0.1765 - val_loss: 0.3484 - val_acc: 0.8836 - val_binary_crossentropy: 0.2921
Epoch 5/20
 - 6s - loss: 0.2178 - acc: 0.9463 - binary_crossentropy: 0.1598 - val_loss: 0.3613 - val_acc: 0.8794 - val_binary_crossentropy: 0.3022
Epoch 6/20
 - 7s - loss: 0.2043 - acc: 0.9511 - binary_crossentropy: 0.1445 - val_loss: 0.3762 - val_acc: 0.8767 - val_binary_crossentropy: 0.3158
Epoch 7/20
 - 5s - loss: 0.1969 - acc: 0.9543 - binary_crossentropy: 0.1354 - val_loss: 0.3911 - val_acc: 0.8725 - val_binary_crossentropy: 0.3287
Epoch 8/20
 - 4s - loss: 0.1882 - acc: 0.9582 - binary_crossentropy: 0.1251 - val_loss: 0.4013 - val_acc: 0.8716 - val_binary_crossentropy: 0.3379
Epoch 9/20
 - 5s - loss: 0.1819 - acc: 0.9599 - binary_crossentropy: 0.1178 - val_loss: 0.4190 - val_acc: 0.8702 - val_binary_crossentropy: 0.3543
Epoch 10/20
 - 7s - loss: 0.1795 - acc: 0.9620 - binary_crossentropy: 0.1141 - val_loss: 0.4350 - val_acc: 0.8670 - val_binary_crossentropy: 0.3691
Epoch 11/20
 - 7s - loss: 0.1736 - acc: 0.9633 - binary_crossentropy: 0.1071 - val_loss: 0.4417 - val_acc: 0.8660 - val_binary_crossentropy: 0.3746
Epoch 12/20
 - 6s - loss: 0.1699 - acc: 0.9648 - binary_crossentropy: 0.1028 - val_loss: 0.4632 - val_acc: 0.8618 - val_binary_crossentropy: 0.3955
Epoch 13/20
 - 4s - loss: 0.1688 - acc: 0.9660 - binary_crossentropy: 0.1002 - val_loss: 0.4665 - val_acc: 0.8632 - val_binary_crossentropy: 0.3975
Epoch 14/20
 - 4s - loss: 0.1588 - acc: 0.9710 - binary_crossentropy: 0.0899 - val_loss: 0.4748 - val_acc: 0.8610 - val_binary_crossentropy: 0.4062
Epoch 15/20
 - 3s - loss: 0.1523 - acc: 0.9744 - binary_crossentropy: 0.0838 - val_loss: 0.4883 - val_acc: 0.8620 - val_binary_crossentropy: 0.4196
Epoch 16/20
 - 4s - loss: 0.1498 - acc: 0.9744 - binary_crossentropy: 0.0809 - val_loss: 0.5009 - val_acc: 0.8597 - val_binary_crossentropy: 0.4318
Epoch 17/20
 - 5s - loss: 0.1474 - acc: 0.9760 - binary_crossentropy: 0.0782 - val_loss: 0.5079 - val_acc: 0.8590 - val_binary_crossentropy: 0.4383
Epoch 18/20
 - 4s - loss: 0.1455 - acc: 0.9756 - binary_crossentropy: 0.0756 - val_loss: 0.5240 - val_acc: 0.8574 - val_binary_crossentropy: 0.4537
Epoch 19/20
 - 4s - loss: 0.1423 - acc: 0.9772 - binary_crossentropy: 0.0719 - val_loss: 0.5285 - val_acc: 0.8601 - val_binary_crossentropy: 0.4580
Epoch 20/20
 - 3s - loss: 0.1401 - acc: 0.9790 - binary_crossentropy: 0.0693 - val_loss: 0.5415 - val_acc: 0.8562 - val_binary_crossentropy: 0.4702

We have built l2_model, which uses L2 regularization.

Regularization is applied to the first two layers by passing a regularizer from the Keras library to the kernel_regularizer argument; the value given in parentheses is the strength of the regularization.
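
For reference, Keras exposes the other variants through the same kernel_regularizer argument; for example (the coefficients here are arbitrary):

keras.regularizers.l1(0.001)                    # L1 penalty only
keras.regularizers.l1_l2(l1=0.001, l2=0.001)    # combined L1 + L2 penalty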

Let's plot l2_model and compare it with the baseline model.

In [58]:
plot_history([('baseline', baseline_history),
              ('l2', l2_model_history)])

Overfitting strategy - Dropout

The next overfitting strategy we will look at is dropout.

Simply put, instead of having every node participate in training, randomly chosen nodes are temporarily excluded from training.

Let's again build a model with dropout applied.

In [59]:
dpt_model = keras.models.Sequential([
    keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(16, activation=tf.nn.relu),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

dpt_model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy','binary_crossentropy'])

dpt_model_history = dpt_model.fit(train_data, train_labels,
                                  epochs=20,
                                  batch_size=512,
                                  validation_data=(test_data, test_labels),
                                  verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
 - 9s - loss: 0.6336 - acc: 0.6349 - binary_crossentropy: 0.6336 - val_loss: 0.5118 - val_acc: 0.8349 - val_binary_crossentropy: 0.5118
Epoch 2/20
 - 7s - loss: 0.4848 - acc: 0.7913 - binary_crossentropy: 0.4848 - val_loss: 0.3582 - val_acc: 0.8785 - val_binary_crossentropy: 0.3582
Epoch 3/20
 - 5s - loss: 0.3747 - acc: 0.8632 - binary_crossentropy: 0.3747 - val_loss: 0.3021 - val_acc: 0.8882 - val_binary_crossentropy: 0.3021
Epoch 4/20
 - 4s - loss: 0.2978 - acc: 0.8969 - binary_crossentropy: 0.2978 - val_loss: 0.2791 - val_acc: 0.8865 - val_binary_crossentropy: 0.2791
Epoch 5/20
 - 5s - loss: 0.2509 - acc: 0.9136 - binary_crossentropy: 0.2509 - val_loss: 0.2811 - val_acc: 0.8858 - val_binary_crossentropy: 0.2811
Epoch 6/20
 - 7s - loss: 0.2168 - acc: 0.9277 - binary_crossentropy: 0.2168 - val_loss: 0.2903 - val_acc: 0.8854 - val_binary_crossentropy: 0.2903
Epoch 7/20
 - 8s - loss: 0.1900 - acc: 0.9368 - binary_crossentropy: 0.1900 - val_loss: 0.3101 - val_acc: 0.8832 - val_binary_crossentropy: 0.3101
Epoch 8/20
 - 8s - loss: 0.1656 - acc: 0.9456 - binary_crossentropy: 0.1656 - val_loss: 0.3192 - val_acc: 0.8840 - val_binary_crossentropy: 0.3192
Epoch 9/20
 - 8s - loss: 0.1520 - acc: 0.9488 - binary_crossentropy: 0.1520 - val_loss: 0.3468 - val_acc: 0.8814 - val_binary_crossentropy: 0.3468
Epoch 10/20
 - 7s - loss: 0.1376 - acc: 0.9524 - binary_crossentropy: 0.1376 - val_loss: 0.3632 - val_acc: 0.8808 - val_binary_crossentropy: 0.3632
Epoch 11/20
 - 4s - loss: 0.1230 - acc: 0.9580 - binary_crossentropy: 0.1230 - val_loss: 0.3925 - val_acc: 0.8796 - val_binary_crossentropy: 0.3925
Epoch 12/20
 - 5s - loss: 0.1120 - acc: 0.9611 - binary_crossentropy: 0.1120 - val_loss: 0.4139 - val_acc: 0.8791 - val_binary_crossentropy: 0.4139
Epoch 13/20
 - 6s - loss: 0.1025 - acc: 0.9632 - binary_crossentropy: 0.1025 - val_loss: 0.4263 - val_acc: 0.8769 - val_binary_crossentropy: 0.4263
Epoch 14/20
 - 4s - loss: 0.0960 - acc: 0.9658 - binary_crossentropy: 0.0960 - val_loss: 0.4587 - val_acc: 0.8750 - val_binary_crossentropy: 0.4587
Epoch 15/20
 - 4s - loss: 0.0876 - acc: 0.9680 - binary_crossentropy: 0.0876 - val_loss: 0.4755 - val_acc: 0.8755 - val_binary_crossentropy: 0.4755
Epoch 16/20
 - 5s - loss: 0.0842 - acc: 0.9687 - binary_crossentropy: 0.0842 - val_loss: 0.4955 - val_acc: 0.8747 - val_binary_crossentropy: 0.4955
Epoch 17/20
 - 4s - loss: 0.0808 - acc: 0.9702 - binary_crossentropy: 0.0808 - val_loss: 0.5094 - val_acc: 0.8769 - val_binary_crossentropy: 0.5094
Epoch 18/20
 - 5s - loss: 0.0787 - acc: 0.9700 - binary_crossentropy: 0.0787 - val_loss: 0.5444 - val_acc: 0.8757 - val_binary_crossentropy: 0.5444
Epoch 19/20
 - 5s - loss: 0.0744 - acc: 0.9712 - binary_crossentropy: 0.0744 - val_loss: 0.5404 - val_acc: 0.8730 - val_binary_crossentropy: 0.5404
Epoch 20/20
 - 7s - loss: 0.0739 - acc: 0.9715 - binary_crossentropy: 0.0739 - val_loss: 0.5570 - val_acc: 0.8724 - val_binary_crossentropy: 0.5570

We have built dpt_model, which applies dropout.

A dropout rate of 0.5 is applied after each of the two hidden layers.
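
As a rough illustration of what a 0.5 dropout rate does during training (a numpy sketch of "inverted dropout", not the actual Keras internals):

import numpy as np

rate = 0.5
x = np.ones((1, 8))                                   # pretend these are layer activations
mask = np.random.binomial(1, 1 - rate, size=x.shape)  # keep each unit with probability 1 - rate
x_train = x * mask / (1 - rate)                       # survivors are scaled up to keep the expected sum
# At evaluation time dropout is disabled and the activations pass through unchanged.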

Let's also compare dpt_model with the baseline model.

In [60]:
plot_history([('baseline', baseline_history),
              ('dropout', dpt_model_history)])

So far we have looked at two remedies for overfitting: weight regularization and dropout.

More generally, overfitting can be mitigated with the following approaches:

  • Get more training data.
  • Reduce the capacity of the network.
  • Add weight regularization.
  • Add dropout.

This is where the TensorFlow overfitting and underfitting tutorial ends.

As an extra experiment, I built and tested a model that applies L2 regularization and dropout together.

In [61]:
l2_dpt_model = keras.models.Sequential([
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation=tf.nn.relu),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

l2_dpt_model.compile(optimizer='adam',
                 loss='binary_crossentropy',
                 metrics=['accuracy', 'binary_crossentropy'])

l2_dpt_model_history = l2_dpt_model.fit(train_data, train_labels,
                                epochs=20,
                                batch_size=512,
                                validation_data=(test_data, test_labels),
                                verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
 - 8s - loss: 0.6572 - acc: 0.6607 - binary_crossentropy: 0.6204 - val_loss: 0.5217 - val_acc: 0.8572 - val_binary_crossentropy: 0.4884
Epoch 2/20
 - 7s - loss: 0.4884 - acc: 0.8236 - binary_crossentropy: 0.4540 - val_loss: 0.3850 - val_acc: 0.8816 - val_binary_crossentropy: 0.3488
Epoch 3/20
 - 8s - loss: 0.3984 - acc: 0.8780 - binary_crossentropy: 0.3599 - val_loss: 0.3424 - val_acc: 0.8878 - val_binary_crossentropy: 0.3015
Epoch 4/20
 - 7s - loss: 0.3490 - acc: 0.8994 - binary_crossentropy: 0.3059 - val_loss: 0.3301 - val_acc: 0.8874 - val_binary_crossentropy: 0.2848
Epoch 5/20
 - 7s - loss: 0.3162 - acc: 0.9143 - binary_crossentropy: 0.2684 - val_loss: 0.3309 - val_acc: 0.8865 - val_binary_crossentropy: 0.2806
Epoch 6/20
 - 7s - loss: 0.2951 - acc: 0.9214 - binary_crossentropy: 0.2429 - val_loss: 0.3382 - val_acc: 0.8872 - val_binary_crossentropy: 0.2840
Epoch 7/20
 - 7s - loss: 0.2763 - acc: 0.9281 - binary_crossentropy: 0.2201 - val_loss: 0.3501 - val_acc: 0.8837 - val_binary_crossentropy: 0.2922
Epoch 8/20
 - 7s - loss: 0.2663 - acc: 0.9337 - binary_crossentropy: 0.2065 - val_loss: 0.3682 - val_acc: 0.8826 - val_binary_crossentropy: 0.3066
Epoch 9/20
 - 7s - loss: 0.2606 - acc: 0.9355 - binary_crossentropy: 0.1975 - val_loss: 0.3688 - val_acc: 0.8818 - val_binary_crossentropy: 0.3043
Epoch 10/20
 - 7s - loss: 0.2468 - acc: 0.9412 - binary_crossentropy: 0.1811 - val_loss: 0.3903 - val_acc: 0.8787 - val_binary_crossentropy: 0.3231
Epoch 11/20
 - 8s - loss: 0.2433 - acc: 0.9438 - binary_crossentropy: 0.1746 - val_loss: 0.4041 - val_acc: 0.8788 - val_binary_crossentropy: 0.3340
Epoch 12/20
 - 8s - loss: 0.2386 - acc: 0.9456 - binary_crossentropy: 0.1674 - val_loss: 0.4017 - val_acc: 0.8767 - val_binary_crossentropy: 0.3291
Epoch 13/20
 - 8s - loss: 0.2349 - acc: 0.9481 - binary_crossentropy: 0.1615 - val_loss: 0.4302 - val_acc: 0.8774 - val_binary_crossentropy: 0.3557
Epoch 14/20
 - 7s - loss: 0.2293 - acc: 0.9500 - binary_crossentropy: 0.1538 - val_loss: 0.4414 - val_acc: 0.8772 - val_binary_crossentropy: 0.3648
Epoch 15/20
 - 8s - loss: 0.2261 - acc: 0.9516 - binary_crossentropy: 0.1487 - val_loss: 0.4367 - val_acc: 0.8774 - val_binary_crossentropy: 0.3582
Epoch 16/20
 - 7s - loss: 0.2263 - acc: 0.9516 - binary_crossentropy: 0.1471 - val_loss: 0.4329 - val_acc: 0.8755 - val_binary_crossentropy: 0.3529
Epoch 17/20
 - 7s - loss: 0.2254 - acc: 0.9534 - binary_crossentropy: 0.1448 - val_loss: 0.4579 - val_acc: 0.8750 - val_binary_crossentropy: 0.3768
Epoch 18/20
 - 8s - loss: 0.2202 - acc: 0.9548 - binary_crossentropy: 0.1386 - val_loss: 0.4616 - val_acc: 0.8748 - val_binary_crossentropy: 0.3797
Epoch 19/20
 - 7s - loss: 0.2215 - acc: 0.9546 - binary_crossentropy: 0.1393 - val_loss: 0.4714 - val_acc: 0.8759 - val_binary_crossentropy: 0.3889
Epoch 20/20
 - 8s - loss: 0.2199 - acc: 0.9564 - binary_crossentropy: 0.1370 - val_loss: 0.4605 - val_acc: 0.8749 - val_binary_crossentropy: 0.3772
In [62]:
plot_history([('baseline', baseline_history),
              ('L2 with dropout', l2_dpt_model_history)])

As shown above, this combination gives a better result.




Hello, this is 문범우.

In this post we will look at dropout and model ensembles.



1. Dropout


The reason we use dropout is overfitting, as illustrated below.



As we saw before, it is the phenomenon where the model reaches 100% accuracy on the training data but fails to achieve a comparably high accuracy on the test data.



As shown above, the error on the training data (blue curve) keeps decreasing, but when we check against the test data (red curve), the error starts to increase again from some point on.


The deeper we make the network, the more likely this kind of overfitting becomes.

That is because a deeper network has more parameters.


So what can we do about it?


The first option is to train with more data.

Another option is to reduce the number of features.

And there is regularization, which we looked at briefly before.



As we saw before, there is also L2 regularization, which regularizes the model by adding a penalty term to the cost, as in the equation above.
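
Written out, the L2-regularized cost has the standard form cost = loss + λ · Σ wᵢ², where λ controls how strongly large weights are penalized.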


And for neural networks there is yet another method, called dropout.

Simply put, dropout takes a model like the one on the left of the figure above and cuts some of its connections; that is, it kills some of the nodes and trains only through the remaining ones.

The nodes to drop, i.e., the nodes allowed to rest, are chosen at random.




Put another way, if you think of each node as an expert, we randomly let a few of the experts rest and make only the others work.

Then, at the end, we bring in all of the experts to make the final prediction.


That is the idea behind dropout.


It is not hard to implement in TensorFlow either.

We pass the layer we originally built through the dropout function, together with the percentage of nodes that should stay active.

In that code, training uses 0.7, meaning a randomly chosen 70% of the nodes participate in training.

One thing not to get wrong: during evaluation the dropout_rate must be set to 1 so that 100% of the nodes participate.
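
A minimal sketch of that wiring in TF 1.x style (the layer sizes and variable names here are assumptions, not taken from the lecture code):

import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 784])
W1 = tf.Variable(tf.random_normal([784, 256]))
b1 = tf.Variable(tf.random_normal([256]))

# keep_prob plays the role of dropout_rate above: the fraction of nodes kept active
keep_prob = tf.placeholder(tf.float32)

L1 = tf.nn.relu(tf.matmul(X, W1) + b1)
L1 = tf.nn.dropout(L1, keep_prob=keep_prob)

# training step:   feed_dict includes keep_prob: 0.7  -> 70% of nodes participate
# evaluation step: feed_dict includes keep_prob: 1.0  -> all nodes participate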



2. Ensemble



There is one more method we can use when we have enough hardware to train on.

It is called an ensemble: as shown above, we build several independent models. The training sets may be separate, or every model may use the same training set; either is fine.

Because each model starts from different initial values, their results will each differ slightly.

Afterwards we combine all of the independent models and have them make a single prediction together.

In other words, rather than asking one expert a question, it is like gathering a group of independent experts and asking all of them.
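
A minimal sketch of that idea in code (the averaging scheme and the predict interface are assumptions for illustration):

import numpy as np

def ensemble_predict(models, x):
    # Each independently trained model votes; here we simply average their predicted probabilities.
    return np.mean([m.predict(x) for m in models], axis=0)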


In practice, using an ensemble is said to improve prediction accuracy by as much as 2% to 5%.

