  • TensorFlow Basics 8 - Computing cost and w (slope) (without library functions - understanding the principle)
    TensorFlow 2022. 11. 29. 16:25

     

    These values are computed automatically by the library's optimizer functions (SGD, RMSprop, Adam), but to understand the underlying principle I wrote the calculation in plain Python.

    # The higher the model's accuracy, the lower the value of the cost function.
    import math
    real = [10, 9, 3, 2, 11] # actual y values
    pred = [11, 5, 2, 4, 3]  # predicted y values (assumed to come from a model)
    
    cost = 0
    
    for i in range(5):
        cost += math.pow(pred[i] - real[i], 2) # square the error
        print(cost)
    
    print('cost :', cost / len(pred))
    
    print()
    real = [10, 9, 3, 2, 11] # actual y values
    pred = [11, 8, 4, 3, 11] # predicted y values (assumed to come from a model)
    
    cost = 0
    
    for i in range(5):
        cost += math.pow(pred[i] - real[i], 2) # pow(x, y) returns x raised to the power y
        print(cost)
    
    print('cost :', cost / len(pred))
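    Both loops above compute the mean squared error (MSE). As a side note (not in the original post), the same calculation can be written in one vectorized line with NumPy:

```python
import numpy as np

real = np.array([10, 9, 3, 2, 11])  # actual y values
pred = np.array([11, 8, 4, 3, 11])  # predicted y values

# MSE: mean of the squared differences, same result as the second loop above
cost = np.mean((pred - real) ** 2)
print('cost :', cost)  # 0.8
```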
    
    print('-----------')
    # Visualize how the cost function (cost, loss) changes as the weight (W) changes
    import tensorflow as tf
    import matplotlib.pyplot as plt
    
    x = [1,2,3,4,5]
    y = [1,2,3,4,5]
    b = 0
    
    # hypothesis = x * w + b
    # cost = tf.reduce_sum(tf.pow(hypothesis - y, 2)) / len(x) # reduce_sum computes the sum and drops a dimension
    # cost function = mean of the squared differences between predicted and actual values
    
    w_val = []
    cost_val = []
    
    for i in range(-30, 50):
        feed_w = i * 0.1 # 0.1 is the step size for sweeping w from -3.0 to 4.9 (not a learning rate)
        # print(feed_w)
        hypothesis = tf.multiply(feed_w, x) + b # predicted values
        cost = tf.reduce_mean(tf.square(hypothesis - y))
        cost_val.append(cost)
        w_val.append(feed_w)
        print(str(i) + ' ' + ', cost :' + str(cost.numpy()) + ', weight :' + str(feed_w))
    
    plt.plot(w_val, cost_val)
    plt.xlabel('w')
    plt.ylabel('cost')
    plt.show()
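    The plot traces a parabola whose minimum sits at w = 1. The optimizer functions mentioned at the start (SGD and friends) find that minimum automatically by following the slope downhill; as a rough sketch of the idea in plain Python (the gradient formula below is derived by hand and is not part of the original post):

```python
# Gradient descent by hand: d(cost)/dw = 2 * mean(x * (w*x - y))
x = [1, 2, 3, 4, 5]
y = [1, 2, 3, 4, 5]
w = -3.0   # start at the left end of the sweep above
lr = 0.01  # learning rate (step size of each update)

for step in range(200):
    grad = 2 * sum(xi * (w * xi - yi) for xi, yi in zip(x, y)) / len(x)
    w -= lr * grad  # move w a small step against the gradient

print('w :', round(w, 4))  # converges to 1.0, the bottom of the parabola
```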
    
    
    
    <console>
    1.0
    17.0
    18.0
    22.0
    86.0
    cost : 17.2
    
    1.0
    2.0
    3.0
    4.0
    4.0
    cost : 0.8
    -----------
    2022-11-29 16:23:57.775615: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    -30 , cost :176.0, weight :-3.0
    -29 , cost :167.31001, weight :-2.9000000000000004
    -28 , cost :158.84, weight :-2.8000000000000003
    -27 , cost :150.59, weight :-2.7
    -26 , cost :142.55998, weight :-2.6
    -25 , cost :134.75, weight :-2.5
    -24 , cost :127.16001, weight :-2.4000000000000004
    -23 , cost :119.78999, weight :-2.3000000000000003
    -22 , cost :112.64, weight :-2.2
    -21 , cost :105.71, weight :-2.1
    -20 , cost :99.0, weight :-2.0
    -19 , cost :92.51, weight :-1.9000000000000001
    -18 , cost :86.24, weight :-1.8
    -17 , cost :80.19, weight :-1.7000000000000002
    -16 , cost :74.36, weight :-1.6
    -15 , cost :68.75, weight :-1.5
    -14 , cost :63.359997, weight :-1.4000000000000001
    -13 , cost :58.190002, weight :-1.3
    -12 , cost :53.24, weight :-1.2000000000000002
    -11 , cost :48.51, weight :-1.1
    -10 , cost :44.0, weight :-1.0
    -9 , cost :39.71, weight :-0.9
    -8 , cost :35.64, weight :-0.8
    -7 , cost :31.789999, weight :-0.7000000000000001
    -6 , cost :28.16, weight :-0.6000000000000001
    -5 , cost :24.75, weight :-0.5
    -4 , cost :21.56, weight :-0.4
    -3 , cost :18.59, weight :-0.30000000000000004
    -2 , cost :15.839999, weight :-0.2
    -1 , cost :13.31, weight :-0.1
    0 , cost :11.0, weight :0.0
    1 , cost :8.91, weight :0.1
    2 , cost :7.04, weight :0.2
    3 , cost :5.39, weight :0.30000000000000004
    4 , cost :3.9599998, weight :0.4
    5 , cost :2.75, weight :0.5
    6 , cost :1.7599999, weight :0.6000000000000001
    7 , cost :0.99000007, weight :0.7000000000000001
    8 , cost :0.43999997, weight :0.8
    9 , cost :0.11000004, weight :0.9
    10 , cost :0.0, weight :1.0
    11 , cost :0.11000004, weight :1.1
    12 , cost :0.44000012, weight :1.2000000000000002
    13 , cost :0.9899999, weight :1.3
    14 , cost :1.7599999, weight :1.4000000000000001
    15 , cost :2.75, weight :1.5
    16 , cost :3.9600003, weight :1.6
    17 , cost :5.3900003, weight :1.7000000000000002
    18 , cost :7.0399995, weight :1.8
    19 , cost :8.909999, weight :1.9000000000000001
    20 , cost :11.0, weight :2.0
    21 , cost :13.309999, weight :2.1
    22 , cost :15.840001, weight :2.2
    23 , cost :18.59, weight :2.3000000000000003
    24 , cost :21.560001, weight :2.4000000000000004
    25 , cost :24.75, weight :2.5
    26 , cost :28.159998, weight :2.6
    27 , cost :31.790003, weight :2.7
    28 , cost :35.639996, weight :2.8000000000000003
    29 , cost :39.710003, weight :2.9000000000000004
    30 , cost :44.0, weight :3.0
    31 , cost :48.51, weight :3.1
    32 , cost :53.24, weight :3.2
    33 , cost :58.190002, weight :3.3000000000000003
    34 , cost :63.360004, weight :3.4000000000000004
    35 , cost :68.75, weight :3.5
    36 , cost :74.36, weight :3.6
    37 , cost :80.19, weight :3.7
    38 , cost :86.24, weight :3.8000000000000003
    39 , cost :92.51, weight :3.9000000000000004
    40 , cost :99.0, weight :4.0
    41 , cost :105.71, weight :4.1000000000000005
    42 , cost :112.63999, weight :4.2
    43 , cost :119.79, weight :4.3
    44 , cost :127.16001, weight :4.4
    45 , cost :134.75, weight :4.5
    46 , cost :142.55998, weight :4.6000000000000005
    47 , cost :150.59, weight :4.7
    48 , cost :158.84, weight :4.800000000000001
    49 , cost :167.31001, weight :4.9

    Using the principle above, the cost and w (slope) values can be computed.
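    As a cross-check on the sweep (not in the original post): setting the derivative of the cost to zero gives the minimizing weight in closed form, w* = Σxy / Σx² (derived by hand, assuming b = 0 as in the code above):

```python
x = [1, 2, 3, 4, 5]
y = [1, 2, 3, 4, 5]

# Closed-form minimizer of mean((w*x - y)^2) with b = 0: w* = sum(x*y) / sum(x^2)
w_best = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
print('w_best :', w_best)  # 1.0, matching the minimum found by the sweep
```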

     

     

    [Figure] Visualization of cost and w (slope)

     
