1. Mean Squared Error (MSE)
# calculate mean squared error
def mean_squared_error(actual, predicted):
    sum_square_error = 0.0
    for i in range(len(actual)):
        sum_square_error += (actual[i] - predicted[i]) ** 2.0
    mean_square_error = 1.0 / len(actual) * sum_square_error
    return mean_square_error
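As a quick sanity check, the MSE function can be run on a few made-up values (the numbers below are illustrative only); predictions near the targets give a small error, and each deviation is penalized quadratically. The function is restated so the snippet runs on its own.

```python
# calculate mean squared error (same routine as above)
def mean_squared_error(actual, predicted):
    sum_square_error = 0.0
    for i in range(len(actual)):
        sum_square_error += (actual[i] - predicted[i]) ** 2.0
    return sum_square_error / len(actual)

# made-up targets and predictions for illustration
actual = [1.0, 0.0, 1.0, 1.0]
predicted = [0.9, 0.1, 0.8, 0.7]
mse = mean_squared_error(actual, predicted)
print(mse)  # squared errors 0.01 + 0.01 + 0.04 + 0.09, averaged: about 0.0375
```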
2. Binary Cross-Entropy Error (CEE)
from math import log
# calculate binary cross entropy; the 1e-15 term guards against log(0)
def binary_cross_entropy(actual, predicted):
    sum_score = 0.0
    for i in range(len(actual)):
        # both class terms are needed: y*log(p) for the positive class
        # and (1-y)*log(1-p) for the negative class
        sum_score += actual[i] * log(1e-15 + predicted[i]) \
            + (1.0 - actual[i]) * log(1e-15 + 1.0 - predicted[i])
    mean_sum_score = 1.0 / len(actual) * sum_score
    return -mean_sum_score
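A toy example (values made up) shows the behavior. Note that the full binary cross-entropy includes a second term, (1 - y) * log(1 - p), so that confident wrong predictions on 0-labels are also penalized; the snippet below uses that complete form and is self-contained.

```python
from math import log

# binary cross entropy with both class terms; 1e-15 guards against log(0)
def binary_cross_entropy(actual, predicted):
    sum_score = 0.0
    for i in range(len(actual)):
        sum_score += actual[i] * log(1e-15 + predicted[i]) \
            + (1.0 - actual[i]) * log(1e-15 + 1.0 - predicted[i])
    return -sum_score / len(actual)

# made-up binary labels and predicted probabilities
actual = [1, 1, 0, 0]
predicted = [0.9, 0.8, 0.1, 0.2]
bce = binary_cross_entropy(actual, predicted)
print(bce)  # roughly 0.164: -(log 0.9 + log 0.8 + log 0.9 + log 0.8) / 4
```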
3. Categorical Cross-Entropy Error (CEE)
from math import log
# calculate categorical cross entropy; the 1e-15 term guards against log(0)
def categorical_cross_entropy(actual, predicted):
    sum_score = 0.0
    for i in range(len(actual)):
        for j in range(len(actual[i])):
            sum_score += actual[i][j] * log(1e-15 + predicted[i][j])
    mean_sum_score = 1.0 / len(actual) * sum_score
    return -mean_sum_score
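With one-hot encoded targets, only the log-probability assigned to the true class contributes to the sum. A small made-up example (the function is restated so the snippet runs on its own):

```python
from math import log

# categorical cross entropy; 1e-15 guards against log(0)
def categorical_cross_entropy(actual, predicted):
    sum_score = 0.0
    for i in range(len(actual)):
        for j in range(len(actual[i])):
            sum_score += actual[i][j] * log(1e-15 + predicted[i][j])
    return -sum_score / len(actual)

# made-up one-hot targets and predicted class distributions
actual = [[1, 0, 0], [0, 1, 0]]
predicted = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
cce = categorical_cross_entropy(actual, predicted)
print(cce)  # roughly 0.290: -(log 0.7 + log 0.8) / 2
```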
Reference: Loss and Loss Functions for Training Deep Learning Neural Networks (machinelearningmastery.com)
Reference: Triplet Loss (https://soobarkbar.tistory.com/43): a loss function for artificial neural networks that compares a baseline anchor input against positive and negative inputs; the distance between the anchor input and the positive input is minimized..