I am trying to understand how the gradient and Hessian of the log loss function are computed in the xgboost example script https://raw.githubusercontent.com/dmlc/xgboost/master/demo/guide-python/custom_objective.py.

I have simplified the function to take numpy arrays, and generated y_hat and y_true arrays with values like those used in the script. Here is a simplified example:
import numpy as np

def loglikelihoodloss(y_hat, y_true):
    prob = 1.0 / (1.0 + np.exp(-y_hat))  # predicted probability via the sigmoid
    grad = prob - y_true                 # "gradient" returned to xgboost
    hess = prob * (1.0 - prob)           # "hessian" returned to xgboost
    return grad, hess
y_hat = np.array([1.80087972, -1.82414818, -1.82414818, 1.80087972, -2.08465433,
-1.82414818, -1.82414818, 1.80087972, -1.82414818, -1.82414818])
y_true = np.array([1., 0., 0., 1., 0., 0., 0., 1., 0., 0.])
loglikelihoodloss(y_hat, y_true)
The log loss function is the sum of $-\left[y_i \log(p_i) + (1 - y_i)\log(1 - p_i)\right]$, where $p_i = \frac{1}{1 + e^{-\hat{y}_i}}$.

The gradient (with respect to $p$) is then $\frac{p - y}{p(1 - p)}$, however in the code it is $p - y$ (i.e. `prob - y_true`).

Likewise the second derivative (with respect to $p$) is $\frac{p^2 - 2py + y}{p^2(1 - p)^2}$, however in the code it is $p(1 - p)$ (i.e. `prob * (1.0 - prob)`).
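(Spelling out my derivation with respect to $p$, where the per-example loss is $L = -\left[y \log p + (1 - y)\log(1 - p)\right]$, in case the error is there:)

$$\frac{\partial L}{\partial p} = -\frac{y}{p} + \frac{1 - y}{1 - p} = \frac{p - y}{p(1 - p)}, \qquad \frac{\partial^2 L}{\partial p^2} = \frac{y}{p^2} + \frac{1 - y}{(1 - p)^2} = \frac{p^2 - 2py + y}{p^2(1 - p)^2}.$$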
How are these equations equal?
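Update: as a numerical sanity check (continuing from the snippet above; the logloss helper below is mine, not part of the xgboost script), central finite differences of the loss taken with respect to y_hat, rather than p, do reproduce what the code returns:

def logloss(y_hat, y_true):
    # the loss itself as a function of the raw score y_hat
    # (helper I added; not part of the xgboost script)
    prob = 1.0 / (1.0 + np.exp(-y_hat))
    return -(y_true * np.log(prob) + (1.0 - y_true) * np.log(1.0 - prob))

eps = 1e-4
# central differences in y_hat for the first and second derivatives
num_grad = (logloss(y_hat + eps, y_true) - logloss(y_hat - eps, y_true)) / (2 * eps)
num_hess = (logloss(y_hat + eps, y_true) - 2 * logloss(y_hat, y_true)
            + logloss(y_hat - eps, y_true)) / eps ** 2

grad, hess = loglikelihoodloss(y_hat, y_true)
print(np.allclose(grad, num_grad))  # True
print(np.allclose(hess, num_hess))  # True

So the code's grad and hess appear to be derivatives with respect to y_hat rather than p, but I don't see how that squares with my derivation above.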