How are the gradient and Hessian of the log loss computed in the custom objective function example script in xgboost's GitHub repository?

2024-03-19

I would like to understand how the gradient and Hessian of the log loss function are computed in the xgboost example script https://raw.githubusercontent.com/dmlc/xgboost/master/demo/guide-python/custom_objective.py.

I have simplified the function to take numpy arrays, and generated y_hat and y_true as a sample of the values used in the script.

Here is a simplified example:

import numpy as np


def loglikelihoodloss(y_hat, y_true):
    # transform the raw scores into probabilities with the sigmoid function
    prob = 1.0 / (1.0 + np.exp(-y_hat))
    # gradient and hessian as computed in the demo script
    grad = prob - y_true
    hess = prob * (1.0 - prob)
    return grad, hess

y_hat = np.array([1.80087972, -1.82414818, -1.82414818,  1.80087972, -2.08465433,
                  -1.82414818, -1.82414818,  1.80087972, -1.82414818, -1.82414818])
y_true = np.array([1.,  0.,  0.,  1.,  0.,  0.,  0.,  1.,  0.,  0.])

loglikelihoodloss(y_hat, y_true)

The log loss function is the sum of $-\left[\,y_i \log(p_i) + (1 - y_i)\log(1 - p_i)\,\right]$ where $p_i = \frac{1}{1 + e^{-\hat{y}_i}}$.

The gradient (with respect to $p$) is then $-\frac{y}{p} + \frac{1 - y}{1 - p}$, however in the code it is $p - y$.

Likewise the second derivative (with respect to $p$) is $\frac{y}{p^2} + \frac{1 - y}{(1 - p)^2}$, however in the code it is $p\,(1 - p)$.

How are these equations equivalent?


The log loss function is given as:

$$\ell = -\left[\,y \log(p) + (1 - y)\log(1 - p)\,\right]$$

where

$$p = \frac{1}{1 + e^{-\hat{y}}}$$

The key point is that xgboost expects the derivatives with respect to the raw score $\hat{y}$, not with respect to the probability $p$. Taking the partial derivative with the chain rule, and using $\frac{\partial p}{\partial \hat{y}} = p\,(1 - p)$ for the sigmoid, we get the gradient as

$$\frac{\partial \ell}{\partial \hat{y}} = \frac{\partial \ell}{\partial p} \cdot \frac{\partial p}{\partial \hat{y}} = \left(-\frac{y}{p} + \frac{1 - y}{1 - p}\right) p\,(1 - p) = p - y$$

The sigmoid factor cancels the denominators, so the gradient reduces to $p - y$, matching grad = prob - y_true in the code.
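As a quick sanity check (not part of the original script), the chain-rule result can be verified numerically with a central finite difference; the helper name log_loss and the step size eps below are illustrative choices:

import numpy as np

def log_loss(y_hat, y_true):
    # Bernoulli negative log-likelihood, parameterised by the raw score y_hat
    p = 1.0 / (1.0 + np.exp(-y_hat))
    return -(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))

y_hat, y_true, eps = 1.80087972, 1.0, 1e-6

# central finite difference of the loss with respect to the raw score
num_grad = (log_loss(y_hat + eps, y_true) - log_loss(y_hat - eps, y_true)) / (2.0 * eps)

p = 1.0 / (1.0 + np.exp(-y_hat))
print(num_grad, p - y_true)  # the two values agree closely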

A similar calculation can be done to obtain the Hessian:

$$\frac{\partial^2 \ell}{\partial \hat{y}^2} = \frac{\partial}{\partial \hat{y}}\,(p - y) = p\,(1 - p)$$

which matches hess = prob * (1.0 - prob) in the code.
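The Hessian can be checked the same way, by differencing the analytic gradient $p - y$ (again a sketch, with the same illustrative sample point as above):

import numpy as np

def grad(y_hat, y_true):
    # analytic gradient of the log loss with respect to the raw score
    p = 1.0 / (1.0 + np.exp(-y_hat))
    return p - y_true

y_hat, y_true, eps = 1.80087972, 1.0, 1e-6

# central finite difference of the gradient gives the second derivative
num_hess = (grad(y_hat + eps, y_true) - grad(y_hat - eps, y_true)) / (2.0 * eps)

p = 1.0 / (1.0 + np.exp(-y_hat))
print(num_hess, p * (1.0 - p))  # the two values agree closely (roughly 0.1217)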
