Cross-validation in PySpark

2024-03-29

I am training a linear regression model with cross-validation, using the following code:

from pyspark.ml import Pipeline
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import LinearRegression
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

lr = LinearRegression(maxIter=maxIteration)  # maxIteration defined elsewhere
modelEvaluator = RegressionEvaluator()
pipeline = Pipeline(stages=[lr])
paramGrid = ParamGridBuilder().addGrid(lr.regParam, [0.1, 0.01]).addGrid(lr.elasticNetParam, [0, 1]).build()

crossval = CrossValidator(estimator=pipeline,
                          estimatorParamMaps=paramGrid,
                          evaluator=modelEvaluator,
                          numFolds=3)

cvModel = crossval.fit(training)

Now I want to plot the ROC curve. I used the following code, but I get this error:

'LinearRegressionTrainingSummary' object has no attribute 'areaUnderROC'

trainingSummary = cvModel.bestModel.stages[-1].summary
trainingSummary.roc.show()
print("areaUnderROC: " + str(trainingSummary.areaUnderROC))

I would also like to inspect the objective history at each iteration; I know I can get it at the end:

print("numIterations: %d" % trainingSummary.totalIterations)
print("objectiveHistory: %s" % str(trainingSummary.objectiveHistory))

But I want to get it at each iteration; how can I do that?

Also, I want to evaluate the model on the test data; how can I do that?

prediction = cvModel.transform(test)

I know that for the training dataset I can write:

print("RMSE: %f" % trainingSummary.rootMeanSquaredError)
print("r2: %f" % trainingSummary.r2)

But how can I get these metrics for the test dataset?


1) The area under the ROC curve (AUC) is defined only for binary classification (see https://en.wikipedia.org/wiki/Receiver_operating_characteristic), so you cannot use it for a regression task, as you are trying to do here.
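
For contrast, here is a minimal sketch (with hypothetical toy data) of where areaUnderROC does exist: the training summary of a binary LogisticRegression:

from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

# hypothetical toy binary-classification data
bin_df = spark.createDataFrame(
        [(Vectors.dense([0.0]), 0.0),
         (Vectors.dense([0.5]), 0.0),
         (Vectors.dense([1.0]), 1.0),
         (Vectors.dense([1.5]), 1.0)] * 10,
        ["features", "label"])

log_model = LogisticRegression(maxIter=5).fit(bin_df)
log_model.summary.roc.show(5)          # the ROC curve as a DataFrame (FPR, TPR)
print(log_model.summary.areaUnderROC)  # AUC is defined for this binary classifier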

2) The objectiveHistory for each iteration is only available when the solver parameter in the regression is l-bfgs (docs: https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.regression.LinearRegressionTrainingSummary.objectiveHistory); here is a toy example:

spark.version
# u'2.1.1'

from pyspark.ml.linalg import Vectors
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import LinearRegression
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

dataset = spark.createDataFrame(
        [(Vectors.dense([0.0]), 0.2),
         (Vectors.dense([0.4]), 1.4),
         (Vectors.dense([0.5]), 1.9),
         (Vectors.dense([0.6]), 0.9),
         (Vectors.dense([1.2]), 1.0)] * 10,
         ["features", "label"])

lr = LinearRegression(maxIter=5, solver="l-bfgs") # solver="l-bfgs" here

modelEvaluator = RegressionEvaluator()
paramGrid = ParamGridBuilder().addGrid(lr.regParam, [0.1, 0.01]).addGrid(lr.elasticNetParam, [0, 1]).build()

crossval = CrossValidator(estimator=lr,
                          estimatorParamMaps=paramGrid,
                          evaluator=modelEvaluator,
                          numFolds=3)

cvModel = crossval.fit(dataset)

trainingSummary = cvModel.bestModel.summary  # estimator was lr itself (not a pipeline), so no .stages[-1] needed

trainingSummary.totalIterations
# 2
trainingSummary.objectiveHistory # one value for each iteration
# [0.49, 0.4511834723904831]
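
As a side note (not part of the original question), the fitted CrossValidatorModel also keeps the average cross-validation metric for each parameter combination, which shows why this particular bestModel was selected:

# average CV metric per param combination; RegressionEvaluator defaults to RMSE
for params, metric in zip(cvModel.getEstimatorParamMaps(), cvModel.avgMetrics):
    print({p.name: v for p, v in params.items()}, metric)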

3) You have already defined a RegressionEvaluator which you can use for evaluating your test set; but, when used without arguments, it defaults to the RMSE metric. Here is a way to define evaluators with different metrics and apply them to your test set (continuing the code above):

test = spark.createDataFrame(
        [(Vectors.dense([0.0]), 0.2),
         (Vectors.dense([0.4]), 1.1),
         (Vectors.dense([0.5]), 0.9),
         (Vectors.dense([0.6]), 1.0)],
        ["features", "label"])

modelEvaluator.evaluate(cvModel.transform(test))  # rmse by default, if not specified
# 0.35384585061028506

eval_rmse = RegressionEvaluator(metricName="rmse")
eval_r2 = RegressionEvaluator(metricName="r2")

eval_rmse.evaluate(cvModel.transform(test)) # same as above
# 0.35384585061028506

eval_r2.evaluate(cvModel.transform(test))
# -0.001655087952929124
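
One small refinement, not in the original answer: the snippets above call cvModel.transform(test) once per evaluator, recomputing the predictions each time. You can cache the transformed DataFrame and loop over the metric names instead:

preds = cvModel.transform(test).cache()  # compute the predictions once and reuse them
for metric in ("rmse", "mae", "r2"):
    print(metric, RegressionEvaluator(metricName=metric).evaluate(preds))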