The previous post covered reading a CSV file and visualizing the data. With those two steps done, we have an intuitive feel for the dataset and the problem. Competitions such as Tianchi and Kaggle provide a labeled dataset plus a dataset on which results must be submitted. Next, we need to split the labeled data for model training and validation.
Splitting the Dataset
- Split the labeled dataset into a training set and a test set, so we can check the generalization ability of the model we finally submit.
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
- Option 1: k-fold cross-validation
Split the data evenly into K folds; each round trains on K-1 folds and validates on the remaining one, and the average metric over the K validation rounds serves as the performance estimate.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
kfold = KFold(n_splits=5, shuffle=True, random_state=7)  # random_state only takes effect when shuffle=True
classify = LogisticRegression()
result = cross_val_score(classify, x_train, y_train, cv=kfold)
- Option 2: leave-one-out cross-validation
With N samples, each round trains on N-1 samples and validates on the remaining one; after N rounds, the average validation metric serves as the performance estimate.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.model_selection import cross_val_score
loocv = LeaveOneOut()
classify = LogisticRegression()
result = cross_val_score(classify, x_train, y_train, cv=loocv)
Data Preprocessing
scikit-learn offers two standard ways to apply a data transformation.
- Method 1: Fit and Multiple Transform
First call fit() to compute the transformation's parameters, then call transform() to preprocess the data. In typical use, fit on the training set and then apply the same transform() to the validation and test sets; never call fit again on them.
- Method 2: Combined Fit-and-Transform
fit and transform are performed in a single fit_transform() call.
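The two styles above can be contrasted on a tiny made-up array; the numbers here are purely illustrative and not from the original post:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

toy_train = np.array([[1.0], [3.0], [5.0]])  # made-up training column
toy_test = np.array([[2.0], [4.0]])          # made-up test column

# Method 1: fit once, then transform any number of datasets
scaler = MinMaxScaler(feature_range=(0, 1)).fit(toy_train)
train_scaled = scaler.transform(toy_train)
test_scaled = scaler.transform(toy_test)  # reuses the training min/max

# Method 2: fit and transform the training set in one call
train_scaled2 = MinMaxScaler(feature_range=(0, 1)).fit_transform(toy_train)

# Both styles give the same training-set result
assert np.allclose(train_scaled, train_scaled2)
```

Note that the test values land at 0.25 and 0.75 because the scaler reuses the training min (1.0) and max (5.0) rather than refitting on the test data.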
The main preprocessing techniques are min-max scaling, standardization, per-sample normalization, and binarization. The ideas behind them are simple; example code follows:
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import Normalizer
from sklearn.preprocessing import Binarizer
# combined fit-and-transform on the training set, transform-only on the test set
MMScaler = MinMaxScaler(feature_range=(0, 1))
new_train_x1 = MMScaler.fit_transform(x_train)
new_test_x1 = MMScaler.transform(x_test)
SScaler = StandardScaler()
new_train_x2 = SScaler.fit_transform(x_train)
new_test_x2 = SScaler.transform(x_test)
NScaler = Normalizer()
new_train_x3 = NScaler.fit_transform(x_train)
new_test_x3 = NScaler.transform(x_test)
BScaler = Binarizer(threshold=0.0)
new_train_x4 = BScaler.fit_transform(x_train)
new_test_x4 = BScaler.transform(x_test)
# equivalent, with separate fit() and transform() calls
MMScaler = MinMaxScaler(feature_range=(0, 1)).fit(x_train)
new_train_x5 = MMScaler.transform(x_train)
new_test_x5 = MMScaler.transform(x_test)
Feature Engineering
Feature engineering here mainly means choosing the best combination of features; the methods involved are univariate selection, RFE, and PCA. Straight to the code!
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import RFE
from sklearn.decomposition import PCA
from sklearn.feature_selection import chi2
from sklearn.linear_model import LogisticRegression
# univariate selection: keep the 5 features with the highest chi-squared scores
SKBest = SelectKBest(score_func=chi2, k=5)
x_train_choose1 = SKBest.fit_transform(new_train_x, y_train)
x_test_choose1 = SKBest.transform(new_test_x)
# recursive feature elimination around a logistic-regression base model
classify = LogisticRegression()
rfe = RFE(classify, n_features_to_select=5)
x_train_choose2 = rfe.fit_transform(new_train_x, y_train)
x_test_choose2 = rfe.transform(new_test_x)
# PCA: project onto the top 5 principal components (unsupervised, so no y)
pca = PCA(n_components=5)
x_train_choose3 = pca.fit_transform(new_train_x)
x_test_choose3 = pca.transform(new_test_x)
In my own hands-on experience, sensibly selecting and linearly combining features can yield noticeably better accuracy. Feature engineering works wonders!
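One safe way to wire a selection step into training is a Pipeline, which refits every step only on the training folds of each cross-validation split and so avoids leakage. The iris dataset, k=2, and all other parameters below are my own illustrative choices, not from the original post:

```python
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, KFold

x, y = load_iris(return_X_y=True)

# scaling first: chi2 requires non-negative inputs, which MinMaxScaler guarantees
pipe = Pipeline([
    ("scale", MinMaxScaler()),
    ("select", SelectKBest(score_func=chi2, k=2)),
    ("clf", LogisticRegression(max_iter=1000)),
])

kfold = KFold(n_splits=5, shuffle=True, random_state=7)
scores = cross_val_score(pipe, x, y, cv=kfold)
print(scores.mean())  # mean accuracy across the 5 folds
```

Passing the whole Pipeline to cross_val_score, instead of pre-selecting features on the full dataset, is what keeps the validation estimate honest.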
The next post will cover classification/regression models and methods for evaluating their results.