How to make a Keras neural network outperform logistic regression on the iris data

2024-03-31

I am comparing a Keras neural network against a simple logistic regression from scikit-learn (http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html) on the iris data. I expected the Keras NN to perform better, as suggested by this post: http://blog.fastforwardlabs.com/post/139921712388/hello-world-in-keras-or-scikit-learn-versus.

But after mimicking the code there, why is the Keras NN's result worse than logistic regression's?

import seaborn as sns
import numpy as np
from sklearn.cross_validation import train_test_split  # moved to sklearn.model_selection in newer scikit-learn
from sklearn.linear_model import LogisticRegressionCV
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.utils import np_utils

# Prepare data
iris = sns.load_dataset("iris")
X = iris.values[:, 0:4]
y = iris.values[:, 4]

# Make test and train set
train_X, test_X, train_y, test_y = train_test_split(X, y, train_size=0.5, random_state=0)

################################
# Evaluate Logistic Regression
################################
lr = LogisticRegressionCV()
lr.fit(train_X, train_y)
pred_y = lr.predict(test_X)
print("Test fraction correct (LR-Accuracy) = {:.2f}".format(lr.score(test_X, test_y)))



################################
# Evaluate Keras Neural Network
################################

# Make ONE-HOT
def one_hot_encode_object_array(arr):
    '''One hot encode a numpy array of objects (e.g. strings)'''
    uniques, ids = np.unique(arr, return_inverse=True)
    return np_utils.to_categorical(ids, len(uniques))


train_y_ohe = one_hot_encode_object_array(train_y)
test_y_ohe = one_hot_encode_object_array(test_y)
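As an aside, the one-hot helper above does not actually need Keras. A minimal pure-NumPy sketch of the same idea (the `one_hot_numpy` name and the example labels are illustrative, not from the original code):

```python
import numpy as np

def one_hot_numpy(arr):
    """One-hot encode an array of labels using only NumPy.

    np.unique with return_inverse=True maps each label to an integer
    index into the sorted unique values; indexing an identity matrix
    with those indices yields the one-hot rows.
    """
    uniques, ids = np.unique(arr, return_inverse=True)
    return np.eye(len(uniques))[ids]

labels = np.array(["setosa", "versicolor", "setosa", "virginica"])
print(one_hot_numpy(labels))
```

Because `np.unique` sorts the labels, "setosa" maps to column 0, "versicolor" to 1, and "virginica" to 2, matching the column order `np_utils.to_categorical` produces for the same `ids`.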

model = Sequential()
model.add(Dense(16, input_shape=(4,)))
model.add(Activation('sigmoid'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')

# Actual modelling
model.fit(train_X, train_y_ohe, verbose=0, batch_size=1)
score, accuracy = model.evaluate(test_X, test_y_ohe, batch_size=16, verbose=0)
print("Test fraction correct (NN-Score) = {:.2f}".format(score))
print("Test fraction correct (NN-Accuracy) = {:.2f}".format(accuracy))

I am using this version of Keras:

In [2]: keras.__version__
Out[2]: '1.0.1'

The results show:

Test fraction correct (LR-Accuracy) = 0.83
Test fraction correct (NN-Score) = 0.75
Test fraction correct (NN-Accuracy) = 0.60

According to that post (http://blog.fastforwardlabs.com/post/139921712388/hello-world-in-keras-or-scikit-learn-versus), the Keras accuracy should be 0.99. What went wrong?


The default number of epochs was cut from 100 in Keras 0.x to 10 in Keras 1, which was released just this month (April 2016). Try:

model.fit(train_X, train_y_ohe, verbose=0, batch_size=1, nb_epoch=100)
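To see why the epoch count alone can account for the gap, here is a hedged, pure-NumPy illustration: a tiny softmax classifier trained by full-batch gradient descent on synthetic blob data (not the iris set; the `train` helper and all names are mine, not from Keras). Training loss after 10 epochs is still far from where it lands after 100.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-class data: two Gaussian blobs, an illustrative stand-in for iris.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
Y = np.eye(2)[y]  # one-hot targets

def train(n_epochs, lr=0.1):
    """Fit softmax regression by gradient descent; return final mean cross-entropy."""
    W = np.zeros((2, 2))
    b = np.zeros(2)
    for _ in range(n_epochs):
        logits = X @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - Y) / len(X)      # gradient of softmax cross-entropy w.r.t. logits
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return -np.mean(np.log(p[np.arange(len(y)), y]))

loss_10, loss_100 = train(10), train(100)
print(loss_10, loss_100)
```

The same effect applies to the Keras model above: with only 10 epochs of SGD-style updates it is still under-trained, which is why restoring `nb_epoch=100` recovers the accuracy reported in the blog post.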