The model is not converging, and the problem seems to be that you are applying a sigmoid activation and then feeding the result directly into tf.nn.softmax_cross_entropy_with_logits. The documentation for tf.nn.softmax_cross_entropy_with_logits says:
WARNING: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.
Therefore, the output of the preceding layer should not be passed through softmax, sigmoid, relu, tanh, or any other activation before being fed into tf.nn.softmax_cross_entropy_with_logits.
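As a quick illustration (the numbers here are made up for the example), scoring the same labels against raw logits vs. sigmoid-squashed logits gives different losses, because the op applies the softmax itself and expects unscaled inputs:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, -1.0]])
labels = tf.constant([[1.0, 0.0, 0.0]])

# Correct: pass raw logits, the op applies softmax internally
print(tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits).numpy())

# Incorrect: squashing the logits first changes the result
print(tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=tf.nn.sigmoid(logits)).numpy())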
For a more in-depth description of when to use a sigmoid or softmax output activation, see here.
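The short version: softmax is for mutually exclusive classes (the probabilities in each row sum to 1), while sigmoid treats every output as an independent yes/no decision and pairs with tf.nn.sigmoid_cross_entropy_with_logits for multi-label problems. A minimal sketch with made-up logits:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, -1.0]])

# Multi-class: one true class per example, probabilities sum to 1
print(tf.nn.softmax(logits).numpy())

# Multi-label: each class is an independent binary decision
print(tf.nn.sigmoid(logits).numpy())

# Both matching loss ops take raw logits:
onehot = tf.constant([[1.0, 0.0, 0.0]])
multilabel = tf.constant([[1.0, 1.0, 0.0]])
print(tf.nn.softmax_cross_entropy_with_logits(labels=onehot, logits=logits).numpy())
print(tf.nn.sigmoid_cross_entropy_with_logits(labels=multilabel, logits=logits).numpy())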
So by replacing return tf.nn.sigmoid(lr) with just return lr in the logistic_regression function, the model converges.
Below is a working example of your code with the fix above. I also renamed the variable epochs to n_batches, since your training loop actually runs through 1000 batches, not 1000 epochs (and I raised it to 10000, since the results suggested more iterations were needed).
from tensorflow.keras.datasets import fashion_mnist
from sklearn.model_selection import train_test_split
import tensorflow as tf
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train/255., x_test/255.
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.15)
x_train = tf.reshape(x_train, shape=(-1, 784))
x_test = tf.reshape(x_test, shape=(-1, 784))
weights = tf.Variable(tf.random.normal(shape=(784, 10), dtype=tf.float64))
biases = tf.Variable(tf.random.normal(shape=(10,), dtype=tf.float64))
def logistic_regression(x):
    # Return raw logits; the softmax is applied inside the loss op
    lr = tf.add(tf.matmul(x, weights), biases)
    #return tf.nn.sigmoid(lr)
    return lr
def cross_entropy(y_true, y_pred):
    y_true = tf.one_hot(y_true, 10)
    loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
    return tf.reduce_mean(loss)
def accuracy(y_true, y_pred):
    y_true = tf.cast(y_true, dtype=tf.int32)
    preds = tf.cast(tf.argmax(y_pred, axis=1), dtype=tf.int32)
    preds = tf.equal(y_true, preds)
    return tf.reduce_mean(tf.cast(preds, dtype=tf.float32))
def grad(x, y):
    with tf.GradientTape() as tape:
        y_pred = logistic_regression(x)
        loss_val = cross_entropy(y, y_pred)
    return tape.gradient(loss_val, [weights, biases])
n_batches = 10000
learning_rate = 0.01
batch_size = 128
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.repeat().shuffle(x_train.shape[0]).batch(batch_size)
optimizer = tf.optimizers.SGD(learning_rate)
for batch_numb, (batch_xs, batch_ys) in enumerate(dataset.take(n_batches), 1):
    gradients = grad(batch_xs, batch_ys)
    optimizer.apply_gradients(zip(gradients, [weights, biases]))

    y_pred = logistic_regression(batch_xs)
    loss = cross_entropy(batch_ys, y_pred)
    acc = accuracy(batch_ys, y_pred)
    print("Batch number: %i, loss: %f, accuracy: %f" % (batch_numb, loss, acc))
(removed printouts)
>> Batch number: 1000, loss: 2.868473, accuracy: 0.546875
(removed printouts)
>> Batch number: 10000, loss: 1.482554, accuracy: 0.718750
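One thing to keep in mind: since logistic_regression now returns raw logits, you have to apply the softmax yourself whenever you want actual class probabilities at inference time. A sketch reusing the names from the code above:

# The loss op handled the softmax during training, so apply it
# explicitly when you want probabilities or hard predictions
test_logits = logistic_regression(x_test)
test_probs = tf.nn.softmax(test_logits)      # per-class probabilities
test_preds = tf.argmax(test_logits, axis=1)  # predicted class ids
print("Test accuracy: %f" % accuracy(y_test, test_logits))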