I am trying to learn TensorFlow and I'm studying this example: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/autoencoder.ipynb
I have some questions about the code below:
for epoch in range(training_epochs):
    # Loop over all batches
    for i in range(total_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        # Run optimization op (backprop) and cost op (to get loss value)
        _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
    # Display logs per epoch step
    if epoch % display_step == 0:
        print("Epoch:", '%04d' % (epoch+1),
              "cost=", "{:.9f}".format(c))
Since mnist is just a dataset, what exactly does mnist.train.next_batch mean? How is dataset.train.next_batch defined?
Thanks!
The mnist object is returned from the read_data_sets() function defined in the tf.contrib.learn module: https://github.com/tensorflow/tensorflow/blob/7c36309c37b04843030664cdc64aca2bb7d6ecaa/tensorflow/contrib/learn/python/learn/datasets/mnist.py#L189
The mnist.train.next_batch(batch_size) method is implemented here: https://github.com/tensorflow/tensorflow/blob/7c36309c37b04843030664cdc64aca2bb7d6ecaa/tensorflow/contrib/learn/python/learn/datasets/mnist.py#L160. It returns a tuple of two arrays, where the first is a batch of batch_size MNIST images, and the second is a batch of batch_size labels corresponding to those images.
The images are returned as a 2-D NumPy array of size [batch_size, 784] (since there are 784 pixels in an MNIST image), and the labels are returned as either a 1-D NumPy array of size [batch_size] (if read_data_sets() was called with one_hot=False) or a 2-D NumPy array of size [batch_size, 10] (if read_data_sets() was called with one_hot=True).
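To make the behaviour concrete, here is a minimal NumPy-only sketch of how a next_batch method of this kind can work (this is an illustration, not the actual tf.contrib.learn implementation): it walks through the data sequentially and reshuffles when an epoch is exhausted, returning a (images, labels) tuple each call.

```python
import numpy as np

class DataSet:
    """Minimal sketch of an in-memory dataset with a next_batch method."""

    def __init__(self, images, labels):
        self._images = images
        self._labels = labels
        self._num_examples = images.shape[0]
        self._index = 0  # position of the next batch within the data

    def next_batch(self, batch_size):
        start = self._index
        self._index += batch_size
        if self._index > self._num_examples:
            # Epoch finished: shuffle the data and start a new pass.
            perm = np.random.permutation(self._num_examples)
            self._images = self._images[perm]
            self._labels = self._labels[perm]
            start = 0
            self._index = batch_size
        end = self._index
        # Tuple of (batch of images, batch of matching labels).
        return self._images[start:end], self._labels[start:end]

# Fake MNIST-like data: 100 flattened 28x28 images with one-hot labels.
train = DataSet(np.zeros((100, 784)), np.zeros((100, 10)))
batch_xs, batch_ys = train.next_batch(32)
print(batch_xs.shape)  # (32, 784)
print(batch_ys.shape)  # (32, 10)
```

Calling next_batch(32) repeatedly yields successive 32-example slices, matching the shapes described above.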