I. Getting Started with TensorFlow
1. Computation graphs:
- Every computation is a node in the computation graph, and the edges between nodes describe the dependencies between computations.
- A new computation graph can be created with tf.Graph; tensors and operations in different graphs are not shared.
- TensorFlow automatically creates a default computation graph; unless a graph is explicitly specified, operations are added to this default graph, as the snippet below shows.
import tensorflow as tf
def graph_demo():
    # Create a new graph; variables in it are isolated from the default graph
    g = tf.Graph()
    with g.as_default():
        v = tf.get_variable("v", shape=[1], initializer=tf.zeros_initializer())
    with tf.Session(graph=g) as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(v))  # [0.]
2. Tensors: a tensor holds three attributes: name, shape, and type. The name is not only a unique identifier; it also records how the tensor was computed (it is the n-th output of some node/computation, written as node:n). As for the type, note that if none is specified, TensorFlow assigns a default type.
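A minimal sketch showing these attributes (the names a, b, and add are illustrative); printing a tensor displays its name in the node:output_index form together with its shape and default float32 type:
import tensorflow as tf
a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
result = tf.add(a, b, name="add")
# Shows name, shape and the default type chosen by TensorFlow
print(result)  # Tensor("add:0", shape=(2,), dtype=float32)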
3. Sessions: a session owns and manages all the resources used while a TensorFlow program runs. Two common usage patterns (both assume the tensor result from the example above):
# Pattern 1: the context manager releases all resources automatically
with tf.Session() as sess:
    print(sess.run(result))
# Pattern 2: InteractiveSession installs itself as the default session,
# so eval() can be called without passing the session explicitly
sess = tf.InteractiveSession()
print(result.eval())
sess.close()
II. A Simple BP Neural Network
The data comes from the Kaggle mushroom dataset, a binary classification problem with 22 categorical features; here the 22 features are expanded into 117 input features (one-hot encoding of the categorical values). The implementation is fairly straightforward, based mainly on a Kaggle kernel and the book 《TensorFlow实战Google深度学习框架》. I started with 6 hidden layers; after reducing this to a single hidden layer the results were still very good (100% accuracy on the test set).
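The preprocessing step is not included in the script below; here is a hypothetical sketch of how input/Result.csv could be produced with pandas (the raw file name mushrooms.csv and the column layout are my assumptions):
import pandas as pd
# Hypothetical preprocessing (not part of the original script):
# one-hot encode the label and the 22 categorical feature columns.
raw = pd.read_csv('input/mushrooms.csv')              # assumed raw Kaggle file
labels = pd.get_dummies(raw['class'])                 # 2 one-hot label columns
features = pd.get_dummies(raw.drop('class', axis=1))  # 117 one-hot feature columns
pd.concat([labels, features], axis=1).to_csv('input/Result.csv', index=False)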
import tensorflow as tf
import numpy as np
import pandas as pd
def get_weight(shape, lambdal):
    # np.random.randn draws from the standard normal (mean 0, variance 1)
    # Random initialization
    var = tf.Variable(tf.random_normal(shape, mean=0, stddev=0.1), dtype=tf.float32)
    # Xavier initialization
    # var = tf.Variable(np.random.randn(shape[0], shape[1]) / np.sqrt(shape[0]), dtype=tf.float32)
    # He initialization
    # var = tf.Variable(np.random.randn(shape[0], shape[1]) / np.sqrt(shape[0] / 2), dtype=tf.float32)
    # var = tf.Variable(tf.truncated_normal(shape, stddev=2. / np.sqrt(shape[0])), dtype=tf.float32)
    # Add this weight's L2 regularization term to the 'losses' collection;
    # lambdal is the regularization coefficient
    tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(lambdal)(var))
    return var
data = pd.read_csv('input/Result.csv')
train = data.iloc[:8000, :]
test = data.iloc[8000:, :]
x = tf.placeholder("float", [None, 117])
y_ = tf.placeholder("float", [None, 2])
# number of nodes in each layer
layer_dimension = [117, 128, 2]
n_layers = len(layer_dimension)
cur_layer = x
in_dimension = layer_dimension[0]
# build the network structure layer by layer
for i in range(1, n_layers):
    out_dimension = layer_dimension[i]
    weight = get_weight([in_dimension, out_dimension], 0.001)
    bias = tf.Variable(tf.constant(0.1, shape=[out_dimension]))
    if i < n_layers - 1:
        cur_layer = tf.nn.relu(tf.matmul(cur_layer, weight) + bias)
    else:
        # keep the output layer linear: the *_cross_entropy_with_logits loss
        # below expects raw logits, not softmax/sigmoid outputs
        cur_layer = tf.matmul(cur_layer, weight) + bias
    in_dimension = layer_dimension[i]
y = cur_layer
# loss function
# mse_loss = tf.losses.mean_squared_error(y_, y)
cross_entropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y_, logits=y))
tf.add_to_collection('losses', cross_entropy)
# total loss = cross entropy + all L2 regularization terms in the collection
loss = tf.add_n(tf.get_collection('losses'))
# accuracy: fraction of samples whose predicted class matches the label
correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
train_op = tf.train.AdamOptimizer(0.003).minimize(loss)
TRAINING_STEPS = 101
fig_loss = np.zeros([TRAINING_STEPS])
fig_accuracy = np.zeros([TRAINING_STEPS])
with tf.Session() as sess:
    saver = tf.train.Saver()
    tf.global_variables_initializer().run()
    for i in range(TRAINING_STEPS):
        # sample a mini-batch of 1000 rows from the training set
        train_data = train.sample(1000)
        train_features = train_data.iloc[:, 2:].values
        train_target = train_data.iloc[:, 0:2].values
        sess.run(train_op, feed_dict={x: train_features, y_: train_target})
        fig_loss[i] = sess.run(loss, feed_dict={x: train_features, y_: train_target})
        fig_accuracy[i] = sess.run(accuracy, feed_dict={x: train_features, y_: train_target})
    test_data = test.sample(122)
    test_features = test_data.iloc[:, 2:].values
    test_target = test_data.iloc[:, 0:2].values
    test_acc = sess.run(accuracy, feed_dict={x: test_features, y_: test_target})
    print("test_acc: ", test_acc)
    save_path = saver.save(sess, "./model.ckpt")
np.save('output/RNIn_001_loss.npy', fig_loss)
np.save('output/RNIn_001_acc.npy', fig_accuracy)
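Since the script saves a checkpoint with tf.train.Saver, here is a minimal sketch of restoring it later for evaluation (assuming the graph-construction code above has already been run, so x, y_, accuracy and the test arrays exist):
# Restore the trained weights from ./model.ckpt and re-evaluate on the test sample
with tf.Session() as sess:
    saver = tf.train.Saver()
    saver.restore(sess, "./model.ckpt")
    print(sess.run(accuracy, feed_dict={x: test_features, y_: test_target}))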
III. BP Neural Network Weight Initialization
Weight initialization in deep learning has a significant impact on both convergence speed and final model quality. The hidden layers here use ReLU; the commonly used schemes are Xavier initialization and its ReLU-oriented variant, He initialization.
I ran a few tests on weight initialization. Plain random initialization performed very poorly; the other schemes are all based on the normal distribution.
1. Normal-distribution initialization: I kept the mean at 0 and varied the standard deviation from 0.1 to 1. The smaller the standard deviation (it must not be 0), the faster the loss decreased and the faster the accuracy improved.
var = tf.Variable(tf.random_normal(shape, mean=0, stddev=0.1),dtype=tf.float32)
2. Xavier Initialization
var = tf.Variable(np.random.randn(shape[0], shape[1]) / np.sqrt(shape[0]), dtype=tf.float32)
# truncated normal random numbers
var = tf.Variable(tf.truncated_normal(shape, stddev=2. / np.sqrt(shape[0])), dtype=tf.float32)
3. He Initialization
var = tf.Variable(np.random.randn(shape[0], shape[1]) / np.sqrt(shape[0] / 2), dtype=tf.float32)
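As a quick numerical sanity check (my own illustrative snippet, not from the original experiments): Xavier scales the weights to a standard deviation of about sqrt(1/n_in), He to about sqrt(2/n_in). For the 117-unit input layer used above:
import numpy as np

n_in = 117
w_xavier = np.random.randn(n_in, 128) / np.sqrt(n_in)      # Xavier scaling
w_he = np.random.randn(n_in, 128) / np.sqrt(n_in / 2)      # He scaling
print(w_xavier.std())  # roughly 1/sqrt(117)  ~ 0.092
print(w_he.std())      # roughly sqrt(2/117)  ~ 0.131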
IV. Problems Encountered While Implementing the BP Network in TensorFlow, and Their Solutions
1. Training loss stuck at 0
For a binary classification network with a single output node, the loss cannot be computed with tf.nn.softmax_cross_entropy_with_logits; use tf.nn.sigmoid_cross_entropy_with_logits instead. The softmax of a one-element vector is always 1, so the cross entropy evaluates to 0 for every sample and the loss never moves.
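A minimal sketch reproducing the symptom with a single output node (the constant values are illustrative):
import tensorflow as tf

logits = tf.constant([[2.5], [-1.0]])  # one output node per sample
labels = tf.constant([[1.0], [0.0]])
# softmax over a one-element vector is always 1, so the cross entropy is 0
zero_loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
ok_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
with tf.Session() as sess:
    print(sess.run(zero_loss))  # [0. 0.]
    print(sess.run(ok_loss))    # positive values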
2. Computing accuracy for binary classification: the output layer must have two nodes; otherwise argmax() cannot be used:
correct_prediction = tf.equal(tf.argmax(y_,1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
References:
Weight initialization: https://zhuanlan.zhihu.com/p/25110150
Zero loss: https://blog.csdn.net/qq_34661230/article/details/88313252
Regularization: https://github.com/caicloud/tensorflow-tutorial/blob/master/Deep_Learning_with_TensorFlow/1.4.0/Chapter04/3.%20%E6%AD%A3%E5%88%99%E5%8C%96.ipynb