1. Dataset:
Download: the link below contains Prof. Liu's lecture videos, slides, and the related datasets; the dataset used in this post is diabetes.csv.gz.
Link: https://pan.baidu.com/s/1vZ27gKp8Pl-qICn_p2PaSw
Extraction code: cxe4
Here x1, ..., x8 are the eight input features and y is the class label.
2. Model:
Prof. Liu's video uses the model above; this post changes the linear layers' output features to 4, 2, and 1, and keeps everything else the same.
loss: BCELoss
optimizer: SGD
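Before wiring BCELoss into the training loop, it may help to see what it computes. The sketch below (with arbitrary made-up probabilities and labels, not values from the diabetes data) checks that `BCELoss` with mean reduction matches the binary cross-entropy formula written out by hand:

```python
import torch
from torch.nn import BCELoss

# Arbitrary example probabilities and 0/1 targets (not from the dataset)
y_pred = torch.tensor([0.9, 0.2, 0.7])
y_true = torch.tensor([1.0, 0.0, 1.0])

loss = BCELoss()(y_pred, y_true)  # default reduction is 'mean'

# Mean binary cross-entropy computed by hand for comparison
manual = -(y_true * torch.log(y_pred)
           + (1 - y_true) * torch.log(1 - y_pred)).mean()

print(torch.allclose(loss, manual))  # True
```

Because the model's last layer ends in a sigmoid, its outputs are valid probabilities in (0, 1), which is exactly what `BCELoss` expects.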
3. Python code:
This post uses PyTorch's Dataset and DataLoader classes to load the data in mini-batch style.
import numpy as np
import torch
from torch import nn
from torch.nn import Linear, BCELoss
from torch.optim import SGD
import matplotlib.pyplot as plt
# Dataset and DataLoader
from torch.utils.data import Dataset, DataLoader

# Path to the dataset file
path = "diabetes.csv.gz"

# Dataset class
class DiabetesDataset(Dataset):
    def __init__(self, path):
        # The dataset is small, so load all of it into memory at once
        xy = np.loadtxt(path, delimiter=',', dtype=np.float32)
        self.length = xy.shape[0]
        self.x_data = torch.from_numpy(xy[:, :-1])
        self.y_data = torch.from_numpy(xy[:, [-1]])

    def __getitem__(self, item):
        return self.x_data[item], self.y_data[item]

    def __len__(self):
        return self.length

# Instantiate the dataset and wrap it in a DataLoader
my_dataset = DiabetesDataset(path)
train_loader = DataLoader(my_dataset, batch_size=10, shuffle=True)

# Model: three linear layers, each followed by a sigmoid activation
class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear1 = Linear(8, 4, bias=True)
        self.linear2 = Linear(4, 2, bias=True)
        self.linear3 = Linear(2, 1, bias=True)

    def forward(self, x):
        x = torch.sigmoid(self.linear1(x))
        x = torch.sigmoid(self.linear2(x))
        x = torch.sigmoid(self.linear3(x))
        return x

# Instantiate the model
my_model = Model()
# Binary classification, so we again use BCELoss
# (size_average is deprecated; reduction='mean' is the current equivalent)
loss_cal = BCELoss(reduction='mean')
# Stochastic gradient descent
optimizer = SGD(my_model.parameters(), lr=0.01)

epoch_list = []
loss_list = []
for epoch in range(100000):
    epoch_loss = 0.0
    for i, (x, y) in enumerate(train_loader):
        # Forward pass
        y_pred = my_model(x)
        loss = loss_cal(y_pred, y)
        epoch_loss += loss.item()
        # Zero the gradients
        optimizer.zero_grad()
        # Backward pass
        loss.backward()
        # Update the parameters
        optimizer.step()
    # Record the mean minibatch loss once per epoch,
    # so the curve really is loss versus epoch
    epoch_list.append(epoch)
    loss_list.append(epoch_loss / (i + 1))

# Plot the loss curve over epochs
plt.figure()
plt.plot(epoch_list, loss_list)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.show()
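To see how the DataLoader slices a dataset into mini-batches, here is a small self-contained sketch using synthetic random data as a stand-in for the diabetes file (25 samples with 8 features, so the numbers are easy to check; `TensorDataset` plays the role of the custom Dataset class above):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Synthetic stand-in for the diabetes data: 25 samples, 8 features, 0/1 labels
x = torch.randn(25, 8)
y = torch.randint(0, 2, (25, 1)).float()

loader = DataLoader(TensorDataset(x, y), batch_size=10, shuffle=True)

# shuffle=True permutes the samples, but the batch sizes are fixed:
# two full batches of 10 and one final partial batch of 5
batch_sizes = [xb.shape[0] for xb, yb in loader]
print(batch_sizes)  # [10, 10, 5]
```

With `drop_last=True` the trailing partial batch of 5 would be discarded instead.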
import torch
import numpy as np
from torch import nn
from torch.nn import Module

# Toy data: 10 samples with 10 random features each, and random 0/1 labels
x_arr = np.squeeze(np.array([np.random.rand(10, 1) for i in range(10)]), 2)
y_arr = np.array([[np.random.randint(0, 2)] for i in range(10)])
x_list_tensor = torch.from_numpy(x_arr).float()
y_list_tensor = torch.from_numpy(y_arr).float()

class Model(Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = nn.Linear(10, 1)

    def forward(self, x):
        x = self.linear(x)
        x = torch.sigmoid(x)
        return x

my_model = Model()
optimizer = torch.optim.SGD(my_model.parameters(), lr=1e-3,
                            momentum=0.08, weight_decay=0.001)
# size_average is deprecated; reduction='mean' is the current equivalent
criterion = torch.nn.BCELoss(reduction='mean')

for i in range(1000):
    my_model.train()
    # Train sample by sample
    for x, y in zip(x_list_tensor, y_list_tensor):
        y_pred = my_model(x)
        print(x, y, y_pred)
        loss = criterion(y_pred, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

print(my_model.linear.weight.data)
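The trained model outputs probabilities, not class labels. A common convention (an assumption here, not something from the original code) is to threshold the sigmoid output at 0.5 to get 0/1 predictions; the probabilities below are made-up example values:

```python
import torch

# Example sigmoid outputs (arbitrary illustrative values)
probs = torch.tensor([0.83, 0.41, 0.66, 0.09])

# Threshold at 0.5: probability >= 0.5 -> class 1, otherwise class 0
preds = (probs >= 0.5).float()
print(preds)  # tensor([1., 0., 1., 0.])
```

Comparing such predictions against the true labels gives an accuracy figure, which is often a more interpretable progress measure than the raw loss.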
4. Visualization results:
As the number of epochs increases, the loss gradually decreases and converges.
5. The above is a record of my own basic introduction to PyTorch; if there are any mistakes, corrections are very welcome!
6. Some of the figures in the problem-description and theory sections come from Prof. Liu's course slides; this post is also part of the course homework, so the video link is attached here: 《PyTorch深度学习实践》完结合集_哔哩哔哩_bilibili. I hope everyone keeps making progress!