After installing Anaconda, create a PyTorch environment with the conda command:

conda create -n pytorch python=3.8.18

Once created, use these commands to activate/deactivate the environment:
conda activate pytorch
conda deactivate

Install PyTorch inside the activated pytorch environment:

(pytorch) donn@Macc ~ % pip3 install torch torchvision torchaudio

Test whether the installation succeeded:

1. Enter the Python interpreter
2. Run import torch; if no error is raised, the installation succeeded
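
A one-line shell check also works (it simply prints the installed version):

python -c "import torch; print(torch.__version__)"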

Using TensorBoard:

Convenient for logging the inputs and outputs of each stage of the training process:

tensorboard --logdir=logs --port=6007       
# logs is the directory holding the event files; it must match the path defined in the writer
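
As a minimal sketch of the writer side (the logs directory name and the y=2x tag are illustrative choices, not from the original notes):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("logs")  # event files are written to ./logs
for step in range(100):
    writer.add_scalar("y=2x", 2 * step, step)  # tag, scalar value, global step
writer.close()

After it runs, tensorboard --logdir=logs serves the curve in the browser.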

Install OpenCV:

pip install opencv-python

Getting an image of numpy type:

from PIL import Image
import numpy as np

image_path = "dataset/train/ants_image/0013035.jpg"
img = Image.open(image_path)
print(type(img))        # <class 'PIL.JpegImagePlugin.JpegImageFile'>

img_array = np.array(img)
print(type(img_array))  # <class 'numpy.ndarray'>

Structure and usage of transforms:

Essentially a toolbox library that wraps common operations on images.

The transforms.py toolbox:

  • ToTensor
  • Resize
  • ……

The tensor data type:

The tensor data type carries the various attributes a neural network needs (such as gradient information).

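For example, a minimal ToTensor sketch (reusing the ant image path from earlier):

from PIL import Image
from torchvision import transforms

img = Image.open("dataset/train/ants_image/0013035.jpg")
trans_totensor = transforms.ToTensor()  # PIL Image / ndarray -> tensor with values scaled to [0, 1]
img_tensor = trans_totensor(img)
print(img_tensor.shape)  # torch.Size([3, H, W]), channels first
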
How to read image data of Numpy type (via OpenCV):

import cv2
cv_img = cv2.imread(image_path)  # returns a numpy.ndarray in BGR channel order

Example of reading PIL-type image data (a custom Dataset):

from torch.utils.data import Dataset
from PIL import Image
import os


class MyData(Dataset):

    def __init__(self, root_dir, label_dir):
        self.root_dir = root_dir
        self.label_dir = label_dir
        self.path = os.path.join(self.root_dir, self.label_dir)
        self.img_path = os.listdir(self.path)

    def __getitem__(self, idx):
        img_name = self.img_path[idx]
        img_item_path = os.path.join(self.root_dir, self.label_dir, img_name)
        img = Image.open(img_item_path)
        label = self.label_dir
        return img, label

    def __len__(self):
        return len(self.img_path)


root_dir = "dataset/train"
ants_label_dir = "ants_image"
bees_label_dir = "bees_image"
ants_dataset = MyData(root_dir, ants_label_dir)
bees_dataset = MyData(root_dir, bees_label_dir)
train_dataset = ants_dataset + bees_dataset
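
A quick usage check (indexing calls __getitem__, and + on two Datasets yields a concatenated dataset):

img, label = ants_dataset[0]      # first ant image plus its label string ("ants_image")
print(label, len(train_dataset))  # len(train_dataset) == len(ants_dataset) + len(bees_dataset)
img.show()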

Common transforms:

Normalize:

A normalization operation on a tensor image.

# output[channel] = (input[channel] - mean[channel]) / std[channel]
# print(img_tensor[0][0][0])
trans_norm = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
img_norm = trans_norm(img_tensor)
# print(img_norm[0][0][0])
writer.add_image("Normalize", img_norm, 1)
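
For example, with mean 0.5 and std 0.5 per channel, a pixel value of 0.3 becomes (0.3 - 0.5) / 0.5 = -0.4, so inputs in [0, 1] are mapped to [-1, 1].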

Resize:

Scaling; the input must be a PIL Image.

# Resize
print(img.size)
trans_resize = transforms.Resize((512, 512))
img_resize = trans_resize(img)
img_resize = trans_totensor(img_resize)
writer.add_image("Resize", img_resize, 0)
print(img_resize)

Compose:

Chains several transforms into a single one. Note that Resize(256) with a single int scales the shorter edge to 256 while keeping the aspect ratio (proportional scaling); ToTensor then converts the result.

# Compose
trans_resize_2 = transforms.Resize(256)
trans_compose = transforms.Compose([trans_resize_2, trans_totensor])
img_resize_2 = trans_compose(img)
writer.add_image("Resize", img_resize_2, 1)

Usage summary:

  • Read the source code first; pay attention to the input and output data types of each method
    • If the return type is unclear, try print() or inspect the values in a debugger
  • Consult the official documentation often
  • Pay attention to the parameters a method requires

Dataset:

Using the datasets built into torchvision:

import torchvision
from torch.utils.tensorboard import SummaryWriter

# Instantiate the transform pipeline
dataset_transform = torchvision.transforms.Compose([
    # Convert every image to a tensor so it can be displayed in TensorBoard
    torchvision.transforms.ToTensor()
])

train_set = torchvision.datasets.CIFAR10(root="./dataset", train=True, transform=dataset_transform, download=True)
test_set = torchvision.datasets.CIFAR10(root="./dataset", train=False, transform=dataset_transform, download=True)

print(train_set[0])
writer = SummaryWriter("p10")
for i in range(10):
    img, target = test_set[i]
    writer.add_image("test_set", img, i)

writer.close()


# print(test_set.classes)
#
# img, target = test_set[0]
# print(img)
# print(target)
# img.show()

DataLoader:

A data loader reads samples from a dataset: you can set how many to fetch per batch, whether to reshuffle at every epoch, whether to keep the final incomplete batch, and so on (a minimal example follows the parameter list below).

Parameters

  • dataset (Dataset) – dataset from which to load the data.
  • batch_size (int, optional) – how many samples per batch to load (default: 1).
  • shuffle (bool, optional) – set to True to have the data reshuffled at every epoch (default: False).
  • sampler (Sampler or Iterable, optional) – defines the strategy to draw samples from the dataset. Can be any Iterable with __len__ implemented. If specified, shuffle must not be specified.
  • batch_sampler (Sampler or Iterable, optional) – like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last.
  • num_workers (int, optional) – how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)
  • collate_fn (Callable, optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset.
  • pin_memory (bool, optional) – If True, the data loader will copy Tensors into device/CUDA pinned memory before returning them. If your data elements are a custom type, or your collate_fn returns a batch that is a custom type, see the example below.
  • drop_last (bool, optional) – set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. If False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. (default: False)
  • timeout (numeric, optional) – if positive, the timeout value for collecting a batch from workers. Should always be non-negative. (default: 0)
  • worker_init_fn (Callable, optional) – If not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading. (default: None)
  • multiprocessing_context (str or multiprocessing.context.BaseContext, optional) – If None, the default multiprocessing context of your operating system will be used. (default: None)
  • generator (torch.Generator, optional) – If not None, this RNG will be used by RandomSampler to generate random indexes and multiprocessing to generate base_seed for workers. (default: None)
  • prefetch_factor (int, optional, keyword-only arg) – Number of batches loaded in advance by each worker. 2 means there will be a total of 2 * num_workers batches prefetched across all workers. (default value depends on the set value for num_workers. If value of num_workers=0 default is None. Otherwise, if value of num_workers > 0 default is 2).
  • persistent_workers (bool, optional) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This allows to maintain the workers Dataset instances alive. (default: False)
  • pin_memory_device (str, optional) – the device to pin_memory to if pin_memory is True.
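
A minimal sketch of the common parameters, assuming the CIFAR10 test set downloaded earlier:

import torchvision
from torch.utils.data import DataLoader

test_data = torchvision.datasets.CIFAR10("./dataset", train=False,
                                         transform=torchvision.transforms.ToTensor(), download=True)
# batch_size=64: 64 samples per fetch; shuffle=True: reshuffle every epoch;
# drop_last=False: keep the final incomplete batch
test_loader = DataLoader(dataset=test_data, batch_size=64, shuffle=True, num_workers=0, drop_last=False)

for imgs, targets in test_loader:
    print(imgs.shape)     # torch.Size([64, 3, 32, 32])
    print(targets.shape)  # torch.Size([64])
    break

The fuller demo below feeds CIFAR10 batches through a convolution layer and logs inputs and outputs to TensorBoard:
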
import torch
import torchvision
from torch.nn import Conv2d
from torch.utils.data import DataLoader
from torch import nn
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=64)


class Don(nn.Module):
    def __init__(self):
        super(Don, self).__init__()
        # Instantiate a convolution layer
        self.conv1 = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)

    def forward(self, x):
        x = self.conv1(x)
        return x


don = Don()
# print(don)
writer = SummaryWriter("logs")

step = 0
for data in dataloader:
    imgs, targets = data
    output = don(imgs)
    # torch.Size([64, 3, 32, 32])
    writer.add_images("input", imgs, step)
    # torch.Size([64, 6, 30, 30]) -> [xxx, 3, 30, 30]
    # 6 channels cannot be displayed as an image, so reshape first
    output = torch.reshape(output, (-1, 3, 30, 30))
    writer.add_images("output", output, step)
    # print(imgs.shape)
    # print(output.shape)
    step += 1
writer.close()

The basic skeleton of a neural network:

Convolution layers:

CONV2D:

  • CLASS: torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)

    • in_channels (int) – Number of channels in the input image
    • out_channels (int) – Number of channels produced by the convolution (i.e. the number of convolution kernels applied)
    • kernel_size (int or tuple) – Size of the convolving kernel
    • stride (int or tuple, optional) – Stride of the convolution. Default: 1
    • padding (int, tuple or str, optional) – Padding added to all four sides of the input. Default: 0
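
The output spatial size follows H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1. A small sketch to check it (the 32x32 input matches CIFAR10):

import torch
from torch import nn

conv = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)
x = torch.randn(1, 3, 32, 32)
print(conv(x).shape)  # torch.Size([1, 6, 30, 30]): (32 + 0 - 2 - 1) / 1 + 1 = 30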

Pooling layers:

MAXPOOL2D:

  • CLASS: torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
    • kernel_size (Union[int, Tuple[int, int]]) – the size of the window to take a max over
    • stride (Union[int, Tuple[int, int]]) – the stride of the window. Default value is kernel_size
    • padding (Union[int, Tuple[int, int]]) – Implicit negative infinity padding to be added on both sides
    • dilation (Union[int, Tuple[int, int]]) – a parameter that controls the stride of elements in the window
    • return_indices (bool) – if True, will return the max indices along with the outputs. Useful for torch.nn.MaxUnpool2d later
    • ceil_mode (bool) – when True, will use ceil instead of floor to compute the output shape; this decides whether a partial window at the image edge (when the pooling window overhangs the remaining region) still produces an output value, as the sketch below shows
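
A small sketch of the ceil_mode difference (the 5x5 input values are arbitrary toy numbers):

import torch
from torch import nn

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]], dtype=torch.float32)
input = torch.reshape(input, (1, 1, 5, 5))

pool_ceil = nn.MaxPool2d(kernel_size=3, ceil_mode=True)    # partial edge windows still pool -> 2x2 output
pool_floor = nn.MaxPool2d(kernel_size=3, ceil_mode=False)  # partial windows dropped -> 1x1 output
print(pool_ceil(input).shape)   # torch.Size([1, 1, 2, 2])
print(pool_floor(input).shape)  # torch.Size([1, 1, 1, 1])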

Non-linear activations:

Introduce non-linear characteristics into the neural network.

For any of the non-linear activation functions, the template code below applies.

# Example 1
import torch
from torch import nn
from torch.nn import ReLU

input = torch.tensor([[1, -0.5], [-1, 3]])
input = torch.reshape(input, (-1, 1, 2, 2))
print(input.shape)


class Don(nn.Module):
    def __init__(self):
        super(Don, self).__init__()
        # ReLU clamps every negative value to 0
        self.relu1 = ReLU()

    def forward(self, input):
        output = self.relu1(input)
        return output


don = Don()
output = don(input)
print(output)

# Example 2
import torchvision.datasets
from torch import nn
from torch.nn import ReLU, Sigmoid
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter


dataset = torchvision.datasets.CIFAR10("data", train=False, download=True, transform=torchvision.transforms.ToTensor())

dataloader = DataLoader(dataset, batch_size=64)


class Don(nn.Module):
    def __init__(self):
        super(Don, self).__init__()
        self.relu1 = ReLU()
        self.sigmoid = Sigmoid()

    def forward(self, input):
        output = self.sigmoid(input)
        return output


don = Don()

writer = SummaryWriter("logs_relu")
step = 0
for data in dataloader:
    imgs, targets = data
    writer.add_images("input", imgs, global_step=step)
    output = don(imgs)
    writer.add_images("output", output, step)
    step += 1

writer.close()

Linear layers:

Apply a linear (affine) transformation to the data flowing through the network (a minimal sketch follows).
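
A minimal sketch, flattening a CIFAR10-sized batch and projecting it down to 10 features (the sizes are illustrative):

import torch
from torch import nn

imgs = torch.randn(64, 3, 32, 32)
x = torch.flatten(imgs)                  # shape [196608] == 64 * 3 * 32 * 32
linear = nn.Linear(in_features=196608, out_features=10)
print(linear(x).shape)                   # torch.Size([10])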

Loss functions and backpropagation:

  • Compute the gap between the actual output and the target

  • Provide a basis for updating the weights (backpropagation); a small sketch follows this list
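
A small sketch with nn.L1Loss and nn.MSELoss (the numbers are toy values, not from the original notes):

import torch
from torch import nn

inputs = torch.tensor([1, 2, 3], dtype=torch.float32, requires_grad=True)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

loss_l1 = nn.L1Loss()(inputs, targets)    # mean(|1-1| + |2-2| + |3-5|) = 2/3
loss_mse = nn.MSELoss()(inputs, targets)  # mean(0 + 0 + 4) = 4/3
print(loss_l1, loss_mse)

loss_mse.backward()   # backpropagation fills inputs.grad
print(inputs.grad)    # tensor([ 0.0000,  0.0000, -1.3333]) = 2 * (inputs - targets) / 3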

Optimizers:

import torchvision.datasets
from torch import nn, optim
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)

dataloader = DataLoader(dataset, batch_size=1)


class Don(nn.Module):
    def __init__(self):
        super(Don, self).__init__()
        # Convolutional part
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x


loss = nn.CrossEntropyLoss()
don = Don()
# Optimizer (named optimizer so it does not shadow the torch.optim module)
optimizer = optim.SGD(don.parameters(), lr=0.01)

for epoch in range(20):
    running_loss = 0.0
    for data in dataloader:
        imgs, targets = data
        outputs = don(imgs)
        # Compute the gap between the actual output and the target
        result_loss = loss(outputs, targets)
        # Clear the gradients from the previous step
        optimizer.zero_grad()
        # Compute this step's gradients
        result_loss.backward()
        # Adjust the parameters according to the gradients
        optimizer.step()
        running_loss = running_loss + result_loss.item()  # .item() avoids accumulating tensors
    print(running_loss)

Using and modifying existing network models:

import torchvision
from torch import nn

# Use the VGG16 network model
vgg16_true = torchvision.models.vgg16(pretrained=True)

train_data = torchvision.datasets.CIFAR10("data", train=True, transform=torchvision.transforms.ToTensor(),
                                          download=True)

# Modify the model's structure
vgg16_true.add_module('add_linear', nn.Linear(1000, 10))
print(vgg16_true)
vgg16_true.classifier[6] = nn.Linear(4096, 10)
print(vgg16_true)

Saving and loading network models:

Saving:

import torch
import torchvision

vgg16 = torchvision.models.vgg16(pretrained=False)

# Save method 1
# Saves the model structure + the model parameters
torch.save(vgg16, "vgg16_method1.pth")

# Save method 2 (stores vgg16's parameters in a Python dict)
# Saves only the model parameters (officially recommended)
torch.save(vgg16.state_dict(), "vgg16_method2.pth")

Loading:

import torch
import torchvision

# Loading for save method 1: load the whole model
# (note: the model's class definition must be importable when loading)
model = torch.load("vgg16_method1.pth")
print(model)


# Loading for save method 2 (since only the parameter dict was saved,
# the parameters have to be loaded back into a model instance)
vgg16 = torchvision.models.vgg16(pretrained=False)
vgg16.load_state_dict(torch.load("vgg16_method2.pth"))
print(vgg16)

A complete model training and validation routine:

import torch
import torchvision.datasets
from torch.utils.tensorboard import SummaryWriter

from model import *
from torch import nn
from torch.utils.data import DataLoader

# Prepare the datasets; CIFAR10 images are PIL Images and must be converted to tensors
train_data = torchvision.datasets.CIFAR10(root="data", train=True, transform=torchvision.transforms.ToTensor(),
                                          download=True)
test_data = torchvision.datasets.CIFAR10(root="data", train=False, transform=torchvision.transforms.ToTensor(),
                                         download=True)

# Check how many images the training and test sets contain (i.e. get the dataset lengths)
train_data_size = len(train_data)
test_data_size = len(test_data)
# If train_data_size = 10, this prints: Training dataset size: 10
print("Training dataset size: {}".format(train_data_size))  # str.format substitutes the variable into {}
print("Test dataset size: {}".format(test_data_size))

# Load the datasets with DataLoader
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

# Create the network model
tudui = Tudui()

# Create the loss function
loss_fn = nn.CrossEntropyLoss()  # cross entropy suits classification problems

# Define the optimizer
learning_rate = 0.01  # alternative notation: 1e-2, i.e. 1 x 10^(-2) = 0.01
optimizer = torch.optim.SGD(tudui.parameters(), lr=learning_rate)  # SGD: stochastic gradient descent

# Training bookkeeping
total_train_step = 0  # number of training steps so far
total_test_step = 0   # number of test evaluations so far
epoch = 10            # number of training epochs

# Add TensorBoard logging
writer = SummaryWriter("logs_train")

for i in range(epoch):
    print("---------- Epoch {} ----------".format(i + 1))  # i runs from 0 to 9
    # Training phase
    for data in train_dataloader:  # fetch batches from the training dataloader
        imgs, targets = data
        outputs = tudui(imgs)
        loss = loss_fn(outputs, targets)

        # Optimize the model
        optimizer.zero_grad()  # clear the old gradients first
        loss.backward()        # backpropagate to get each parameter's gradient
        optimizer.step()       # update the parameters
        total_train_step += 1
        if total_train_step % 100 == 0:
            print("step: {}, loss: {}".format(total_train_step, loss.item()))
            writer.add_scalar("train_loss", loss.item(), total_train_step)

    # Test phase
    total_test_loss = 0
    total_accuracy = 0
    # Disable gradient tracking
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            outputs = tudui(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss += loss.item()
            # Track the classifier's accuracy
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy += accuracy
    print("Loss on the whole test set: {}".format(total_test_loss))
    print("Accuracy on the whole test set: {}".format(total_accuracy / test_data_size))
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / test_data_size, total_test_step)
    total_test_step += 1
    torch.save(tudui, "tudui_{}.pth".format(i))
    print("Model saved")

writer.close()

# Build the neural network (this lives in model.py)
import torch
from torch import nn


class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model(x)
        return x

Training on the GPU:

  1. Call the ".cuda()" method on the network model, the data, and the loss function to run on the GPU

import torch
import torchvision.datasets
from torch.utils.tensorboard import SummaryWriter

from torch import nn
from torch.utils.data import DataLoader

# Prepare the datasets; CIFAR10 images are PIL Images and must be converted to tensors
train_data = torchvision.datasets.CIFAR10(root="data", train=True, transform=torchvision.transforms.ToTensor(),
                                          download=True)
test_data = torchvision.datasets.CIFAR10(root="data", train=False, transform=torchvision.transforms.ToTensor(),
                                         download=True)

# Check how many images the training and test sets contain
train_data_size = len(train_data)
test_data_size = len(test_data)
print("Training dataset size: {}".format(train_data_size))
print("Test dataset size: {}".format(test_data_size))

# Load the datasets with DataLoader
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)


# Create the network model
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model(x)
        return x


tudui = Tudui()
if torch.cuda.is_available():
    tudui = tudui.cuda()

# Create the loss function
loss_fn = nn.CrossEntropyLoss()
if torch.cuda.is_available():
    loss_fn = loss_fn.cuda()

# Define the optimizer
learning_rate = 0.01
optimizer = torch.optim.SGD(tudui.parameters(), lr=learning_rate)

# Training bookkeeping
total_train_step = 0
total_test_step = 0
epoch = 10

# Add TensorBoard logging
writer = SummaryWriter("logs_train")

for i in range(epoch):
    print("---------- Epoch {} ----------".format(i + 1))
    # Training phase
    for data in train_dataloader:
        imgs, targets = data

        if torch.cuda.is_available():
            imgs = imgs.cuda()
            targets = targets.cuda()

        outputs = tudui(imgs)
        loss = loss_fn(outputs, targets)

        # Optimize the model
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_train_step += 1
        if total_train_step % 100 == 0:
            print("step: {}, loss: {}".format(total_train_step, loss.item()))
            writer.add_scalar("train_loss", loss.item(), total_train_step)

    # Test phase
    total_test_loss = 0
    total_accuracy = 0
    # Disable gradient tracking
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data

            if torch.cuda.is_available():
                imgs = imgs.cuda()
                targets = targets.cuda()

            outputs = tudui(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss += loss.item()
            # Track the classifier's accuracy
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy += accuracy
    print("Loss on the whole test set: {}".format(total_test_loss))
    print("Accuracy on the whole test set: {}".format(total_accuracy / test_data_size))
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / test_data_size, total_test_step)
    total_test_step += 1
    torch.save(tudui, "tudui_{}.pth".format(i))
    print("Model saved")

writer.close()
  2. Alternatively, call .to("device name") on the network model, the data, and the loss function; a minimal sketch follows
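
A minimal sketch of the .to(device) style (the device string picks CUDA when available and falls back to the CPU):

import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)         # modules can be moved in place
loss_fn = nn.CrossEntropyLoss().to(device)
imgs = torch.randn(4, 10).to(device)        # tensors must be reassigned: .to() returns a new tensor
targets = torch.tensor([0, 1, 0, 1]).to(device)
loss = loss_fn(model(imgs), targets)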