PyTorch Tutorial

PyTorch

Posted by Treaseven on December 26, 2024
torch.nn
class torch.nn.Parameter(data, requires_grad=True) requires_grad defaults to True, so the parameter is differentiated during backpropagation

Convolution layer
class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
Pooling layer
class torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
Activation layer
class torch.nn.ReLU(inplace=False)
Normalization layer
class torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
Linear layer
class torch.nn.Linear(in_features, out_features, bias=True)
Dropout layer
class torch.nn.Dropout(p=0.5, inplace=False)
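
A minimal sketch that wires the layers above into a small network; the input size (3x32x32) and layer widths are illustrative assumptions, not from the original post:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1)
        self.bn = nn.BatchNorm2d(16)
        self.relu = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(kernel_size=2)
        self.dropout = nn.Dropout(p=0.5)
        self.fc = nn.Linear(16 * 16 * 16, num_classes)  # assumes 32x32 input

    def forward(self, x):
        x = self.pool(self.relu(self.bn(self.conv(x))))  # -> (N, 16, 16, 16)
        x = self.dropout(torch.flatten(x, 1))            # -> (N, 4096)
        return self.fc(x)

model = SmallCNN()
out = model(torch.randn(8, 3, 32, 32))  # batch of 8 RGB 32x32 images
print(out.shape)  # torch.Size([8, 10])
```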


torch.optim
class torch.optim.Optimizer(params, defaults)
class torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0)
class torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0)
class torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)
class torch.optim.SGD(params, lr=0.01, momentum=0, dampening=0, weight_decay=0, nesterov=False)
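
A minimal sketch of the standard optimizer workflow (zero_grad / backward / step); the model and hyperparameters are arbitrary placeholders:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)

x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
loss = nn.CrossEntropyLoss()(model(x), y)

optimizer.zero_grad()  # clear gradients from the previous step
loss.backward()        # compute gradients
optimizer.step()       # update parameters
```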

torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1) steps the learning rate down by a factor of gamma every step_size epochs during training
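
A sketch of StepLR usage; the model and the 30-epoch loop are illustrative assumptions:

```python
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR

model = nn.Linear(10, 2)
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Multiply the learning rate by gamma=0.1 every step_size=10 epochs.
scheduler = StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(30):
    optimizer.step()   # placeholder for a real training step
    scheduler.step()   # lr: 1e-3 (epochs 0-9), 1e-4 (10-19), 1e-5 (20-29)
```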
nn.utils.clip_grad_norm_ gradient clipping, used to prevent the exploding-gradient problem
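
A sketch of gradient clipping, applied after backward() and before optimizer.step(); max_norm=1.0 is an arbitrary choice:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

# Rescale gradients in place so their total L2 norm is at most max_norm.
total_norm = nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
print(total_norm)  # the gradient norm measured before clipping
```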
torch.repeat_interleave(input, repeats, dim=None): repeats each element of a tensor
torch.scatter_add(input, dim, index, src): accumulates values from src into a target tensor at the given indices; useful for building sparse operations
torch.randperm: returns a random permutation of the integers from 0 to n-1
torch.arange: generates a 1-D tensor of evenly spaced values (an arithmetic sequence)
torch.cumsum: computes the cumulative sum of tensor elements along a given dimension
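
A quick demo of the five tensor ops above on toy inputs:

```python
import torch

# torch.repeat_interleave: repeat each element along a dimension
x = torch.tensor([1, 2, 3])
print(torch.repeat_interleave(x, 2))          # tensor([1, 1, 2, 2, 3, 3])

# torch.scatter_add: accumulate src values into input at positions given by index
out = torch.zeros(3)
idx = torch.tensor([0, 1, 0])
src = torch.tensor([1.0, 2.0, 3.0])
print(torch.scatter_add(out, 0, idx, src))    # tensor([4., 2., 0.])

# torch.randperm: random permutation of 0..n-1
print(torch.randperm(5))                      # e.g. tensor([3, 0, 4, 1, 2])

# torch.arange: evenly spaced values in [start, end)
print(torch.arange(0, 10, 2))                 # tensor([0, 2, 4, 6, 8])

# torch.cumsum: cumulative sum along a dimension
print(torch.cumsum(torch.tensor([1, 2, 3]), dim=0))  # tensor([1, 3, 6])
```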

PAMDataLoader(tmp_set, self.infer_batch_size, self.device, self.use_workload_embedding, fea_norm_vec, fea_norm_vec_buf): a project-specific data loader, not part of the PyTorch API