【pytorch torchvision source code walkthrough series - 5】DenseNet

2023-05-16

PyTorch ships with an extremely important and useful package for computer vision: torchvision. As the name suggests, it is all about CV, and it consists mainly of three subpackages: torchvision.datasets, torchvision.models, and torchvision.transforms.

For details, see the official docs: https://pytorch.org/docs/master/torchvision

The code lives on GitHub: https://github.com/pytorch/vision

torchvision.models contains classic architectures such as AlexNet, DenseNet, Inception, ResNet, SqueezeNet, and VGG, along with pretrained weights, so a single call gives you both the network structure and the pretrained model.

Today we walk through the DenseNet implementation. If you are not familiar with DenseNet, see the paper notes here:

https://blog.csdn.net/sinat_33487968/article/details/83684453

A DenseNet is built from Dense Blocks, and each Dense Block is in turn built from DenseLayers. Below is the _DenseLayer class. Note that every convolution is preceded by batch norm and ReLU, followed by a 1x1 convolution and then a 3x3 convolution. The bn_size parameter is the multiplicative factor for the bottleneck layer; the bottleneck layer is the 1x1 convolution in front of the 3x3 convolution, and it reduces the number of input feature maps (for example, with bn_size=4 and growth_rate=32 the 1x1 conv outputs 4 * 32 = 128 channels). drop_rate is the dropout probability.

Note that the return value is torch.cat([x, new_features], 1): the layer's own input x is stacked together with the newly extracted features along the channel dimension.

class _DenseLayer(nn.Sequential):
    def __init__(self, num_input_features, growth_rate, bn_size, drop_rate):
        super(_DenseLayer, self).__init__()
        self.add_module('norm1', nn.BatchNorm2d(num_input_features)),
        self.add_module('relu1', nn.ReLU(inplace=True)),
        self.add_module('conv1', nn.Conv2d(num_input_features, bn_size *
                        growth_rate, kernel_size=1, stride=1, bias=False)),
        self.add_module('norm2', nn.BatchNorm2d(bn_size * growth_rate)),
        self.add_module('relu2', nn.ReLU(inplace=True)),
        self.add_module('conv2', nn.Conv2d(bn_size * growth_rate, growth_rate,
                        kernel_size=3, stride=1, padding=1, bias=False)),
        self.drop_rate = drop_rate

    def forward(self, x):
        new_features = super(_DenseLayer, self).forward(x)
        if self.drop_rate > 0:
            new_features = F.dropout(new_features, p=self.drop_rate, training=self.training)
        return torch.cat([x, new_features], 1)
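
As a quick sanity check, here is a minimal usage sketch (assuming _DenseLayer from above is in scope; the sizes are illustrative, not from the original post) showing that the output has exactly growth_rate more channels than the input:

import torch

# Illustrative values: 64 input channels, growth rate 32.
layer = _DenseLayer(num_input_features=64, growth_rate=32, bn_size=4, drop_rate=0)
x = torch.randn(1, 64, 56, 56)  # (batch, channels, height, width)
out = layer(x)
print(out.shape)  # torch.Size([1, 96, 56, 56]): 64 input + 32 new channels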

With _DenseLayer in hand, we can use it to build a Dense Block. The parameters: growth_rate (int) is how many new feature maps each layer adds (k in the paper); num_input_features (int) is the number of input feature maps; num_layers (int) is how many DenseLayers the block contains.

The for loop adds num_layers DenseLayers one by one. Note that num_input_features differs for each layer, because after every DenseLayer the number of feature maps grows by growth_rate. Intuitively, each DenseLayer passes both its own input and the features it has just learned on to the next DenseLayer, so the channel count keeps increasing. The figure below does not show this data flow directly, because it depicts the convolutional layers rather than the data itself.

class _DenseBlock(nn.Sequential):
    def __init__(self, num_layers, num_input_features, bn_size, growth_rate, drop_rate):
        super(_DenseBlock, self).__init__()
        for i in range(num_layers):
            layer = _DenseLayer(num_input_features + i * growth_rate, growth_rate, bn_size, drop_rate)
            self.add_module('denselayer%d' % (i + 1), layer)
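
Here is a minimal sketch of the resulting channel growth (same assumptions as above; the values are illustrative): a block with 6 layers, 64 input channels, and growth_rate=32 should output 64 + 6 * 32 = 256 channels.

import torch

block = _DenseBlock(num_layers=6, num_input_features=64,
                    bn_size=4, growth_rate=32, drop_rate=0)
x = torch.randn(1, 64, 56, 56)
print(block(x).shape)  # torch.Size([1, 256, 56, 56]): 64 + 6 * 32 channels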

The structure of a Dense Block is shown in the figure below.

[Figure: Dense Block structure]

Between consecutive Dense Blocks the paper inserts a Transition layer. Its purpose is to keep the model compact by reducing the number of feature maps handed to the next block: num_input_features is compressed down to num_output_features with a single 1x1 convolution, followed by 2x2 average pooling that halves the spatial resolution.

class _Transition(nn.Sequential):
    def __init__(self, num_input_features, num_output_features):
        super(_Transition, self).__init__()
        self.add_module('norm', nn.BatchNorm2d(num_input_features))
        self.add_module('relu', nn.ReLU(inplace=True))
        self.add_module('conv', nn.Conv2d(num_input_features, num_output_features,
                                          kernel_size=1, stride=1, bias=False))
        self.add_module('pool', nn.AvgPool2d(kernel_size=2, stride=2))
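
A hedged check with illustrative numbers: the Transition below should halve both the channel count we request and the spatial resolution.

import torch

trans = _Transition(num_input_features=256, num_output_features=128)
x = torch.randn(1, 256, 56, 56)
print(trans(x).shape)  # torch.Size([1, 128, 28, 28]): 1x1 conv reduces channels, avg pool halves H and W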

Now everything can be assembled into a DenseNet. There are several variants; the paper gives their configurations:

[Figure: DenseNet architecture table from the paper]

All variants share the same stem: a 7x7 convolution followed by 3x3 max pooling, with batch norm and ReLU in between, all written into an OrderedDict. The constructor then loops to build the Dense Blocks and Transition layers; how many DenseLayers each Dense Block contains is determined by block_config, a sequence of 4 integers.

For the Transitions, the code first checks whether the current block is the last one (the last Dense Block needs no Transition; only the blocks before it do), and otherwise appends a Transition whose output has half as many feature maps as its input. The factor of one half (the compression factor theta = 0.5 in the paper) was presumably chosen empirically, but the intuition is that no matter how many Dense Blocks we stack, the feature maps stay compact instead of growing without bound.
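
As a worked example of that bookkeeping for the DenseNet-121 configuration (growth_rate=32, block_config=(6, 12, 24, 16), num_init_features=64), this small sketch reproduces the loop's arithmetic:

num_features = 64
for i, num_layers in enumerate((6, 12, 24, 16)):
    num_features += num_layers * 32      # after the dense block: 256, 512, 1024, 1024
    if i != 3:                           # every block except the last is followed by a transition
        num_features //= 2               # 256 -> 128, 512 -> 256, 1024 -> 512

print(num_features)  # 1024: the width of the final batch norm and the classifier input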

Finally come the usual finishing touches: a batch norm layer and a linear classifier. The constructor then initializes the parameters of every layer; convolution kernels use kaiming_normal_ (see the PyTorch docs for details).

class DenseNet(nn.Module):
    r"""Densenet-BC model class, based on
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_
    Args:
        growth_rate (int) - how many filters to add each layer (`k` in paper)
        block_config (list of 4 ints) - how many layers in each pooling block
        num_init_features (int) - the number of filters to learn in the first convolution layer
        bn_size (int) - multiplicative factor for number of bottle neck layers
          (i.e. bn_size * k features in the bottleneck layer)
        drop_rate (float) - dropout rate after each dense layer
        num_classes (int) - number of classification classes
    """

    def __init__(self, growth_rate=32, block_config=(6, 12, 24, 16),
                 num_init_features=64, bn_size=4, drop_rate=0, num_classes=1000):

        super(DenseNet, self).__init__()

        # First convolution
        self.features = nn.Sequential(OrderedDict([
            ('conv0', nn.Conv2d(3, num_init_features, kernel_size=7, stride=2, padding=3, bias=False)),
            ('norm0', nn.BatchNorm2d(num_init_features)),
            ('relu0', nn.ReLU(inplace=True)),
            ('pool0', nn.MaxPool2d(kernel_size=3, stride=2, padding=1)),
        ]))

        # Each denseblock
        num_features = num_init_features
        for i, num_layers in enumerate(block_config):
            block = _DenseBlock(num_layers=num_layers, num_input_features=num_features,
                                bn_size=bn_size, growth_rate=growth_rate, drop_rate=drop_rate)
            self.features.add_module('denseblock%d' % (i + 1), block)
            num_features = num_features + num_layers * growth_rate
            if i != len(block_config) - 1:
                trans = _Transition(num_input_features=num_features, num_output_features=num_features // 2)
                self.features.add_module('transition%d' % (i + 1), trans)
                num_features = num_features // 2

        # Final batch norm
        self.features.add_module('norm5', nn.BatchNorm2d(num_features))

        # Linear layer
        self.classifier = nn.Linear(num_features, num_classes)

        # Official init from torch repo.
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        features = self.features(x)
        out = F.relu(features, inplace=True)
        out = F.avg_pool2d(out, kernel_size=7, stride=1).view(features.size(0), -1)
        out = self.classifier(out)
        return out
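
A minimal end-to-end sketch (randomly initialized weights, standard ImageNet input size) to confirm the output shape:

import torch

net = DenseNet()  # defaults correspond to the DenseNet-121 configuration
x = torch.randn(1, 3, 224, 224)
print(net(x).shape)  # torch.Size([1, 1000])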

Now let's see how a concrete variant is built, taking densenet121 as the example: block_config=(6, 12, 24, 16) matches the parameters in the paper's table above. pretrained controls whether pretrained weights are loaded; don't worry too much about how the weights were produced, just know that with pretrained=True the function returns a model already trained on ImageNet.

The other DenseNet variants are nearly identical; only block_config differs (and, for densenet161, num_init_features and growth_rate as well), so they are not repeated here.

def densenet121(pretrained=False, **kwargs):
    r"""Densenet-121 model from
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = DenseNet(num_init_features=64, growth_rate=32, block_config=(6, 12, 24, 16),
                     **kwargs)
    if pretrained:
        # '.'s are no longer allowed in module names, but previous _DenseLayer
        # has keys 'norm.1', 'relu.1', 'conv.1', 'norm.2', 'relu.2', 'conv.2'.
        # They are also in the checkpoints in model_urls. This pattern is used
        # to find such keys.
        pattern = re.compile(
            r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')
        state_dict = model_zoo.load_url(model_urls['densenet121'])
        for key in list(state_dict.keys()):
            res = pattern.match(key)
            if res:
                new_key = res.group(1) + res.group(2)
                state_dict[new_key] = state_dict[key]
                del state_dict[key]
        model.load_state_dict(state_dict)
    return model
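
The regex exists only to rename keys from old checkpoints whose submodule names contained dots. A small illustrative sketch of what it does to one such key (the key string here is an example of the old naming mentioned in the comment):

import re

pattern = re.compile(
    r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')
old_key = 'features.denseblock1.denselayer1.norm.1.weight'  # old-style key with a dot
res = pattern.match(old_key)
print(res.group(1) + res.group(2))  # features.denseblock1.denselayer1.norm1.weight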

Finally, here is the complete source:

import re
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.model_zoo as model_zoo
from collections import OrderedDict

__all__ = ['DenseNet', 'densenet121', 'densenet169', 'densenet201', 'densenet161']


model_urls = {
    'densenet121': 'https://download.pytorch.org/models/densenet121-a639ec97.pth',
    'densenet169': 'https://download.pytorch.org/models/densenet169-b2777c0a.pth',
    'densenet201': 'https://download.pytorch.org/models/densenet201-c1103571.pth',
    'densenet161': 'https://download.pytorch.org/models/densenet161-8d451a50.pth',
}


def densenet121(pretrained=False, **kwargs):
    r"""Densenet-121 model from
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = DenseNet(num_init_features=64, growth_rate=32, block_config=(6, 12, 24, 16),
                     **kwargs)
    if pretrained:
        # '.'s are no longer allowed in module names, but previous _DenseLayer
        # has keys 'norm.1', 'relu.1', 'conv.1', 'norm.2', 'relu.2', 'conv.2'.
        # They are also in the checkpoints in model_urls. This pattern is used
        # to find such keys.
        pattern = re.compile(
            r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')
        state_dict = model_zoo.load_url(model_urls['densenet121'])
        for key in list(state_dict.keys()):
            res = pattern.match(key)
            if res:
                new_key = res.group(1) + res.group(2)
                state_dict[new_key] = state_dict[key]
                del state_dict[key]
        model.load_state_dict(state_dict)
    return model


def densenet169(pretrained=False, **kwargs):
    r"""Densenet-169 model from
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = DenseNet(num_init_features=64, growth_rate=32, block_config=(6, 12, 32, 32),
                     **kwargs)
    if pretrained:
        # '.'s are no longer allowed in module names, but previous _DenseLayer
        # has keys 'norm.1', 'relu.1', 'conv.1', 'norm.2', 'relu.2', 'conv.2'.
        # They are also in the checkpoints in model_urls. This pattern is used
        # to find such keys.
        pattern = re.compile(
            r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')
        state_dict = model_zoo.load_url(model_urls['densenet169'])
        for key in list(state_dict.keys()):
            res = pattern.match(key)
            if res:
                new_key = res.group(1) + res.group(2)
                state_dict[new_key] = state_dict[key]
                del state_dict[key]
        model.load_state_dict(state_dict)
    return model


def densenet201(pretrained=False, **kwargs):
    r"""Densenet-201 model from
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = DenseNet(num_init_features=64, growth_rate=32, block_config=(6, 12, 48, 32),
                     **kwargs)
    if pretrained:
        # '.'s are no longer allowed in module names, but previous _DenseLayer
        # has keys 'norm.1', 'relu.1', 'conv.1', 'norm.2', 'relu.2', 'conv.2'.
        # They are also in the checkpoints in model_urls. This pattern is used
        # to find such keys.
        pattern = re.compile(
            r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')
        state_dict = model_zoo.load_url(model_urls['densenet201'])
        for key in list(state_dict.keys()):
            res = pattern.match(key)
            if res:
                new_key = res.group(1) + res.group(2)
                state_dict[new_key] = state_dict[key]
                del state_dict[key]
        model.load_state_dict(state_dict)
    return model


def densenet161(pretrained=False, **kwargs):
    r"""Densenet-161 model from
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = DenseNet(num_init_features=96, growth_rate=48, block_config=(6, 12, 36, 24),
                     **kwargs)
    if pretrained:
        # '.'s are no longer allowed in module names, but previous _DenseLayer
        # has keys 'norm.1', 'relu.1', 'conv.1', 'norm.2', 'relu.2', 'conv.2'.
        # They are also in the checkpoints in model_urls. This pattern is used
        # to find such keys.
        pattern = re.compile(
            r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')
        state_dict = model_zoo.load_url(model_urls['densenet161'])
        for key in list(state_dict.keys()):
            res = pattern.match(key)
            if res:
                new_key = res.group(1) + res.group(2)
                state_dict[new_key] = state_dict[key]
                del state_dict[key]
        model.load_state_dict(state_dict)
    return model


class _DenseLayer(nn.Sequential):
    def __init__(self, num_input_features, growth_rate, bn_size, drop_rate):
        super(_DenseLayer, self).__init__()
        self.add_module('norm1', nn.BatchNorm2d(num_input_features)),
        self.add_module('relu1', nn.ReLU(inplace=True)),
        self.add_module('conv1', nn.Conv2d(num_input_features, bn_size *
                        growth_rate, kernel_size=1, stride=1, bias=False)),
        self.add_module('norm2', nn.BatchNorm2d(bn_size * growth_rate)),
        self.add_module('relu2', nn.ReLU(inplace=True)),
        self.add_module('conv2', nn.Conv2d(bn_size * growth_rate, growth_rate,
                        kernel_size=3, stride=1, padding=1, bias=False)),
        self.drop_rate = drop_rate

    def forward(self, x):
        new_features = super(_DenseLayer, self).forward(x)
        if self.drop_rate > 0:
            new_features = F.dropout(new_features, p=self.drop_rate, training=self.training)
        return torch.cat([x, new_features], 1)


class _DenseBlock(nn.Sequential):
    def __init__(self, num_layers, num_input_features, bn_size, growth_rate, drop_rate):
        super(_DenseBlock, self).__init__()
        for i in range(num_layers):
            layer = _DenseLayer(num_input_features + i * growth_rate, growth_rate, bn_size, drop_rate)
            self.add_module('denselayer%d' % (i + 1), layer)


class _Transition(nn.Sequential):
    def __init__(self, num_input_features, num_output_features):
        super(_Transition, self).__init__()
        self.add_module('norm', nn.BatchNorm2d(num_input_features))
        self.add_module('relu', nn.ReLU(inplace=True))
        self.add_module('conv', nn.Conv2d(num_input_features, num_output_features,
                                          kernel_size=1, stride=1, bias=False))
        self.add_module('pool', nn.AvgPool2d(kernel_size=2, stride=2))


class DenseNet(nn.Module):
    r"""Densenet-BC model class, based on
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_
    Args:
        growth_rate (int) - how many filters to add each layer (`k` in paper)
        block_config (list of 4 ints) - how many layers in each pooling block
        num_init_features (int) - the number of filters to learn in the first convolution layer
        bn_size (int) - multiplicative factor for number of bottle neck layers
          (i.e. bn_size * k features in the bottleneck layer)
        drop_rate (float) - dropout rate after each dense layer
        num_classes (int) - number of classification classes
    """

    def __init__(self, growth_rate=32, block_config=(6, 12, 24, 16),
                 num_init_features=64, bn_size=4, drop_rate=0, num_classes=1000):

        super(DenseNet, self).__init__()

        # First convolution
        self.features = nn.Sequential(OrderedDict([
            ('conv0', nn.Conv2d(3, num_init_features, kernel_size=7, stride=2, padding=3, bias=False)),
            ('norm0', nn.BatchNorm2d(num_init_features)),
            ('relu0', nn.ReLU(inplace=True)),
            ('pool0', nn.MaxPool2d(kernel_size=3, stride=2, padding=1)),
        ]))

        # Each denseblock
        num_features = num_init_features
        for i, num_layers in enumerate(block_config):
            block = _DenseBlock(num_layers=num_layers, num_input_features=num_features,
                                bn_size=bn_size, growth_rate=growth_rate, drop_rate=drop_rate)
            self.features.add_module('denseblock%d' % (i + 1), block)
            num_features = num_features + num_layers * growth_rate
            if i != len(block_config) - 1:
                trans = _Transition(num_input_features=num_features, num_output_features=num_features // 2)
                self.features.add_module('transition%d' % (i + 1), trans)
                num_features = num_features // 2

        # Final batch norm
        self.features.add_module('norm5', nn.BatchNorm2d(num_features))

        # Linear layer
        self.classifier = nn.Linear(num_features, num_classes)

        # Official init from torch repo.
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        features = self.features(x)
        out = F.relu(features, inplace=True)
        out = F.avg_pool2d(out, kernel_size=7, stride=1).view(features.size(0), -1)
        out = self.classifier(out)
        return out

 
