Study Notes on Classic Convolutional Neural Network Papers

2023-10-28

1 Optimization Algorithms and Regularization

1.1 Optimization Algorithms

  • GD (SGD, MBGD, BGD)

  • SGDM (Momentum)

    $V_{d\omega} = \beta V_{d\omega} + (1-\beta)\,d\omega$

    $V_{db} = \beta V_{db} + (1-\beta)\,db$

    $\omega := \omega - \alpha\, V_{d\omega}$

    $b := b - \alpha\, V_{db}$

    (typically $\beta = 0.9$)

  • RMSProp

    $S_{d\omega} = \beta S_{d\omega} + (1-\beta)\,(d\omega)^2$

    $S_{db} = \beta S_{db} + (1-\beta)\,(db)^2$

    $\omega := \omega - \alpha \dfrac{d\omega}{\sqrt{S_{d\omega}} + \epsilon}$

    $b := b - \alpha \dfrac{db}{\sqrt{S_{db}} + \epsilon}$

    ($\epsilon = 10^{-8}$)

  • Adam

    $V_{d\omega} = \beta_1 V_{d\omega} + (1-\beta_1)\,d\omega$

    $V_{db} = \beta_1 V_{db} + (1-\beta_1)\,db$

    $S_{d\omega} = \beta_2 S_{d\omega} + (1-\beta_2)\,(d\omega)^2$

    $S_{db} = \beta_2 S_{db} + (1-\beta_2)\,(db)^2$

    $V^{corrected}_{d\omega} = \dfrac{V_{d\omega}}{1-\beta_1^t}, \qquad V^{corrected}_{db} = \dfrac{V_{db}}{1-\beta_1^t}$

    $S^{corrected}_{d\omega} = \dfrac{S_{d\omega}}{1-\beta_2^t}, \qquad S^{corrected}_{db} = \dfrac{S_{db}}{1-\beta_2^t}$

    $\omega := \omega - \alpha \dfrac{V^{corrected}_{d\omega}}{\sqrt{S^{corrected}_{d\omega}} + \epsilon}$

    $b := b - \alpha \dfrac{V^{corrected}_{db}}{\sqrt{S^{corrected}_{db}} + \epsilon}$

    (typically $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$)
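A minimal NumPy sketch of the Adam update above (momentum and RMSProp are its two halves); the function name, learning rate, and the toy quadratic objective are illustrative choices, not taken from any paper:

    import numpy as np

    def adam_step(w, dw, V, S, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        """One Adam update for a parameter tensor w with gradient dw."""
        V = beta1 * V + (1 - beta1) * dw          # first moment (momentum term)
        S = beta2 * S + (1 - beta2) * dw ** 2     # second moment (RMSProp term)
        V_corr = V / (1 - beta1 ** t)             # bias correction
        S_corr = S / (1 - beta2 ** t)
        w = w - lr * V_corr / (np.sqrt(S_corr) + eps)
        return w, V, S

    # toy usage: minimise f(w) = ||w||^2, whose gradient is dw = 2w
    w = np.array([1.0, -2.0])
    V, S = np.zeros_like(w), np.zeros_like(w)
    for t in range(1, 201):
        w, V, S = adam_step(w, 2 * w, V, S, t, lr=0.05)
    print(w)   # both components close to 0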

1.2 Regularization

  • Batch Norm

    $\mu = \dfrac{1}{m}\sum_i z^{(i)}$

    $\sigma^2 = \dfrac{1}{m}\sum_i \left(z^{(i)} - \mu\right)^2$

    $z^{(i)}_{norm} = \dfrac{z^{(i)} - \mu}{\sqrt{\sigma^2 + \epsilon}}$

    $\tilde{z}^{(i)} = \gamma\, z^{(i)}_{norm} + \beta$

    ($\gamma$ and $\beta$ are learnable parameters used to adjust the mean and variance of the normalized activations; a small sketch of this forward pass appears after the Dropout example below.)

  • Dropout

    import numpy as np

    a3 = np.random.randn(4, 5)        # example: activations of layer 3
    keep_prob = 0.8
    d3 = np.random.rand(a3.shape[0], a3.shape[1]) < keep_prob  # keep ~80% of units
    a3 = np.multiply(a3, d3)          # zero out the dropped units
    a3 /= keep_prob                   # inverted dropout: keep the expected activation unchanged
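As noted under Batch Norm above, here is a small NumPy sketch of the batch-norm forward pass (training-time statistics only; the running averages used at inference are omitted, and all shapes are illustrative):

    import numpy as np

    def batchnorm_forward(z, gamma, beta, eps=1e-8):
        """z: (m, d) pre-activations for a mini-batch of m examples."""
        mu = z.mean(axis=0)                        # per-feature mean
        var = z.var(axis=0)                        # per-feature variance
        z_norm = (z - mu) / np.sqrt(var + eps)     # zero mean, unit variance
        return gamma * z_norm + beta               # learnable rescale and shift

    z = np.random.randn(64, 10) * 3 + 5
    out = batchnorm_forward(z, gamma=np.ones(10), beta=np.zeros(10))
    print(out.mean(axis=0).round(3), out.std(axis=0).round(3))   # ~0 and ~1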

2 Convolutional Neural Network

2.1 Convolution

  • Convolution

    $f^{[l]}$ = filter size, $p^{[l]}$ = padding, $s^{[l]}$ = stride

    Input: $n_H^{[l-1]} \times n_W^{[l-1]} \times n_C^{[l-1]}$ (height $\times$ width $\times$ channels)

    Output: $n_H^{[l]} \times n_W^{[l]} \times n_C^{[l]}$

    $n_H^{[l]} = \left\lfloor \dfrac{n_H^{[l-1]} + 2p^{[l]} - f^{[l]}}{s^{[l]}} + 1 \right\rfloor$

    $n_W^{[l]} = \left\lfloor \dfrac{n_W^{[l-1]} + 2p^{[l]} - f^{[l]}}{s^{[l]}} + 1 \right\rfloor$

If padding = 'VALID', then the output size is $n = \left\lfloor \dfrac{n_W^{[l-1]} - f^{[l]}}{s^{[l]}} + 1 \right\rfloor$.

If padding = 'SAME', then the output size is

$n = \left\lceil \dfrac{n_W^{[l-1]}}{s^{[l]}} \right\rceil$, i.e. $\left\lfloor \dfrac{n_W^{[l-1]}}{s^{[l]}} \right\rfloor + 1$ if $n_W^{[l-1]} \bmod s^{[l]} > 0$, and $\dfrac{n_W^{[l-1]}}{s^{[l]}}$ otherwise.
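A small helper that reproduces these two rules along one spatial dimension; it is purely illustrative and not taken from any particular framework's API:

    import math

    def conv_output_size(n, f, s, padding="VALID"):
        """Output length along one spatial dimension."""
        if padding == "VALID":
            return (n - f) // s + 1      # floor((n - f) / s) + 1
        if padding == "SAME":
            return math.ceil(n / s)      # depends only on n and s
        raise ValueError(padding)

    print(conv_output_size(7, 3, 2, "VALID"))   # 3
    print(conv_output_size(7, 3, 2, "SAME"))    # 4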

Sparse connection & Weight sharing

  • Pooling
    $f$: filter size

    $s$: stride

    $n_H = \left\lfloor \dfrac{n_H - f}{s} + 1 \right\rfloor$

    $n_W = \left\lfloor \dfrac{n_W - f}{s} + 1 \right\rfloor$

  • Fully connected

General development process

(figure omitted)

2.2 Deconvolution

Convolution process:

input: 4×4; filter: 3×3 (s = 1, p = 0); output: (4−3+1) × (4−3+1) = 2×2

If we flatten x into a 16×1 vector and y into a 4×1 vector, the convolution can be written as y = C·x, where C is the 4×16 matrix shown below: (4,16)·(16,1) = (4,1).

(figure omitted: the 4×16 matrix C)
Deconvolution (transposed convolution) process:

input: 2×2; filter: 3×3 (s = 1, padding = 2, i.e. full padding); output: 4×4

If we flatten y to a 4×1 vector and x to a 16×1 vector, then $x = C^T y$: (16,4)·(4,1) = (16,1).
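A hedged NumPy sketch of this matrix view for the 4×4 input / 3×3 filter case: C is built row by row from the filter, the forward pass is y = C·x, and multiplying by C^T maps the 2×2 output back to a 4×4-shaped tensor (it restores the shape, not the original values). The construction below is illustrative:

    import numpy as np

    n, f = 4, 3                                   # 4x4 input, 3x3 filter, stride 1, no padding
    out = n - f + 1                               # output size: 2
    W = np.arange(1.0, f * f + 1).reshape(f, f)   # example filter values

    # Build C of shape (out*out, n*n) so that conv(x, W) == (C @ x.ravel()).reshape(out, out)
    C = np.zeros((out * out, n * n))
    for i in range(out):
        for j in range(out):
            window = np.zeros((n, n))
            window[i:i + f, j:j + f] = W
            C[i * out + j] = window.ravel()

    x = np.random.randn(n, n)
    y = (C @ x.ravel()).reshape(out, out)        # convolution as a matrix multiply: (4,16)x(16,1)
    x_back = (C.T @ y.ravel()).reshape(n, n)     # "deconvolution": (16,4)x(4,1), back to 4x4 shape
    print(y.shape, x_back.shape)                 # (2, 2) (4, 4)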

2.3 Classical convolution networks

2.3.1 AlexNet - ImageNet Classification with Deep Convolutional Neural Networks

Structure:

The net contains eight layers with weights; the first five are convolutional and the remaining three are fully connected. The output of the last fully-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels.


The first convolutional layer filters the 224×224×3 input image with 96 kernels of size 11×11×3 with a stride of 4 pixels. The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5 × 5 × 48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3 × 3 × 256 connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size 3 × 3 × 192, and the fifth convolutional layer has 256 kernels of size 3 × 3 × 192. The fully-connected layers have 4096 neurons each.

ReLU Nonlinearity

Local Response Normalization:

$b^i_{x,y} = a^i_{x,y} \Big/ \left( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \left(a^j_{x,y}\right)^2 \right)^{\beta}$

Data Augmentation:

  • Extracting random 224 × 224 patches (and their horizontal reflections) from the 256 × 256 images and training the network on these extracted patches. This increases the size of the training set by a factor of 2048 (roughly 32 × 32 crop positions × 2 reflections); a rough sketch follows this list.
  • Perform PCA on the set of RGB pixel values throughout the ImageNet training set and add multiples of the principal components to each training image.
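A rough NumPy sketch of the first augmentation (random 224 × 224 crops plus horizontal reflections from a 256 × 256 image); the PCA colour augmentation is not shown:

    import numpy as np

    def random_crop_flip(img, crop=224):
        """img: (256, 256, 3) array; returns one random, possibly mirrored patch."""
        h, w, _ = img.shape
        top = np.random.randint(0, h - crop + 1)
        left = np.random.randint(0, w - crop + 1)
        patch = img[top:top + crop, left:left + crop]
        if np.random.rand() < 0.5:
            patch = patch[:, ::-1]       # horizontal reflection
        return patch

    img = np.random.rand(256, 256, 3)
    print(random_crop_flip(img).shape)   # (224, 224, 3)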

2.3.2 VGG - VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION

ConvNet configurations:


We use very small 3 × 3 receptive fields throughout the whole net, which are convolved with the input at every pixel (with stride 1). It is easy to see that a stack of two 3 × 3 conv layers (without spatial pooling in between) has an effective receptive field of 5 × 5; three such layers have a 7 × 7 effective receptive field. But why make this change? First, three non-linear rectification layers instead of a single one make the decision function more discriminative. Second, the number of parameters decreases: for C-channel inputs and outputs, a three-layer 3 × 3 convolution stack has 3(3²C²) = 27C² parameters, while a single 7 × 7 convolution has 7²C² = 49C².

For example, two stacked 3 × 3 conv layers have an effective receptive field of 5 × 5.

​ Moreover, the incorporation of 1 × 1 conv. layers is a way to increase the nonlinearity of the decision function without affecting the receptive fields of the conv. layers.
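A quick numeric check of the parameter argument, assuming C input and C output channels per layer and ignoring biases (plain arithmetic, not code from the paper):

    C = 64
    stack_three_3x3 = 3 * (3 * 3 * C * C)   # three stacked 3x3 conv layers: 27*C^2
    single_7x7 = 7 * 7 * C * C              # one 7x7 conv layer:            49*C^2
    print(stack_three_3x3, single_7x7)      # 110592 200704  (ratio 27 : 49)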

ConvNet performance at multiple test scales:

(figure omitted)

2.3.3 NIN - Network in Network

MLP Convolution Layers :

The resulting structure, which we call an mlpconv layer, is compared with an ordinary CNN layer in the picture below:

(figure omitted)

The feature maps are obtained by sliding the MLP over the input in a similar manner as in a CNN and are then fed into the next layer. Taking a classic CNN with ReLU activation as an example, the feature map is calculated as:

$f_{i,j,k} = \max\left(w_k^T x_{i,j},\, 0\right)$

​ Maximization over linear functions makes a piecewise linear approximator which is capable of approximating any convex functions. Compared to conventional convolutional layers which perform linear separation, the maxout network is more potent as it can separate concepts that lie within convex sets.


​ The feature maps of maxout layers are calculated as follows:
$f_{i,j,k} = \max_{m} \left( w_{k_m}^T x_{i,j} \right)$

​ We seek to achieve this by introducing the novel “Network In Network” structure, in which a micro network is introduced within each convolutional layer to compute more abstract features for local patches.

(figure omitted)

The calculation performed by the mlpconv layer is shown as follows:

(figure omitted)
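Since the figure is not reproduced here, the following NumPy sketch conveys the idea: an mlpconv layer behaves like an ordinary convolution followed by extra 1 × 1 convolutions, i.e. a small MLP shared across all spatial positions. Layer sizes and weights are illustrative:

    import numpy as np

    def relu(v):
        return np.maximum(v, 0)

    def conv1x1(feat, W, b):
        """feat: (H, W, C_in); W: (C_in, C_out). A 1x1 conv is a per-pixel linear map."""
        return relu(feat @ W + b)

    feat = np.random.randn(8, 8, 16)                 # pretend output of a normal conv layer
    W1, b1 = np.random.randn(16, 32) * 0.1, np.zeros(32)
    W2, b2 = np.random.randn(32, 10) * 0.1, np.zeros(10)

    mlpconv_out = conv1x1(conv1x1(feat, W1, b1), W2, b2)   # two 1x1 layers = shared MLP
    print(mlpconv_out.shape)                               # (8, 8, 10)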

The following picture (reproduced from another source) may make this easier to understand:

(figure omitted)

Global Average Pooling :

​Conventional convolutional neural networks perform convolution in the lower layers of the network. For classification, the feature maps of the last convolutional layer are vectorized and fed into fully connected layers followed by a softmax logistic regression layer. In this paper, we propose another strategy called global average pooling to replace the traditional fully connected layers in CNN.

The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers, we take the average of each feature map, and the resulting vector is fed directly into the softmax layer.

The disadvantages of fully connected layers:

  • too many parameters

  • prone to over-fitting

The advantages of global average pooling (a minimal sketch follows this list):

  • more native to the convolution structure, enforcing correspondences between feature maps and categories
  • no parameters to optimize
  • sums out the spatial information, thus it is more robust to spatial translations of the input.
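A minimal sketch of global average pooling feeding a softmax, assuming the last mlpconv layer emits one feature map per class:

    import numpy as np

    def global_average_pooling(feature_maps):
        """feature_maps: (H, W, K), one map per class -> (K,) confidence vector."""
        return feature_maps.mean(axis=(0, 1))

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    maps = np.random.randn(6, 6, 10)                 # 10 classes
    probs = softmax(global_average_pooling(maps))
    print(probs.shape, probs.sum())                  # (10,) 1.0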

2.3.4 Inception V3

General Design Principles:

  1. Avoid representational bottlenecks, especially early in the network. Theoretically, information content can not be assessed merely by the dimensionality of the representation as it discards important factors like correlation structure; the dimensionality merely provides a rough estimate of information content.

  2. Higher dimensional representations are easier to process locally within a network.

  3. Spatial aggregation can be done over lower dimensional embeddings without much or any loss in representational power.

  4. Balance the width and depth of the network. The computational budget should therefore be distributed in a balanced way between the depth and width of the network.

Factorizing Convolutions with Large Filter Size:

Two ways to factorize convolutions:

  • Sliding a small network over the input activation grid boils down to replacing the 5 × 5 convolution with two layers of 3 × 3 convolution.
  • Using a 3 × 1 convolution followed by a 1 × 3 convolution is equivalent to sliding a two-layer network with the same receptive field as a 3 × 3 convolution.

Both factorizations reduce the computational cost; a quick numeric check follows.
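Per-position multiplication counts for the two factorizations, assuming C input and C output channels (a back-of-the-envelope check, not from the paper):

    C = 192
    cost_5x5 = 5 * 5 * C * C                 # single 5x5 convolution
    cost_two_3x3 = 2 * (3 * 3 * C * C)       # two stacked 3x3 convolutions
    cost_3x3 = 3 * 3 * C * C                 # single 3x3 convolution
    cost_3x1_plus_1x3 = (3 + 3) * C * C      # 3x1 followed by 1x3
    print(cost_two_3x3 / cost_5x5)           # 0.72  (~28% cheaper)
    print(cost_3x1_plus_1x3 / cost_3x3)      # 0.666... (~33% cheaper)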

Efficient Grid Size Reduction :

We can use two parallel stride-2 blocks, P and C: P is a pooling layer (either average or maximum pooling) and C is a convolutional block. Both operate with stride 2, and their filter banks are concatenated, as shown below.

(figures omitted)

Inception-v2 :

(figure omitted)

Model Regularization via Label Smoothing :

For each training example x, the model computes the probability of each label k: $p(k|x) = \dfrac{e^{z_k}}{\sum_{i=1}^{K} e^{z_i}}$. Consider the ground-truth distribution over labels $q(k|x)$ for this training example, normalized so that $\sum_{k=1}^{K} q(k|x) = 1$.

Cross-entropy: $\ell = -\sum_{k=1}^{K} \log(p(k))\, q(k)$.

Consider a distribution over labels $u(k)$, independent of the training example x, and a smoothing parameter $\epsilon$. For a training example with ground-truth label y, we replace the label distribution $q(k|x) = \delta_{k,y}$ with

$q'(k|x) = (1-\epsilon)\,\delta_{k,y} + \epsilon\, u(k)$

The paper uses the uniform distribution $u(k) = 1/K$; $\delta_{k,y}$ is the Kronecker delta, which equals 1 for k = y and 0 otherwise.

$H(q', p) = -\sum_{k=1}^{K} \log(p(k))\, q'(k) = (1-\epsilon)\, H(q, p) + \epsilon\, H(u, p)$
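A small NumPy sketch of the smoothed targets and the resulting cross-entropy, using the uniform u(k) = 1/K; the function names and numbers are illustrative:

    import numpy as np

    def smoothed_targets(y, K, eps=0.1):
        """Replace the one-hot target delta_{k,y} with (1 - eps) * delta + eps / K."""
        q = np.full(K, eps / K)
        q[y] += 1.0 - eps
        return q

    def cross_entropy(q, p):
        return -np.sum(q * np.log(p))

    K, y = 5, 2
    logits = np.array([1.0, 0.5, 3.0, -1.0, 0.0])
    p = np.exp(logits) / np.exp(logits).sum()            # softmax p(k|x)
    print(smoothed_targets(y, K))                        # [0.02 0.02 0.92 0.02 0.02]
    print(cross_entropy(smoothed_targets(y, K), p))      # equals (1-eps)*H(q,p) + eps*H(u,p)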

Training Methodology :

batch_size = 32; RMSProp with decay = 0.9 and $\epsilon = 1.0$; learning_rate = 0.045, decayed every two epochs using an exponential rate of 0.94; gradients clipped at a threshold of 2.0.

2.3.5 ResNet - Deep Residual Learning for Image Recognition

There exists a solution by construction to the deeper model: the added layers are identity mappings, and the other layers are copied from the learned shallower model.

Formally, denoting the desired underlying mapping as $H(x)$, we let the stacked nonlinear layers fit another mapping $F(x) := H(x) - x$. The original mapping is recast into $F(x) + x$.

(figure omitted)

​ We consider a building block defined as:

$y = F(x, \{W_i\}) + x$

Here x and y are the input and output vectors of the layers considered. As shown above, there are two layers, $F = W_2\, \sigma(W_1 x)$, in which $\sigma$ denotes ReLU and the biases are omitted to simplify the notation. The dimensions of x and F must be equal in the equation above. If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection $W_s$ on the shortcut connection to match the dimensions:

$y = F(x, \{W_i\}) + W_s x$
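A hedged NumPy sketch of the building block above, written in fully-connected form for brevity (the same structure applies with convolutions); W_s is the optional projection used when the dimensions differ:

    import numpy as np

    def relu(v):
        return np.maximum(v, 0)

    def residual_block(x, W1, W2, Ws=None):
        """y = F(x) + shortcut, with F = W2 @ relu(W1 @ x); ReLU applied after the addition."""
        F = W2 @ relu(W1 @ x)
        shortcut = x if Ws is None else Ws @ x       # identity or projection W_s
        return relu(F + shortcut)

    x = np.random.randn(64)
    W1 = np.random.randn(64, 64) * 0.05
    W2 = np.random.randn(64, 64) * 0.05
    print(residual_block(x, W1, W2).shape)           # (64,)  -- identity shortcut

    W1p = np.random.randn(128, 64) * 0.05
    W2p = np.random.randn(128, 128) * 0.05
    Ws = np.random.randn(128, 64) * 0.05
    print(residual_block(x, W1p, W2p, Ws).shape)     # (128,) -- projection shortcut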

Architectures for ResNet :

(figure omitted)

Deeper Bottleneck Architectures :

Next we describe the deeper nets for ImageNet. Because of concerns about training time, the building block is modified into a bottleneck design. For each residual function F, a stack of 3 layers is used instead of 2. The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) the dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions. Both designs have similar time complexity.

3 Neural Style Transfer

3.1 A Neural Algorithm of Artistic Style (L. A. Gatys)

$L_{content}(C, G) = \dfrac{1}{4\, n_H^{[l]} n_W^{[l]} n_C^{[l]}} \sum_{i,j} \left( a_{ij}^{[l][C]} - a_{ij}^{[l][G]} \right)^2$

  • $a_{ij}^{[l][C]}$ is the activation of the $i$-th filter at position $j$ in layer $l$ for the content image.

  • $C$ and $G$ denote the original (content) image and the generated image respectively.

$G_{kk'}^{[l][S]} = \sum_{i=1}^{n_H^{[l]}} \sum_{j=1}^{n_W^{[l]}} a_{i,j,k}^{[l][S]}\, a_{i,j,k'}^{[l][S]}, \qquad G_{kk'}^{[l][G]} = \sum_{i=1}^{n_H^{[l]}} \sum_{j=1}^{n_W^{[l]}} a_{i,j,k}^{[l][G]}\, a_{i,j,k'}^{[l][G]}$

$L_{style}(S, G) = \sum_{l=0}^{L} \omega_l\, \dfrac{1}{\left(2\, n_H^{[l]} n_W^{[l]} n_C^{[l]}\right)^2} \sum_{k}\sum_{k'} \left( G_{kk'}^{[l][S]} - G_{kk'}^{[l][G]} \right)^2$

  • $G_{kk'}^{[l][S]}$ and $G_{kk'}^{[l][G]}$ give the feature correlations, i.e. the inner product between feature maps $k$ and $k'$ in layer $l$.
  • $\omega_l$ are weighting factors of the contribution of each layer to the total loss.
  • $n_H^{[l]}, n_W^{[l]}, n_C^{[l]}$ are the height, width and number of channels of the feature map in layer $l$.

$L(G) = \alpha\, L_{content}(C, G) + \beta\, L_{style}(S, G)$

  • $\alpha$ and $\beta$ are the weighting factors for content and style reconstruction respectively.
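A compact NumPy sketch of the three losses for a single layer; the activations are random stand-ins for CNN feature maps, and the per-layer weights ω_l are folded into this single-layer example:

    import numpy as np

    def content_loss(aC, aG):
        nH, nW, nC = aC.shape
        return np.sum((aC - aG) ** 2) / (4 * nH * nW * nC)

    def gram(a):
        nH, nW, nC = a.shape
        flat = a.reshape(nH * nW, nC)        # rows: spatial positions, columns: channels
        return flat.T @ flat                 # (nC, nC) matrix of feature correlations

    def style_loss(aS, aG):
        nH, nW, nC = aS.shape
        return np.sum((gram(aS) - gram(aG)) ** 2) / (2 * nH * nW * nC) ** 2

    aC, aS, aG = (np.random.randn(14, 14, 32) for _ in range(3))
    alpha, beta = 10, 40
    total = alpha * content_loss(aC, aG) + beta * style_loss(aS, aG)
    print(total)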

4 Autoencoder

4.1 Stacked Denoising Autoencoders: Learning Useful Representations …

All appear to build on the same principle, which we may summarize as follows:

(figure omitted)

Traditional Autoencoders (AE) :

$y = f_\theta(x) = s(Wx + b), \qquad \hat{x} = z = g_{\theta'}(y) = s(W'y + b')$

$Loss(x, z) = (x - z)^2 \quad \text{or} \quad Loss(x, z) = \text{cross-entropy}(x, z)$

The Denoising Autoencoder Algorithm

(figure omitted)

The key difference is that $z$ is now a deterministic function of $\tilde{x}$ (a corrupted version of $x$) rather than of $x$.

We emphasize here that our goal is not the task of denoising per se. Rather denoising is advocated and investigated as a training criterion for learning to extract useful features that will constitute better higher level representation. The usefulness of a learnt representation can then be assessed objectively by measuring the accuracy of a classifier that uses it as input.
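A minimal NumPy sketch of one denoising-autoencoder forward pass and its reconstruction loss; the corruption used here is simple masking noise, and all sizes and weights are illustrative:

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    rng = np.random.default_rng(0)
    d, h = 20, 8
    W, b = rng.normal(0, 0.1, (h, d)), np.zeros(h)        # encoder parameters (theta)
    W2, b2 = rng.normal(0, 0.1, (d, h)), np.zeros(d)      # decoder parameters (theta')

    x = rng.random(d)                          # clean input
    x_tilde = x * (rng.random(d) > 0.3)        # corruption: randomly zero ~30% of the inputs
    y = sigmoid(W @ x_tilde + b)               # code computed from the corrupted input
    z = sigmoid(W2 @ y + b2)                   # reconstruction
    loss = np.sum((x - z) ** 2)                # compared against the clean x
    print(round(loss, 3))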

Geometric Interpretation

(figure omitted)

Stacking denoising autoencoders

(figure omitted)

Fine-tuning of a deep network for classification
