Understanding Graph Convolutional Networks for Node Classification

2023-11-05

Neural Networks have achieved massive success over the last decade. However, early variants of Neural Networks could only operate on regular, or Euclidean, data, while much real-world data has an underlying graph structure that is non-Euclidean. The irregularity of such data structures has driven recent advances in Graph Neural Networks. Over the past few years, different variants of Graph Neural Networks have been developed, Graph Convolutional Networks (GCNs) being one of them. GCNs are also considered one of the most basic Graph Neural Network variants.

In this article, we’ll dive deeper into the Graph Convolutional Networks developed by Thomas Kipf and Max Welling. I will also give some very basic examples of building our first graph using NetworkX. By the end of this article, I hope we will have gained a deeper understanding of the mechanisms inside Graph Convolutional Networks.

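As a first taste, here is a minimal sketch of what building a graph in NetworkX looks like; the node labels and edges below are arbitrary placeholders chosen purely for illustration.

```python
import networkx as nx

# Build a small undirected graph; node labels and edges are arbitrary,
# purely for illustration.
G = nx.Graph()
G.add_nodes_from(["A", "B", "C", "D"])
G.add_edges_from([("A", "B"), ("B", "C"), ("B", "D"), ("C", "D")])

print(G.number_of_nodes(), G.number_of_edges())  # 4 4

# The adjacency matrix is the basic ingredient of graph convolutions.
A = nx.to_numpy_array(G)
print(A)
```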

If you are not familiar with the basic concepts of Graph Neural Networks, I recommend reading my previous article here.

Convolution in Graph Neural Networks

If you are familiar with the convolution layers in Convolutional Neural Networks, ‘convolution’ in GCNs is basically the same operation. It refers to multiplying the input neurons by a set of weights commonly known as filters or kernels. The filters act as a sliding window across the whole image and enable CNNs to learn features from neighboring cells. Within a layer, the same filter is used across the entire image; this is referred to as weight sharing. For example, when using a CNN to classify images of cats vs. non-cats, the same filter in a given layer is used to detect both the nose and the ears of the cat.

The same weight (or kernel, or filter in CNNs) is applied throughout the image (image by author)
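To make weight sharing concrete, here is a minimal numpy sketch of a single filter sliding over an image (stride 1, no padding). It is illustrative only, not how CNN frameworks implement convolution internally.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one shared kernel over the image (stride 1, no padding).

    Note: like most deep learning libraries, this computes
    cross-correlation, which is what 'convolution' means in CNNs.
    """
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # The same kernel weights are reused at every position:
            # this is weight sharing.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])                  # one shared 2x2 filter
print(conv2d(image, kernel))                      # 3x3 feature map
```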

GCNs perform a similar operation, in which the model learns features by inspecting neighboring nodes. The major difference between CNNs and GNNs is that CNNs are specially built to operate on regular (Euclidean) structured data, while GNNs are a generalized version of CNNs in which the number of connections per node varies and the nodes are unordered (irregular, non-Euclidean structured data).

Illustration of 2D Convolutional Neural Networks (left) and Graph Convolutional Networks (right) (source)
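To make "learning from neighbors" concrete, here is a minimal numpy sketch of the aggregation idea at the heart of a GCN layer: adding self-loops to the adjacency matrix and multiplying it by the node features sums each node's features with those of its neighbors. This is a simplified illustration with arbitrary toy values, not the full normalized propagation rule.

```python
import numpy as np

# Adjacency matrix of a toy undirected 4-node graph (arbitrary values)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# One scalar feature per node, again arbitrary
X = np.array([[0.0], [1.0], [2.0], [3.0]])

# Adding self-loops (A + I) lets each node keep its own features
A_hat = A + np.eye(A.shape[0])

# Row i of A_hat @ X sums node i's features with its neighbors' features:
# the "inspect the neighbors" step at the heart of a GCN layer
print(A_hat @ X)
```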

GCNs themselves can be categorized into two major families of algorithms: Spatial Graph Convolutional Networks and Spectral Graph Convolutional Networks. In this article, we will focus on Fast Approximation Spectral-based Graph Convolutional Networks.

Before diving into the calculations that happen inside a GCN, let’s briefly recap the concept of forward propagation in Neural Networks. You can skip the following section if you’re familiar with it.

Brief Recap of Forward Propagation in Neural Networks

Illustration of Fully-Connected Neural Networks (image by author)

In Neural Networks, in order to propagate the feature representation to the next layer (the forward pass), we apply the equation below:

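$$H^{(l+1)} = \sigma\left(H^{(l)} W^{(l)} + b^{(l)}\right)$$

where $H^{(l)}$ is the feature representation at layer $l$ (with $H^{(0)} = X$, the input features), $W^{(l)}$ and $b^{(l)}$ are the trainable weight matrix and bias of layer $l$, and $\sigma$ is a non-linear activation function such as ReLU. (Notation varies; some texts write the equivalent $\sigma(W^{(l)} H^{(l)} + b^{(l)})$ with features stacked in columns rather than rows.)

In code, one such layer is just a matrix multiplication, a bias addition, and a non-linearity. A minimal numpy sketch with arbitrary toy dimensions:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))   # 4 samples, 3 input features
W = rng.normal(size=(3, 2))   # weights mapping 3 -> 2 features
b = np.zeros(2)               # bias for the 2 output features

# One fully-connected layer of the forward pass
H_next = relu(H @ W + b)
print(H_next.shape)           # (4, 2)
```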
