[Paper Review] Structural Deep Network Embedding (SDNE, 2016)

2023, Jul 28    

1. Challenges for Learning Network Embeddings


  • As the underlying structure of a graph network is highly complex and unordered, there are several challenges in learning embeddings that capture the network representation.

    • Non-linearity : Due to the highly non-linear structure of graphs, the graph embedding function also needs to reflect this non-linearity.

    • Complexity : As the graph consists of both local and global structure, simultaneously preserving these two different properties of the graph with a single embedding approach is quite challenging.

    • Sparsity : Many real-world graph networks have highly sparse connections between nodes, which may lack sufficient information to effectively train the embedding function in a fully supervised manner.

  • To handle these issues, SDNE provides a semi-supervised deep model with multiple layers of non-linear functions that map both the local and global structure of the network into the latent representation.


2. Framework of SDNE



  • Basically, SDNE is designed to capture two types of proximities in the graph : first-order proximity and second-order proximity.

    • First-Order Proximity

      • Represents local pairwise proximity between vertices.

      • Accounts for only a small portion of vertex pairs due to the high sparsity of the graph.

      • Mainly captures local structure of the graph.

      • Determined by whether $\large s_{ij} > 0$, i.e., the existence of a link between $\large v_{i}$ and $\large v_{j}$.

    • Second-Order Proximity

      • The proximity of the pair’s neighborhood structure.

      • Intuitively, vertices that share many neighbors tend to be similar.

      • Focuses on the global structure of the graph.

      • Measured by the similarity of $\large N_{v_{i}}$ and $\large N_{v_{j}}$, the neighborhood vectors of the two vertices (written out just below).
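
In the paper's terms, with $S = (s_{ij})$ the (weighted) adjacency matrix, the two proximities amount to:

$$
\text{first-order}(v_{i}, v_{j}) = s_{ij},
\qquad
\text{second-order}(v_{i}, v_{j}) = \text{similarity of } N_{v_{i}} = (s_{i1}, \dots, s_{in}) \text{ and } N_{v_{j}} = (s_{j1}, \dots, s_{jn})
$$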


Table 1. Terms and Notations


2.1. Encoder-Decoder Architecture for Semi-Supervised Learning



  • To capture the highly non-linear, complex, and sparse network structure, SDNE adopts a deep encoder-decoder architecture, where the encoder extracts the latent representation of the network and the decoder maps the latent space back to the original space by reversing the operations of the encoder.


2.1.1. Encoder



  • Consists of $\large K$ layers of non-linear functions, where the output of each layer is a hidden representation of the input data $\large X$.

    • Given an input $\large x_{i}$, the output of the $\large k$-th layer is denoted as in the formulation after this list.

  • Uses the adjacency matrix $\large S$ as the input $\large X$ and maps it to the latent representation $\large Y^{(K)}$, which is used to calculate the first-order proximity between linked nodes.

  • The first-order proximity, measured as the similarity of the latent representations, is regarded as the supervised information, since it constrains only the paired (linked) vertices.

    • $\large s_{ij}$ is 0 if there’s no edge between the vertices $\large v_{i}$ and $\large v_{j}$, which means the objective function for the first-order proximity excludes the unpaired vertices and thus acts in a supervised manner.

    • Inspired by Laplacian Eigenmaps, the first-order loss function $\large L_{1st}$ is re-formulated as $\large 2\,\mathrm{tr}(Y^{T}\,L\,Y)$, where $\large L$ is the Laplacian matrix of the given graph.

  • Laplacian Eigenmaps

    • Eigenvectors corresponding to the smallest non-zero eigenvalues give the low-dimensional embedding vectors that keep connected (similar) vertices close in the embedding space.

    • The solutions of the constrained minimization of the Laplacian quadratic form $\large f^{T}\,L\,f$ (with a norm/orthogonality constraint on $\large f$) are the eigenvectors of the graph Laplacian.

    • Analogous to this, finding $\large Y$ that minimizes the objective $\large 2\,\mathrm{tr}(Y^{T}\,L\,Y)$ is equivalent to learning a latent mapping that maximizes the similarity between paired vertices.
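
For concreteness, the encoder layers and the first-order loss can be written out as in the original paper ($\sigma$ is the element-wise non-linearity, e.g. the sigmoid, $b^{(k)}$ here denotes the layer bias, and $D$ is the diagonal degree matrix):

$$
y_{i}^{(1)} = \sigma\left( W^{(1)} x_{i} + b^{(1)} \right), \qquad
y_{i}^{(k)} = \sigma\left( W^{(k)} y_{i}^{(k-1)} + b^{(k)} \right), \quad k = 2, \dots, K
$$

$$
L_{1st} \;=\; \sum_{i,j=1}^{n} s_{ij} \left\lVert y_{i}^{(K)} - y_{j}^{(K)} \right\rVert_{2}^{2} \;=\; 2\,\mathrm{tr}\!\left( Y^{T} L Y \right), \qquad L = D - S,\; D_{ii} = \sum_{j} s_{ij}
$$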


2.1.2. Decoder



  • Reverses the computation process of the encoder to map the latent space $\large Y^{(K)}$ into the reconstruction $\large \hat{X}$, which has the same form as the input data.

  • Measures the reconstruction loss, the squared norm of the difference between the input data $\large X$ and its reconstruction $\large \hat{X}$.

  • As the input data $\large X$ is the adjacency matrix $\large S$, whose rows characterize the neighborhood structure of each vertex, a low reconstruction loss indicates that the model has learned a reconstruction mapping that captures the second-order proximity, i.e., the global neighborhood structure, from the latent space.

  • Further, the authors revise the objective function for the second-order proximity by adding a penalty weight $\large b_{i}$ that limits the contribution of negative samples (unpaired vertices), as written out after this list.

    • Due to the high sparsity of the graph, there’s a severe imbalance between the number of non-zero and zero elements in the input data $\large S$, which would let the negative samples ($\large s_{ij} \, = \, 0$) dominate the reconstruction loss.

    • Hence, SDNE imposes a larger penalty on the reconstruction error of positive samples ($\large s_{ij} \, > \, 0$) by setting $\large b_{i, j} \, = \, 1$ if $\large s_{ij} \, = \, 0$, and $\large b_{i, j} \, = \, \beta \, > \, 1$ otherwise.

  • This is an unsupervised way of learning embeddings that capture a global view of node similarity without explicit ground-truth information.
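
Written out, the penalized reconstruction loss for the second-order proximity is ($\odot$ is the element-wise Hadamard product):

$$
L_{2nd} \;=\; \sum_{i=1}^{n} \left\lVert \left( \hat{x}_{i} - x_{i} \right) \odot b_{i} \right\rVert_{2}^{2} \;=\; \left\lVert \left( \hat{X} - X \right) \odot B \right\rVert_{F}^{2},
\qquad b_{i,j} =
\begin{cases}
1, & s_{ij} = 0 \\
\beta > 1, & \text{otherwise}
\end{cases}
$$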


2.2. Joint Loss Function


  • To get the full loss function for semi-supervised graph embedding learning, SDNE jointly combines the supervised loss for the first-order proximity with the unsupervised loss for the second-order proximity.

  • Additionally, a regularization term $\large L_{reg}$ is added to prevent over-fitting of the learnable parameters, as shown below.
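
Concretely, the joint objective from the paper, with hyper-parameters $\alpha$ and $\nu$ balancing the terms, is:

$$
L_{mix} \;=\; L_{2nd} \;+\; \alpha\, L_{1st} \;+\; \nu\, L_{reg},
\qquad
L_{reg} \;=\; \frac{1}{2} \sum_{k=1}^{K} \left( \left\lVert W^{(k)} \right\rVert_{F}^{2} + \left\lVert \hat{W}^{(k)} \right\rVert_{F}^{2} \right)
$$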


3. Optimization for the Joint Loss


  • The final goal is to optimize the weight matrices $\large W$ and $\large \hat{W}$ for all layers $\large k = 1, \, 2, \, …, \, K$ of the encoder and decoder, respectively.

  • To do this, one needs to minimize the joint loss with respect to $\large W$ and $\large \hat{W}$ using stochastic gradient descent, i.e., by calculating $\large \frac{\partial{L_{mix}}}{\partial{W}}$ and $\large \frac{\partial{L_{mix}}}{\partial{\hat{W}}}$.

  • As the encoding step is necessary for computing both first and second order proximity, $\large W$ is updated based on both $\large L_{1st}$ and $\large L_{2nd}$.

  • On the other hand, the gradient with respect to $\large \hat{W}$ is computed only from $\large L_{2nd}$ (and the regularizer), since the decoding step is used only to obtain the second-order proximity; see the chain-rule sketch below.
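
For instance, for the last layer of the decoder, the gradient of the second-order loss decomposes by the chain rule (as in the original paper):

$$
\frac{\partial L_{2nd}}{\partial \hat{W}^{(K)}} \;=\; \frac{\partial L_{2nd}}{\partial \hat{X}} \cdot \frac{\partial \hat{X}}{\partial \hat{W}^{(K)}},
\qquad
\frac{\partial L_{2nd}}{\partial \hat{X}} \;=\; 2\,\left(\hat{X} - X\right) \odot B
$$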




Full Algorithm
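
The paper summarizes the above as a single training algorithm. As a rough reference, here is a minimal PyTorch sketch of that loop; the layer sizes, optimizer, and hyper-parameter values ($\alpha$, $\beta$, and weight decay standing in for $\nu L_{reg}$) are illustrative assumptions, and the deep-belief-network pre-training the paper uses for parameter initialization is omitted.

```python
import torch
import torch.nn as nn


class SDNE(nn.Module):
    """Minimal SDNE-style autoencoder over rows of the adjacency matrix."""

    def __init__(self, n_nodes, hidden_dim=256, embed_dim=64):
        super().__init__()
        # Encoder: adjacency row x_i -> latent embedding y_i^(K)
        self.encoder = nn.Sequential(
            nn.Linear(n_nodes, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, embed_dim), nn.Sigmoid(),
        )
        # Decoder: latent embedding -> reconstructed adjacency row x_hat_i
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, n_nodes), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.encoder(x)
        return y, self.decoder(y)


def sdne_loss(x, x_hat, y, laplacian, alpha=1e-2, beta=5.0):
    # Second-order (reconstruction) loss: non-zero entries of S are
    # penalized beta times more than zero entries (matrix B).
    b = torch.ones_like(x)
    b[x > 0] = beta
    l_2nd = torch.sum(((x_hat - x) * b) ** 2)
    # First-order loss: 2 * tr(Y^T L Y), pulls linked vertices together.
    l_1st = 2.0 * torch.trace(y.T @ laplacian @ y)
    return l_2nd + alpha * l_1st


# --- toy usage on a small random graph (illustrative only) ---
n = 100
s = (torch.rand(n, n) < 0.05).float()
s = torch.triu(s, diagonal=1)
s = s + s.T                                  # symmetric adjacency matrix S
laplacian = torch.diag(s.sum(dim=1)) - s     # graph Laplacian L = D - S

model = SDNE(n_nodes=n)
# weight_decay plays the role of the nu * L_reg term
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)

for epoch in range(50):                      # full-batch for simplicity
    opt.zero_grad()
    y, x_hat = model(s)
    loss = sdne_loss(s, x_hat, y, laplacian)
    loss.backward()
    opt.step()

embeddings = model.encoder(s).detach()       # final node embeddings Y^(K)
```

Note that the loop above runs full-batch on the whole adjacency matrix for simplicity, whereas the paper iterates over mini-batches of vertices until convergence.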