Base layers

This module contains a miscellany of layers that are not specifically for graph neural networks.

[source]

InnerProduct

spektral.layers.InnerProduct(trainable_kernel=False, activation=None, kernel_initializer='glorot_uniform', kernel_regularizer=None, kernel_constraint=None)

Computes the inner product between elements of a 2d Tensor, i.e., X @ X.T (or X @ W @ X.T when trainable_kernel=True).

Mode: single.

Input

  • Tensor of shape (N, M);

Output

  • Tensor of shape (N, N).

Arguments

  • trainable_kernel: if True, insert a trainable square weight matrix W in the inner product (i.e., compute X @ W @ X.T instead of X @ X.T);

  • activation: activation function to use;

  • kernel_initializer: initializer for the weights;

  • kernel_regularizer: regularization applied to the kernel;

  • kernel_constraint: constraint applied to the kernel;
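
The computation can be sketched in NumPy (a plain illustration of the math, not the actual Keras layer; inner_product is a hypothetical helper, and the optional kernel stands in for the layer's trainable weight):

```python
import numpy as np

def inner_product(x, kernel=None):
    """Inner product of a 2d array with itself: x @ x.T,
    or x @ kernel @ x.T when a trainable kernel is used."""
    if kernel is not None:
        return x @ kernel @ x.T
    return x @ x.T

x = np.arange(6, dtype=float).reshape(3, 2)  # input of shape (N, M) = (3, 2)
out = inner_product(x)                       # output of shape (N, N) = (3, 3)
```

Without a kernel the output is symmetric, since (X @ X.T).T == X @ X.T.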


[source]

MinkowskiProduct

spektral.layers.MinkowskiProduct(input_dim_1=None, activation=None)

Computes the hyperbolic (Minkowski) inner product between elements of a rank-2 Tensor.

Mode: single.

Input

  • Tensor of shape (N, M);

Output

  • Tensor of shape (N, N).

Arguments

  • input_dim_1: first dimension of the input Tensor; set this to give the layer an explicit output shape if you encounter shape issues in your model;

  • activation: activation function to use;
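
A NumPy sketch of the computation, assuming the Minkowski metric with sign convention g = diag(1, ..., 1, -1); the actual layer may differ in details such as the sign convention or value clipping:

```python
import numpy as np

def minkowski_product(x):
    """Hyperbolic inner product x @ g @ x.T, where g is the
    Minkowski metric diag(1, ..., 1, -1) (assumed convention)."""
    g = np.eye(x.shape[1])
    g[-1, -1] = -1.0  # flip the sign of the last dimension
    return x @ g @ x.T

x = np.array([[1.0, 2.0], [3.0, 4.0]])  # input of shape (N, M) = (2, 2)
out = minkowski_product(x)              # output of shape (N, N) = (2, 2)
```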


[source]

Disjoint2Batch

spektral.layers.Disjoint2Batch()

Utility layer that converts data from disjoint mode to batch mode by zero-padding the node features and adjacency matrices.

Mode: disjoint.

This layer expects a sparse adjacency matrix.

Input

  • Node features of shape (N, F);
  • Binary adjacency matrix of shape (N, N);
  • Graph IDs of shape (N,);

Output

  • Batched node features of shape (batch, N_max, F);
  • Batched adjacency matrix of shape (batch, N_max, N_max);
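
The conversion can be sketched with dense NumPy arrays (the actual layer expects a sparse adjacency matrix; disjoint_to_batch is a hypothetical helper): each graph's nodes are sliced out of the disjoint union and zero-padded to the size of the largest graph.

```python
import numpy as np

def disjoint_to_batch(x, a, graph_ids):
    """Zero-pad the node features and adjacency matrix of each graph
    in a disjoint union to the size of the largest graph."""
    n_graphs = int(graph_ids.max()) + 1
    sizes = np.bincount(graph_ids, minlength=n_graphs)  # nodes per graph
    n_max = int(sizes.max())
    x_batch = np.zeros((n_graphs, n_max, x.shape[1]))
    a_batch = np.zeros((n_graphs, n_max, n_max))
    start = 0
    for i, n in enumerate(sizes):
        x_batch[i, :n] = x[start:start + n]
        a_batch[i, :n, :n] = a[start:start + n, start:start + n]
        start += n
    return x_batch, a_batch

# Two graphs in disjoint mode: 2 nodes and 3 nodes (N = 5, F = 2)
x = np.arange(10, dtype=float).reshape(5, 2)
a = np.eye(5)
ids = np.array([0, 0, 1, 1, 1])
xb, ab = disjoint_to_batch(x, a, ids)  # shapes (2, 3, 2) and (2, 3, 3)
```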