Base layers
This module contains a miscellany of layers that are not specifically for graph neural networks.
InnerProduct
spektral.layers.InnerProduct(trainable_kernel=False, activation=None, kernel_initializer='glorot_uniform', kernel_regularizer=None, kernel_constraint=None)
Computes the inner product between elements of a rank 2 Tensor: output = x x^T.
Mode: single.
Input

- Tensor of shape (n_nodes, n_features);

Output

- Tensor of shape (n_nodes, n_nodes).
Arguments

- trainable_kernel: if True, insert a trainable square matrix W in the inner product (i.e., X @ W @ X.T);
- activation: activation function;
- kernel_initializer: initializer for the weights;
- kernel_regularizer: regularization applied to the kernel;
- kernel_constraint: constraint applied to the kernel;
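The layer's forward pass can be sketched in NumPy as follows. This is an illustrative re-implementation, not Spektral's actual code; the function name and signature are chosen here for clarity.

```python
import numpy as np

def inner_product(x, w=None):
    """Pairwise inner products of the rows of x: x @ x.T.

    If w is given (the trainable kernel), computes x @ w @ x.T instead.
    NumPy sketch of the layer's forward pass, not Spektral code.
    """
    if w is not None:
        return x @ w @ x.T
    return x @ x.T

# Three nodes with two features each -> a (3, 3) matrix of inner products.
x = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
out = inner_product(x)
```

With no kernel the output is symmetric; a non-symmetric W breaks that symmetry, which is why the kernel is optional and off by default.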
Disjoint2Batch
spektral.layers.Disjoint2Batch()
Utility layer that converts data from disjoint mode to batch mode by zero-padding the node features and adjacency matrices.
Mode: disjoint.
This layer expects a sparse adjacency matrix.
Input

- Node features of shape (n_nodes, n_node_features);
- Binary adjacency matrix of shape (n_nodes, n_nodes);
- Graph IDs of shape (n_nodes, );

Output

- Batched node features of shape (batch, N_max, n_node_features);
- Batched adjacency matrix of shape (batch, N_max, N_max);
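The disjoint-to-batch conversion can be sketched with dense NumPy arrays (the actual layer expects a sparse adjacency matrix; the function name here is illustrative). Each graph's nodes are gathered via the graph IDs and zero-padded up to the size of the largest graph.

```python
import numpy as np

def disjoint_to_batch(x, a, graph_ids):
    """Zero-pad disjoint-mode data into batch mode.

    x: (n_nodes, F) node features of all graphs stacked vertically
    a: (n_nodes, n_nodes) block-diagonal adjacency matrix (dense here
       for simplicity; the real layer takes a sparse matrix)
    graph_ids: (n_nodes,) integer graph membership of each node
    Returns features of shape (batch, N_max, F) and adjacency of
    shape (batch, N_max, N_max).
    """
    ids, counts = np.unique(graph_ids, return_counts=True)
    n_max = counts.max()
    x_out = np.zeros((len(ids), n_max, x.shape[1]))
    a_out = np.zeros((len(ids), n_max, n_max))
    for b, g in enumerate(ids):
        idx = np.where(graph_ids == g)[0]
        n = len(idx)
        x_out[b, :n] = x[idx]                 # pad features with zero rows
        a_out[b, :n, :n] = a[np.ix_(idx, idx)]  # pad adjacency with zeros
    return x_out, a_out

# Two graphs: one with 2 nodes, one with 3 -> N_max = 3.
x = np.arange(10, dtype=float).reshape(5, 2)
a = np.zeros((5, 5))
a[0, 1] = a[1, 0] = 1.0          # edge in graph 0
a[2, 3] = a[3, 2] = 1.0          # edges in graph 1
a[3, 4] = a[4, 3] = 1.0
graph_ids = np.array([0, 0, 1, 1, 1])
xb, ab = disjoint_to_batch(x, a, graph_ids)
```

The padded rows and columns are all-zero, so downstream batch-mode layers that ignore zero rows (or use a mask) are unaffected by the padding.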
MinkowskiProduct
spektral.layers.MinkowskiProduct(activation=None)
Computes the hyperbolic inner product between elements of a rank 2 Tensor: output = x diag(1, ..., 1, -1) x^T.
Mode: single.
Input
 Tensor of shape
(n_nodes, n_features)
;
Output
 Tensor of shape
(n_nodes, n_nodes)
.
Arguments

- activation: activation function;
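The Minkowski product differs from the plain inner product only in the metric: the last feature dimension enters with a negative sign. A minimal NumPy sketch (illustrative, not Spektral's implementation):

```python
import numpy as np

def minkowski_product(x):
    """Pairwise Minkowski (hyperbolic) inner products of the rows of x.

    Uses the metric diag(1, ..., 1, -1): all feature dimensions
    contribute positively except the last, which is negated.
    NumPy sketch of the layer's forward pass, not Spektral code.
    """
    eta = np.eye(x.shape[1])
    eta[-1, -1] = -1.0
    return x @ eta @ x.T

# Two nodes with two features each -> a (2, 2) matrix of products.
x = np.array([[1.0, 2.0],
              [3.0, 1.0]])
m = minkowski_product(x)
```

Note that, unlike the Euclidean case, the "self-product" of a row can be negative (here 1*1 - 2*2 = -3 for the first row), which is what makes this product suitable for embeddings in hyperbolic space.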