dicee.models.quaternion
Classes
- QMult – Base class for all neural network modules.
- ConvQ – Convolutional Quaternion Knowledge Graph Embeddings
- AConvQ – Additive Convolutional Quaternion Knowledge Graph Embeddings

Functions
Module Contents
- class dicee.models.quaternion.QMult(args)[source]
Bases:
dicee.models.base_model.BaseKGE
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:
    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self) -> None:
            super().__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))
Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

- Variables:
training (bool) – Boolean represents whether this module is in training or evaluation mode.
- name = 'QMult'
- explicit = True
- quaternion_multiplication_followed_by_inner_product(h, r, t)[source]
- Parameters:
h – shape: (*batch_dims, dim) The head representations.
r – shape: (*batch_dims, dim) The relation representations.
t – shape: (*batch_dims, dim) The tail representations.
- Returns:
Triple scores.
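The method name describes the computation: the head and relation quaternions are combined with the Hamilton product and the result is scored against the tail via an inner product. Below is a minimal sketch of such a scoring function, assuming each embedding stores its four quaternion components as equal chunks along the last dimension; the chunking convention, dropout and normalisation of the actual dicee implementation may differ.

    import torch

    def quaternion_score_sketch(h: torch.Tensor, r: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Split each embedding of size dim into four quaternion components
        # (assumes dim is divisible by 4).
        a_h, b_h, c_h, d_h = torch.chunk(h, 4, dim=-1)
        a_r, b_r, c_r, d_r = torch.chunk(r, 4, dim=-1)
        a_t, b_t, c_t, d_t = torch.chunk(t, 4, dim=-1)
        # Hamilton product h ⊗ r.
        a = a_h * a_r - b_h * b_r - c_h * c_r - d_h * d_r
        b = a_h * b_r + b_h * a_r + c_h * d_r - d_h * c_r
        c = a_h * c_r - b_h * d_r + c_h * a_r + d_h * b_r
        d = a_h * d_r + b_h * c_r - c_h * b_r + d_h * a_r
        # Inner product with the tail embedding, summed over the embedding dimension.
        return (a * a_t + b * b_t + c * c_t + d * d_t).sum(dim=-1)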
- static quaternion_normalizer(x: torch.FloatTensor) torch.FloatTensor [source]
Normalize the length of relation vectors, if the forward constraint has not been applied yet.
Absolute value of a quaternion:

\[|a + bi + cj + dk| = \sqrt{a^2 + b^2 + c^2 + d^2}\]

L2 norm of a quaternion vector:

\[\|x\|^2 = \sum_{i=1}^d |x_i|^2 = \sum_{i=1}^d (x_i.re^2 + x_i.im_1^2 + x_i.im_2^2 + x_i.im_3^2)\]

- Parameters:
x – The vector.
- Returns:
The normalized vector.
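A sketch of the per-quaternion unit normalisation described by the formulas above. It again assumes the four components are stored as equal chunks along the last dimension, which may not match dicee's internal layout exactly.

    import torch

    def quaternion_unit_normalize_sketch(x: torch.FloatTensor) -> torch.FloatTensor:
        # Interpret the last dimension as four stacked quaternion components.
        a, b, c, d = torch.chunk(x, 4, dim=-1)
        # |q| = sqrt(a^2 + b^2 + c^2 + d^2), kept strictly positive for numerical safety.
        norm = torch.sqrt(a ** 2 + b ** 2 + c ** 2 + d ** 2).clamp_min(1e-12)
        # Divide every component by the quaternion's absolute value, giving unit quaternions.
        return torch.cat([a / norm, b / norm, c / norm, d / norm], dim=-1)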
- score(head_ent_emb: torch.FloatTensor, rel_ent_emb: torch.FloatTensor, tail_ent_emb: torch.FloatTensor)[source]
- k_vs_all_score(bpe_head_ent_emb, bpe_rel_ent_emb, E)[source]
- Parameters:
bpe_head_ent_emb – Head entity embeddings.
bpe_rel_ent_emb – Relation embeddings.
E – Entity embedding matrix to score against.
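The parameter names suggest that the head and relation embeddings are combined as above and then scored against every row of the entity embedding matrix E. A hedged sketch of that k-vs-all scoring, ignoring any sub-token pooling and assuming E shares the component layout of the head and relation embeddings:

    import torch

    def k_vs_all_sketch(head_emb: torch.Tensor, rel_emb: torch.Tensor, E: torch.Tensor) -> torch.Tensor:
        # head_emb, rel_emb: (batch, dim); E: (num_entities, dim) – full entity embedding matrix.
        a_h, b_h, c_h, d_h = torch.chunk(head_emb, 4, dim=-1)
        a_r, b_r, c_r, d_r = torch.chunk(rel_emb, 4, dim=-1)
        # Hamilton product of head and relation quaternions (as in the sketch above).
        a = a_h * a_r - b_h * b_r - c_h * c_r - d_h * d_r
        b = a_h * b_r + b_h * a_r + c_h * d_r - d_h * c_r
        c = a_h * c_r - b_h * d_r + c_h * a_r + d_h * b_r
        d = a_h * d_r + b_h * c_r - c_h * b_r + d_h * a_r
        hr = torch.cat([a, b, c, d], dim=-1)   # (batch, dim)
        # Inner product with every entity at once: (batch, dim) @ (dim, num_entities).
        return hr @ E.t()                      # (batch, num_entities)

Expressing the final step as a single matrix product is what makes KvsAll-style training efficient: every entity is scored in one matrix multiplication per batch.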
- forward_k_vs_sample(x, target_entity_idx)[source]
Given a head entity and a relation (h, r), compute scores for the candidate tail entities selected by target_entity_idx, i.e. [score(h, r, x) | x in selected entities] => [0.0, 0.1, …, 0.8], shape => (1, |selected entities|). For a batch of head entities and relations the result has shape (batch size, |selected entities|).
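A minimal sketch of this sampled variant, assuming hr holds the already-computed head ⊗ relation features and target_entity_idx selects one row of candidate entities per example; the names are illustrative, not dicee internals.

    import torch

    def k_vs_sample_sketch(hr: torch.Tensor, E: torch.Tensor,
                           target_entity_idx: torch.LongTensor) -> torch.Tensor:
        # hr: (batch, dim) – combined head/relation features;
        # target_entity_idx: (batch, num_samples) – per-example candidate tail entities.
        candidates = E[target_entity_idx]                           # (batch, num_samples, dim)
        return torch.bmm(candidates, hr.unsqueeze(-1)).squeeze(-1)  # (batch, num_samples)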
- class dicee.models.quaternion.ConvQ(args)[source]
Bases:
dicee.models.base_model.BaseKGE
Convolutional Quaternion Knowledge Graph Embeddings
- name = 'ConvQ'
- entity_embeddings
- relation_embeddings
- conv2d
- fc_num_input
- fc1
- bn_conv1
- bn_conv2
- feature_map_dropout
- forward_k_vs_all(x: torch.Tensor)[source]
Given a head entity and a relation (h, r), compute scores for all entities, i.e. [score(h, r, x) | x in Entities] => [0.0, 0.1, …, 0.8], shape => (1, |Entities|). For a batch of head entities and relations the result has shape (batch size, |Entities|).
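ConvQ augments the quaternion score with convolutional features extracted from the stacked head and relation embeddings. The following sketch covers only that feature-extraction step and assumes a single-channel 2×dim input layout; the exact reshaping, batch-norm placement and the way these features enter the quaternion product follow the dicee source and may differ.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_feature_sketch(head_emb: torch.Tensor, rel_emb: torch.Tensor,
                            conv2d: nn.Conv2d, bn: nn.BatchNorm2d,
                            fc1: nn.Linear, dropout: nn.Dropout) -> torch.Tensor:
        # Stack the head and relation embeddings into a one-channel 2×dim "image":
        # (batch, 1, 2, dim).
        x = torch.stack([head_emb, rel_emb], dim=1).unsqueeze(1)
        x = F.relu(bn(conv2d(x)))    # convolutional feature maps
        x = dropout(x)
        x = x.flatten(start_dim=1)   # (batch, fc_num_input)
        return F.relu(fc1(x))        # project back to the embedding dimension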
- class dicee.models.quaternion.AConvQ(args)[source]
Bases:
dicee.models.base_model.BaseKGE
Additive Convolutional Quaternion Knowledge Graph Embeddings
- name = 'AConvQ'
- entity_embeddings
- relation_embeddings
- conv2d
- fc_num_input
- fc1
- bn_conv1
- bn_conv2
- feature_map_dropout
- forward_k_vs_all(x: torch.Tensor)[source]
Given a head entity and a relation (h, r), compute scores for all entities, i.e. [score(h, r, x) | x in Entities] => [0.0, 0.1, …, 0.8], shape => (1, |Entities|). For a batch of head entities and relations the result has shape (batch size, |Entities|).