dicee.models.complex
Classes

| ConEx   | Convolutional ComplEx Knowledge Graph Embeddings          |
| AConEx  | Additive Convolutional ComplEx Knowledge Graph Embeddings |
| ComplEx | Base class for all Knowledge Graph Embedding models.      |

Module Contents
- class dicee.models.complex.ConEx(args)[source]
Bases: dicee.models.base_model.BaseKGE

Convolutional ComplEx Knowledge Graph Embeddings
- name = 'ConEx'
- conv2d
- fc_num_input
- fc1
- norm_fc1
- bn_conv2d
- feature_map_dropout
- residual_convolution(C_1: Tuple[torch.Tensor, torch.Tensor], C_2: Tuple[torch.Tensor, torch.Tensor]) torch.FloatTensor[source]
Compute the residual score of two complex-valued embeddings.
- Parameters:
  C_1 – a tuple of two PyTorch tensors holding the real and imaginary parts of a complex-valued embedding
  C_2 – a tuple of two PyTorch tensors holding the real and imaginary parts of a complex-valued embedding
- Returns:
  The residual score tensor.
- forward_k_vs_all(x: torch.Tensor) torch.FloatTensor[source]
Score a (head, relation) batch against every entity.

Sub-classes must override this method. The default implementation raises ValueError to make missing overrides obvious at runtime.
- Returns:
  Shape (batch_size, num_entities) score matrix.
- Return type:
  torch.FloatTensor
- forward_triples(x: torch.Tensor) torch.FloatTensor[source]
Score a batch of (head, relation, tail) index triples.
- Parameters:
  x (torch.LongTensor) – Shape (batch_size, 3) integer tensor where each row is [head_idx, relation_idx, tail_idx].
- Returns:
  Shape (batch_size,) triple scores.
- Return type:
  torch.FloatTensor
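The (batch_size, 3) input contract can be illustrated with a toy embedding lookup. This sketch uses NumPy for illustration only; the sizes and array names are made up and are not dicee defaults:

```python
import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 5, 2, 8  # toy sizes (assumed)

entity_emb = rng.normal(size=(num_entities, dim))
relation_emb = rng.normal(size=(num_relations, dim))

# Each row of x is [head_idx, relation_idx, tail_idx].
x = np.array([[0, 1, 2],
              [3, 0, 1]])
h = entity_emb[x[:, 0]]    # (batch_size, dim) head embeddings
r = relation_emb[x[:, 1]]  # (batch_size, dim) relation embeddings
t = entity_emb[x[:, 2]]    # (batch_size, dim) tail embeddings
```

The model then scores each (h, r, t) row and returns one scalar per triple.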
- forward_k_vs_sample(x: torch.Tensor, target_entity_idx: torch.Tensor)[source]
Score a (head, relation) batch against a sampled subset of entities.

Used by KvsSample and 1vsSample datasets. Sub-classes that support sample-based labelling must override this method.
- Returns:
  Shape (batch_size, k) score matrix where k is the number of sampled target entities.
- Return type:
  torch.FloatTensor
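How the sampled targets select rows of the entity embedding table can be sketched with plain NumPy fancy indexing (hypothetical sizes, illustration only):

```python
import numpy as np

num_entities, dim = 10, 4  # toy sizes (assumed)
entity_emb = np.arange(num_entities * dim, dtype=float).reshape(num_entities, dim)

# target_entity_idx holds k candidate tail indices per query row.
target_entity_idx = np.array([[0, 3, 7],
                              [1, 2, 9]])     # (batch_size, k)
sampled = entity_emb[target_entity_idx]       # (batch_size, k, dim)
```

Scoring each query against its own k candidates then yields the (batch_size, k) matrix described above.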
- class dicee.models.complex.AConEx(args)[source]
Bases: dicee.models.base_model.BaseKGE

Additive Convolutional ComplEx Knowledge Graph Embeddings
- name = 'AConEx'
- conv2d
- fc_num_input
- fc1
- norm_fc1
- bn_conv2d
- feature_map_dropout
- residual_convolution(C_1: Tuple[torch.Tensor, torch.Tensor], C_2: Tuple[torch.Tensor, torch.Tensor]) torch.FloatTensor[source]
Compute the residual score of two complex-valued embeddings.
- Parameters:
  C_1 – a tuple of two PyTorch tensors holding the real and imaginary parts of a complex-valued embedding
  C_2 – a tuple of two PyTorch tensors holding the real and imaginary parts of a complex-valued embedding
- Returns:
  The residual score tensor.
- forward_k_vs_all(x: torch.Tensor) torch.FloatTensor[source]
Score a (head, relation) batch against every entity.

Sub-classes must override this method. The default implementation raises ValueError to make missing overrides obvious at runtime.
- Returns:
  Shape (batch_size, num_entities) score matrix.
- Return type:
  torch.FloatTensor
- forward_triples(x: torch.Tensor) torch.FloatTensor[source]
Score a batch of (head, relation, tail) index triples.
- Parameters:
  x (torch.LongTensor) – Shape (batch_size, 3) integer tensor where each row is [head_idx, relation_idx, tail_idx].
- Returns:
  Shape (batch_size,) triple scores.
- Return type:
  torch.FloatTensor
- forward_k_vs_sample(x: torch.Tensor, target_entity_idx: torch.Tensor)[source]
Score a (head, relation) batch against a sampled subset of entities.

Used by KvsSample and 1vsSample datasets. Sub-classes that support sample-based labelling must override this method.
- Returns:
  Shape (batch_size, k) score matrix where k is the number of sampled target entities.
- Return type:
  torch.FloatTensor
- class dicee.models.complex.ComplEx(args)[source]
Bases: dicee.models.base_model.BaseKGE

Base class for all Knowledge Graph Embedding models.

Inherits the Lightning training loop from BaseKGELightning and adds the embedding tables, normalisation / dropout layers, and the routing logic that dispatches forward() calls to the appropriate scoring method.

Sub-classes must implement at minimum:
- forward_triples() – score a batch of (h, r, t) triples.
- forward_k_vs_all() – score a (h, r) batch against every entity.
- Parameters:
  args (dict) – Flat configuration dictionary produced by vars(argparse.Namespace). Required keys: embedding_dim, num_entities, num_relations, learning_rate (or lr), optim, scoring_technique.
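A minimal args dictionary covering the required keys might look like the following. The values are illustrative placeholders, not recommended settings:

```python
# Illustrative flat configuration; keys mirror vars(argparse.Namespace).
args = {
    "embedding_dim": 32,
    "num_entities": 100,
    "num_relations": 10,
    "learning_rate": 0.01,       # the key "lr" is accepted as an alternative
    "optim": "Adam",
    "scoring_technique": "KvsAll",
}
```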
- name = 'ComplEx'
- static score(head_ent_emb: torch.FloatTensor, rel_ent_emb: torch.FloatTensor, tail_ent_emb: torch.FloatTensor)[source]
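The ComplEx triple score is the real part of a trilinear product, Re(⟨h, r, conj(t)⟩). A minimal NumPy sketch of that computation, assuming each embedding vector stores its real half first and its imaginary half second (the exact storage layout in dicee is an assumption here):

```python
import numpy as np

def complex_score(head, rel, tail):
    """Re(<h, r, conj(t)>) with embeddings stored as [real | imag] halves."""
    re_h, im_h = np.split(head, 2, axis=-1)
    re_r, im_r = np.split(rel, 2, axis=-1)
    re_t, im_t = np.split(tail, 2, axis=-1)
    # Expansion of the real part into four real-valued terms.
    return (re_h * re_r * re_t
            + re_h * im_r * im_t
            + im_h * re_r * im_t
            - im_h * im_r * re_t).sum(axis=-1)
```

The four-term expansion follows directly from multiplying out (a + bi)(c + di)(e − fi) and keeping the real part.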
- static k_vs_all_score(emb_h: torch.FloatTensor, emb_r: torch.FloatTensor, emb_E: torch.FloatTensor)[source]
- Parameters:
  emb_h – head-entity embeddings for the batch
  emb_r – relation embeddings for the batch
  emb_E – the full entity embedding matrix (one row per entity)
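A 1-vs-all variant of the same score can be written as four matrix products against the full entity table. This is a NumPy sketch under the same assumed [real | imag] storage layout, not the dicee implementation itself:

```python
import numpy as np

def k_vs_all_score(emb_h, emb_r, emb_E):
    """Score every entity as a tail for each (head, relation) pair."""
    re_h, im_h = np.split(emb_h, 2, axis=-1)
    re_r, im_r = np.split(emb_r, 2, axis=-1)
    re_E, im_E = np.split(emb_E, 2, axis=-1)
    # (batch, d) @ (d, num_entities) -> (batch, num_entities)
    return ((re_h * re_r) @ re_E.T
            + (re_h * im_r) @ im_E.T
            + (im_h * re_r) @ im_E.T
            - (im_h * im_r) @ re_E.T)
```

Replacing the per-triple sum with matrix products is what makes KvsAll training efficient: one pass scores every candidate tail at once.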
- forward_k_vs_all(x: torch.LongTensor) torch.FloatTensor[source]
Score a (head, relation) batch against every entity.

Sub-classes must override this method. The default implementation raises ValueError to make missing overrides obvious at runtime.
- Returns:
  Shape (batch_size, num_entities) score matrix.
- Return type:
  torch.FloatTensor
- forward_k_vs_sample(x: torch.LongTensor, target_entity_idx: torch.LongTensor)[source]
Score a (head, relation) batch against a sampled subset of entities.

Used by KvsSample and 1vsSample datasets. Sub-classes that support sample-based labelling must override this method.
- Returns:
  Shape (batch_size, k) score matrix where k is the number of sampled target entities.
- Return type:
  torch.FloatTensor