dicee.models.real

Classes

DistMult

Embedding Entities and Relations for Learning and Inference in Knowledge Bases

TransE

Translating Embeddings for Modeling Multi-relational Data

Shallom

A shallow neural model for relation prediction (https://arxiv.org/abs/2101.09090)

Pyke

A Physical Embedding Model for Knowledge Graphs

CoKEConfig

Configuration for the CoKE (Contextualized Knowledge Graph Embedding) model.

CoKE

Contextualized Knowledge Graph Embedding (CoKE) model.

Module Contents

class dicee.models.real.DistMult(args)[source]

Bases: dicee.models.base_model.BaseKGE

Embedding Entities and Relations for Learning and Inference in Knowledge Bases https://arxiv.org/abs/1412.6575

name = 'DistMult'
k_vs_all_score(emb_h: torch.FloatTensor, emb_r: torch.FloatTensor, emb_E: torch.FloatTensor)[source]
Parameters:
  • emb_h - embeddings of the head entities, shape (batch_size, d)

  • emb_r - embeddings of the relations, shape (batch_size, d)

  • emb_E - embedding matrix of all entities, shape (num_entities, d)

forward_k_vs_all(x: torch.LongTensor)[source]
forward_k_vs_sample(x: torch.LongTensor, target_entity_idx: torch.LongTensor)[source]
score(h, r, t)[source]
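
DistMult scores a triple with a tri-linear dot product of head, relation, and tail embeddings, so the KvsAll score for a (head, relation) pair reduces to a single matrix product against all entity embeddings. A minimal sketch of that idea (the function names are illustrative, not the library's internals):

    import torch

    def distmult_score(emb_h, emb_r, emb_t):
        # Tri-linear dot product: sum_k h_k * r_k * t_k
        return (emb_h * emb_r * emb_t).sum(dim=-1)

    def distmult_k_vs_all(emb_h, emb_r, emb_E):
        # Score each (head, relation) pair against every entity at once:
        # (B, d) * (B, d) -> (B, d), then (B, d) @ (d, |E|) -> (B, |E|)
        return (emb_h * emb_r) @ emb_E.t()
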
class dicee.models.real.TransE(args)[source]

Bases: dicee.models.base_model.BaseKGE

Translating Embeddings for Modeling Multi-relational Data https://proceedings.neurips.cc/paper/2013/file/1cecc7a77928ca8133fa24680a88d2f9-Paper.pdf

name = 'TransE'
margin = 4
score(head_ent_emb, rel_ent_emb, tail_ent_emb)[source]
forward_k_vs_all(x: torch.Tensor) torch.FloatTensor[source]
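
TransE models a relation as a translation in embedding space: a triple is plausible when head + relation lies close to tail. A minimal sketch of the margin-minus-distance score, assuming an L2 norm (the norm order actually used by the class should be checked in the source); the margin mirrors the class attribute above:

    import torch

    MARGIN = 4.0  # mirrors the class attribute `margin = 4`

    def transe_score(emb_h, emb_r, emb_t, p=2):
        # Larger score = more plausible triple: margin minus translation distance
        return MARGIN - torch.norm(emb_h + emb_r - emb_t, p=p, dim=-1)
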
class dicee.models.real.Shallom(args)[source]

Bases: dicee.models.base_model.BaseKGE

A shallow neural model for relation prediction (https://arxiv.org/abs/2101.09090)

name = 'Shallom'
shallom
get_embeddings() Tuple[numpy.ndarray, None][source]
forward_k_vs_all(x) torch.FloatTensor[source]
forward_triples(x) torch.FloatTensor[source]
Parameters:

x - a torch.LongTensor of (head, relation, tail) index triples

Returns:

a torch.FloatTensor of scores for the given triples
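
Shallom treats relation prediction as classification over all relations given a (head, tail) pair: the two entity embeddings are concatenated and passed through a shallow feed-forward network. A self-contained sketch of that architecture, with illustrative hyperparameters (hidden width, dropout) rather than the library's defaults:

    import torch
    from torch import nn

    class ShallomSketch(nn.Module):
        # Illustrative re-implementation of the idea, not the dicee class itself.
        def __init__(self, num_entities, num_relations, emb_dim, hidden=128, dropout=0.1):
            super().__init__()
            self.entity_emb = nn.Embedding(num_entities, emb_dim)
            self.shallom = nn.Sequential(
                nn.Linear(2 * emb_dim, hidden),
                nn.Dropout(dropout),
                nn.ReLU(),
                nn.Linear(hidden, num_relations),
            )

        def forward(self, heads, tails):
            # Concatenate head and tail embeddings, score every relation
            x = torch.cat((self.entity_emb(heads), self.entity_emb(tails)), dim=1)
            return self.shallom(x)
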

class dicee.models.real.Pyke(args)[source]

Bases: dicee.models.base_model.BaseKGE

A Physical Embedding Model for Knowledge Graphs

name = 'Pyke'
dist_func
margin = 1.0
forward_triples(x: torch.LongTensor)[source]
Parameters:

x - a (batch_size, 3) torch.LongTensor of (head, relation, tail) index triples
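
Pyke scores triples with a plain distance function (the dist_func attribute above) and the margin of 1.0. One plausible reading of those attributes, shown only as an assumption about the scoring shape and not as the library's exact formulation:

    import torch

    dist_func = torch.nn.PairwiseDistance(p=2)  # assumed concrete choice for the `dist_func` attribute
    MARGIN = 1.0                                # mirrors the class attribute `margin = 1.0`

    def pyke_score(emb_h, emb_r, emb_t):
        # Average pairwise distance over the triple, turned into a score
        # by subtracting it from the margin (larger = more plausible).
        avg_dist = (dist_func(emb_h, emb_r) + dist_func(emb_r, emb_t)) / 2
        return MARGIN - avg_dist
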

class dicee.models.real.CoKEConfig[source]

Configuration for the CoKE (Contextualized Knowledge Graph Embedding) model.

block_size

Sequence length for transformer (3 for triples: head, relation, tail)

vocab_size

Total vocabulary size (num_entities + num_relations)

n_layer

Number of transformer layers

n_head

Number of attention heads per layer

n_embd

Embedding dimension (set to match model embedding_dim)

dropout

Dropout rate applied throughout the model

bias

Whether to use bias in linear layers

causal

Whether to use causal masking (False for bidirectional attention)

block_size: int = 3
vocab_size: int = None
n_layer: int = 6
n_head: int = 8
n_embd: int = None
dropout: float = 0.3
bias: bool = True
causal: bool = False
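
Assuming the configuration behaves like a plain dataclass, it can be instantiated with keyword overrides; vocab_size and n_embd default to None and are expected to be filled in to match the dataset (num_entities + num_relations) and the model's embedding_dim. The sizes below are purely illustrative:

    from dicee.models.real import CoKEConfig

    config = CoKEConfig(
        vocab_size=14541 + 237,  # num_entities + num_relations (illustrative numbers)
        n_embd=256,              # should match the model's embedding_dim
        n_layer=6,
        n_head=8,
        dropout=0.3,
    )
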
class dicee.models.real.CoKE(args, config: CoKEConfig = CoKEConfig())[source]

Bases: dicee.models.base_model.BaseKGE

Contextualized Knowledge Graph Embedding (CoKE) model. Based on: https://arxiv.org/pdf/1911.02168.

CoKE uses a transformer encoder to learn contextualized representations of entities and relations. For link prediction, it predicts masked elements in (head, relation, tail) triples using bidirectional attention, similar to BERT’s masked language modeling approach.

The model creates a sequence [head_emb, relation_emb, mask_emb], adds positional embeddings, and processes it through transformer layers to predict the tail entity.

name = 'CoKE'
config
pos_emb
mask_emb
blocks
ln_f
coke_dropout
forward_k_vs_all(x: torch.Tensor)[source]
score(emb_h, emb_r, emb_t)[source]
forward_k_vs_sample(x: torch.LongTensor, target_entity_idx: torch.LongTensor)[source]
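
The forward pass described above can be pictured as follows. The helper is a schematic only, with hypothetical tensor and module arguments (pos_emb, blocks, ln_f mirror the attributes listed above), not the module's actual code:

    import torch

    def coke_k_vs_all_sketch(emb_head, emb_rel, mask_emb, pos_emb, blocks, ln_f, entity_emb):
        # Build the length-3 sequence [head, relation, MASK], then add positional embeddings
        seq = torch.stack((emb_head, emb_rel, mask_emb.expand_as(emb_head)), dim=1)  # (B, 3, d)
        seq = seq + pos_emb[:3]
        # Bidirectional transformer encoder (causal=False in CoKEConfig)
        for block in blocks:
            seq = block(seq)
        seq = ln_f(seq)
        # The hidden state at the MASK position is scored against all entity embeddings
        return seq[:, -1, :] @ entity_emb.t()  # (B, num_entities)
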