dicee.models.real
Classes
- DistMult: Embedding Entities and Relations for Learning and Inference in Knowledge Bases
- TransE: Translating Embeddings for Modeling Multi-relational Data
- Shallom: A shallow neural model for relation prediction (https://arxiv.org/abs/2101.09090)
- Pyke: A Physical Embedding Model for Knowledge Graphs
- CoKEConfig: Configuration for the CoKE (Contextualized Knowledge Graph Embedding) model.
- CoKE: Contextualized Knowledge Graph Embedding (CoKE) model.
Module Contents
- class dicee.models.real.DistMult(args)[source]
Bases:
dicee.models.base_model.BaseKGE

Embedding Entities and Relations for Learning and Inference in Knowledge Bases. https://arxiv.org/abs/1412.6575
- name = 'DistMult'
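For orientation, a minimal sketch of the DistMult scoring function from the paper: the score of a triple is the tri-linear dot product of the head, relation, and tail embeddings. The helper name and tensor shapes below are illustrative, not part of the dicee API.

```python
import torch

# Illustrative DistMult score: <e_h, w_r, e_t> = sum_i e_h[i] * w_r[i] * e_t[i].
def distmult_score(head: torch.Tensor, rel: torch.Tensor, tail: torch.Tensor) -> torch.Tensor:
    # head, rel, tail: (batch_size, embedding_dim) real-valued embeddings
    return (head * rel * tail).sum(dim=-1)

h, r, t = (torch.randn(4, 32) for _ in range(3))
print(distmult_score(h, r, t).shape)  # torch.Size([4])
```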
- class dicee.models.real.TransE(args)[source]
Bases:
dicee.models.base_model.BaseKGE

Translating Embeddings for Modeling Multi-relational Data. https://proceedings.neurips.cc/paper/2013/file/1cecc7a77928ca8133fa24680a88d2f9-Paper.pdf
- name = 'TransE'
- margin = 4
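The margin attribute points to a margin-based ranking objective; the score itself measures how close e_h + w_r lands to e_t. A minimal sketch of that score follows (the function name and shapes are illustrative, not the dicee API).

```python
import torch

# Illustrative TransE score: a triple is plausible when e_h + w_r is near e_t,
# i.e. score(h, r, t) = -||e_h + w_r - e_t||_p with p = 1 or 2.
def transe_score(head: torch.Tensor, rel: torch.Tensor, tail: torch.Tensor, p: int = 2) -> torch.Tensor:
    return -torch.norm(head + rel - tail, p=p, dim=-1)

h, r, t = (torch.randn(4, 32) for _ in range(3))
print(transe_score(h, r, t))  # higher (less negative) means more plausible
```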
- class dicee.models.real.Shallom(args)[source]
Bases:
dicee.models.base_model.BaseKGE

A shallow neural model for relation prediction (https://arxiv.org/abs/2101.09090)
- name = 'Shallom'
- shallom
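As a hedged sketch of the idea in the paper: Shallom predicts the relation of a pair (head, tail) from the concatenation of the two entity embeddings with a shallow feed-forward network. The class name, layer sizes, and shapes below are illustrative and may differ from the dicee implementation.

```python
import torch
import torch.nn as nn

# Hedged sketch of a Shallom-style relation predictor: concatenate the head and
# tail entity embeddings, then map them to one score per relation with a shallow MLP.
class ShallomSketch(nn.Module):
    def __init__(self, num_entities: int, num_relations: int, dim: int = 32, hidden: int = 64):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_relations),
        )

    def forward(self, head_idx: torch.Tensor, tail_idx: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([self.entity_emb(head_idx), self.entity_emb(tail_idx)], dim=-1)
        return self.mlp(pair)  # (batch_size, num_relations)

model = ShallomSketch(num_entities=100, num_relations=10)
print(model(torch.tensor([0, 1]), torch.tensor([2, 3])).shape)  # torch.Size([2, 10])
```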
- class dicee.models.real.Pyke(args)[source]
Bases:
dicee.models.base_model.BaseKGE

A Physical Embedding Model for Knowledge Graphs
- name = 'Pyke'
- dist_func
- margin = 1.0
- class dicee.models.real.CoKEConfig[source]
Configuration for the CoKE (Contextualized Knowledge Graph Embedding) model.
- block_size
Sequence length for transformer (3 for triples: head, relation, tail)
- vocab_size
Total vocabulary size (num_entities + num_relations)
- n_layer
Number of transformer layers
- n_head
Number of attention heads per layer
- n_embd
Embedding dimension (set to match model embedding_dim)
- dropout
Dropout rate applied throughout the model
- bias
Whether to use bias in linear layers
- causal
Whether to use causal masking (False for bidirectional attention)
- block_size: int = 3
- vocab_size: int = None
- n_layer: int = 6
- n_head: int = 8
- n_embd: int = None
- dropout: float = 0.3
- bias: bool = True
- causal: bool = False
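A minimal usage sketch, assuming CoKEConfig can be instantiated with its defaults and the unset fields assigned afterwards; the numeric values are illustrative, and whether fields can also be passed to the constructor depends on the implementation.

```python
from dicee.models.real import CoKEConfig

# Fill in the two fields that default to None before handing the config to CoKE.
config = CoKEConfig()
config.vocab_size = 14541 + 237   # num_entities + num_relations (illustrative counts)
config.n_embd = 256               # set to match the model's embedding_dim
config.dropout = 0.1              # override the 0.3 default if desired
```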
- class dicee.models.real.CoKE(args, config: CoKEConfig = CoKEConfig())[source]
Bases:
dicee.models.base_model.BaseKGE

Contextualized Knowledge Graph Embedding (CoKE) model. Based on: https://arxiv.org/pdf/1911.02168.
CoKE uses a transformer encoder to learn contextualized representations of entities and relations. For link prediction, it predicts masked elements in (head, relation, tail) triples using bidirectional attention, similar to BERT’s masked language modeling approach.
The model creates a sequence [head_emb, relation_emb, mask_emb], adds positional embeddings, and processes it through transformer layers to predict the tail entity.
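To make that pipeline concrete, here is a hedged sketch using a generic PyTorch transformer encoder; it is not the dicee implementation, and all class names, attributes, and dimensions below are illustrative.

```python
import torch
import torch.nn as nn

# Hedged sketch of the CoKE forward idea: embed [head, relation, MASK], add
# positional embeddings, run bidirectional transformer layers, score the masked
# position against the whole vocabulary to predict the tail.
class CoKESketch(nn.Module):
    def __init__(self, vocab_size: int, n_embd: int = 256, n_head: int = 8, n_layer: int = 6):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, n_embd)          # entities + relations
        self.pos_emb = nn.Parameter(torch.zeros(1, 3, n_embd))   # 3 positions: head, relation, mask
        self.mask_emb = nn.Parameter(torch.zeros(1, 1, n_embd))  # learned [MASK] vector
        layer = nn.TransformerEncoderLayer(n_embd, n_head, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)      # no causal mask: bidirectional
        self.ln_f = nn.LayerNorm(n_embd)

    def forward(self, head_idx: torch.Tensor, rel_idx: torch.Tensor) -> torch.Tensor:
        h = self.tok_emb(head_idx).unsqueeze(1)                  # (B, 1, D)
        r = self.tok_emb(rel_idx).unsqueeze(1)                   # (B, 1, D)
        m = self.mask_emb.expand(h.size(0), -1, -1)              # (B, 1, D)
        x = torch.cat([h, r, m], dim=1) + self.pos_emb           # (B, 3, D)
        x = self.ln_f(self.blocks(x))
        # Score the masked (tail) position against every token in the vocabulary.
        return x[:, 2, :] @ self.tok_emb.weight.T                # (B, vocab_size)
```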
- name = 'CoKE'
- config
- pos_emb
- mask_emb
- blocks
- ln_f
- coke_dropout