
dicee.evaluation.link_prediction

Link prediction evaluation functions.

This module provides various functions for evaluating link prediction performance of knowledge graph embedding models.

Functions

evaluate_link_prediction_performance(...) → Dict[str, float]

Evaluate link prediction performance with head and tail prediction.

evaluate_link_prediction_performance_with_reciprocals(...)

Evaluate link prediction with reciprocal relations.

evaluate_link_prediction_performance_with_bpe_reciprocals(...)

Evaluate link prediction with BPE encoding and reciprocals.

evaluate_link_prediction_performance_with_bpe(...)

Evaluate link prediction with BPE encoding (head and tail).

evaluate_lp(...) → Dict[str, float]

Evaluate link prediction with batched processing.

evaluate_bpe_lp(...) → Dict[str, float]

Evaluate link prediction with BPE-encoded entities.

evaluate_lp_bpe_k_vs_all(...) → Dict[str, float]

Evaluate BPE link prediction with KvsAll scoring.

Module Contents

dicee.evaluation.link_prediction.evaluate_link_prediction_performance(model, triples, er_vocab: Dict[Tuple, List], re_vocab: Dict[Tuple, List]) → Dict[str, float]

Evaluate link prediction performance with head and tail prediction.

Performs filtered evaluation: known correct answers other than the target entity are removed from the candidate list before ranks are computed.

Parameters:
  • model – KGE model wrapper with entity/relation mappings.

  • triples – Test triples as list of (head, relation, tail) strings.

  • er_vocab – Mapping (entity, relation) -> list of valid tail entities.

  • re_vocab – Mapping (relation, entity) -> list of valid head entities.

Returns:

Dictionary with H@1, H@3, H@10, and MRR metrics.
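The filtered protocol and the structure of er_vocab / re_vocab can be illustrated with a minimal, self-contained sketch. This is a toy illustration of the semantics, not the dicee implementation; the helper filtered_rank and the hand-written score dictionary are hypothetical stand-ins for a model's output.

```python
from collections import defaultdict

# Toy knowledge graph: (head, relation, tail) string triples.
triples = [
    ("a", "r", "b"),
    ("a", "r", "c"),
    ("d", "r", "b"),
]

# er_vocab maps (head, relation) -> all known correct tails;
# re_vocab maps (relation, tail) -> all known correct heads.
er_vocab, re_vocab = defaultdict(list), defaultdict(list)
for h, r, t in triples:
    er_vocab[(h, r)].append(t)
    re_vocab[(r, t)].append(h)

def filtered_rank(scores, target, known):
    """Rank of `target` after removing the other known-correct answers."""
    kept = {e: s for e, s in scores.items()
            if e == target or e not in known}
    ordered = sorted(kept, key=kept.get, reverse=True)
    return ordered.index(target) + 1

# Hypothetical model scores for tail prediction on ("a", "r", ?).
scores = {"a": 0.1, "b": 0.9, "c": 0.8, "d": 0.2}
rank = filtered_rank(scores, "c", er_vocab[("a", "r")])
# "b" is another known tail for ("a", "r"), so it is filtered out
# and "c" is ranked 1st instead of 2nd.
```

MRR is then the mean of 1/rank over all test triples, and H@k is the fraction of test triples whose filtered rank is at most k.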

dicee.evaluation.link_prediction.evaluate_link_prediction_performance_with_reciprocals(model, triples, er_vocab: Dict[Tuple, List]) → Dict[str, float]

Evaluate link prediction with reciprocal relations.

Optimized for models trained with reciprocal triples where only tail prediction is needed.

Parameters:
  • model – KGE model wrapper.

  • triples – Test triples as list of (head, relation, tail) strings.

  • er_vocab – Mapping (entity, relation) -> list of valid tail entities.

Returns:

Dictionary with H@1, H@3, H@10, and MRR metrics.
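The reciprocal-relation trick can be sketched as follows: for every training fact (h, r, t), an inverse fact (t, r⁻¹, h) is added, so head prediction for (?, r, t) reduces to tail prediction over the inverse relation, and only one scoring direction is needed at evaluation time. The "_inverse" suffix below is an illustrative naming choice, not necessarily dicee's.

```python
def add_reciprocals(triples):
    """For every (h, r, t) also emit (t, r + "_inverse", h), so head
    prediction can be answered as tail prediction over the inverse."""
    out = []
    for h, r, t in triples:
        out.append((h, r, t))
        out.append((t, r + "_inverse", h))
    return out

train = [("paris", "capital_of", "france")]
augmented = add_reciprocals(train)
# augmented now also contains ("france", "capital_of_inverse", "paris")
```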

dicee.evaluation.link_prediction.evaluate_link_prediction_performance_with_bpe_reciprocals(model, within_entities: List[str], triples: List[List[str]], er_vocab: Dict[Tuple, List]) → Dict[str, float]

Evaluate link prediction with BPE encoding and reciprocals.

Parameters:
  • model – KGE model wrapper with BPE support.

  • within_entities – List of entities to evaluate within.

  • triples – Test triples as list of [head, relation, tail] strings.

  • er_vocab – Mapping (entity, relation) -> list of valid tail entities.

Returns:

Dictionary with H@1, H@3, H@10, and MRR metrics.
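With BPE, each entity is represented by a sequence of subword-token ids rather than a single index, and those sequences must be padded to a common length before they can be batched. A minimal sketch of this shaping step, with hypothetical token ids and a hypothetical helper name:

```python
def to_bpe_shaped(entities, token_ids, pad_id=0):
    """Pad each entity's subword-token id sequence to the same length
    so entities can be stacked into one fixed-shape batch."""
    max_len = max(len(token_ids[e]) for e in entities)
    return {e: token_ids[e] + [pad_id] * (max_len - len(token_ids[e]))
            for e in entities}

# Hypothetical subword tokenizations of two entity names.
ids = {"paris": [17, 4], "new_york": [8, 21, 3]}
shaped = to_bpe_shaped(["paris", "new_york"], ids)
# {'paris': [17, 4, 0], 'new_york': [8, 21, 3]}
```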

dicee.evaluation.link_prediction.evaluate_link_prediction_performance_with_bpe(model, within_entities: List[str], triples: List[Tuple[str]], er_vocab: Dict[Tuple, List], re_vocab: Dict[Tuple, List]) → Dict[str, float]

Evaluate link prediction with BPE encoding (head and tail).

Parameters:
  • model – KGE model wrapper with BPE support.

  • within_entities – List of entities to evaluate within.

  • triples – Test triples as list of (head, relation, tail) tuples.

  • er_vocab – Mapping (entity, relation) -> list of valid tail entities.

  • re_vocab – Mapping (relation, entity) -> list of valid head entities.

Returns:

Dictionary with H@1, H@3, H@10, and MRR metrics.

dicee.evaluation.link_prediction.evaluate_lp(model, triple_idx, num_entities: int, er_vocab: Dict[Tuple, List], re_vocab: Dict[Tuple, List], info: str = 'Eval Starts', batch_size: int = 128, chunk_size: int = 1000) → Dict[str, float]

Evaluate link prediction with batched processing.

Memory-efficient evaluation using chunked entity scoring.

Parameters:
  • model – The KGE model to evaluate.

  • triple_idx – Integer-indexed triples as numpy array.

  • num_entities – Total number of entities.

  • er_vocab – Mapping (head_idx, rel_idx) -> list of tail indices.

  • re_vocab – Mapping (rel_idx, tail_idx) -> list of head indices.

  • info – Description to print.

  • batch_size – Batch size for triple processing.

  • chunk_size – Chunk size for entity scoring.

Returns:

Dictionary with H@1, H@3, H@10, and MRR metrics.
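The chunking idea is that candidate entities are scored chunk_size at a time instead of materializing one score per entity all at once, which bounds peak memory for large entity sets. A self-contained sketch of the pattern, where toy_score is a hypothetical stand-in for the model's forward pass:

```python
def score_in_chunks(score_fn, head, rel, num_entities, chunk_size):
    """Score (head, rel, e) for every candidate entity index e,
    chunk_size candidates at a time, to bound peak memory."""
    scores = []
    for start in range(0, num_entities, chunk_size):
        stop = min(start + chunk_size, num_entities)
        scores.extend(score_fn(head, rel, e) for e in range(start, stop))
    return scores

def toy_score(h, r, e):
    # Stand-in for a KGE model's scoring function (hypothetical).
    return -abs(h + r - e)

all_scores = score_in_chunks(toy_score, head=2, rel=3,
                             num_entities=10, chunk_size=4)
best = max(range(10), key=all_scores.__getitem__)  # entity 5 scores highest
```

The result is identical to scoring all entities in one pass; only the peak memory per step changes.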

dicee.evaluation.link_prediction.evaluate_bpe_lp(model, triple_idx: List[Tuple], all_bpe_shaped_entities, er_vocab: Dict[Tuple, List], re_vocab: Dict[Tuple, List], info: str = 'Eval Starts') → Dict[str, float]

Evaluate link prediction with BPE-encoded entities.

Parameters:
  • model – The KGE model to evaluate.

  • triple_idx – List of BPE-encoded triple tuples.

  • all_bpe_shaped_entities – All entities with BPE representations.

  • er_vocab – Mapping for tail filtering.

  • re_vocab – Mapping for head filtering.

  • info – Description to print.

Returns:

Dictionary with H@1, H@3, H@10, and MRR metrics.

dicee.evaluation.link_prediction.evaluate_lp_bpe_k_vs_all(model, triples: List[List[str]], er_vocab: Dict = None, batch_size: int = None, func_triple_to_bpe_representation: Callable = None, str_to_bpe_entity_to_idx: Dict = None) → Dict[str, float]

Evaluate BPE link prediction with KvsAll scoring.

Parameters:
  • model – The KGE model wrapper.

  • triples – List of string triples.

  • er_vocab – Entity-relation vocabulary for filtering.

  • batch_size – Batch size for processing.

  • func_triple_to_bpe_representation – Function to convert triples to BPE.

  • str_to_bpe_entity_to_idx – Mapping from string entities to BPE indices.

Returns:

Dictionary with H@1, H@3, H@10, and MRR metrics.
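In KvsAll scoring the model emits one score per candidate entity for each (head, relation) pair, and filtering masks the other known-true tails before ranking. A minimal sketch with a hypothetical helper name and toy scores:

```python
import math

def kvsall_filtered_rank(all_scores, target_idx, known_tail_idxs):
    """Given one score per entity for a (head, relation) pair, mask the
    other known-true tails and return the target entity's rank."""
    masked = list(all_scores)
    for idx in known_tail_idxs:
        if idx != target_idx:
            masked[idx] = -math.inf
    order = sorted(range(len(masked)), key=masked.__getitem__, reverse=True)
    return order.index(target_idx) + 1

scores = [0.2, 0.9, 0.7, 0.1]  # one score per entity index
rank = kvsall_filtered_rank(scores, target_idx=2, known_tail_idxs=[1, 2])
# entity 1 (score 0.9) is another known tail, so it is masked
# and the target entity 2 is ranked first.
```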


© Copyright 2023, Caglar Demir.
