dicee.eval_static_funcs

Functions

evaluate_link_prediction_performance(...) → Dict

evaluate_link_prediction_performance_with_reciprocals(...)

evaluate_link_prediction_performance_with_bpe_reciprocals(...)

evaluate_link_prediction_performance_with_bpe(model, ...)

evaluate_lp_bpe_k_vs_all(model, triples[, er_vocab, ...])

evaluate_literal_prediction(kge_model[, ...])

Evaluates the trained literal prediction model on a test file.

Module Contents

dicee.eval_static_funcs.evaluate_link_prediction_performance_with_reciprocals(...)[source]

Parameters:
  • model

  • triples

  • er_vocab

  • re_vocab

dicee.eval_static_funcs.evaluate_link_prediction_performance_with_bpe(...)[source]

Parameters:
  • model

  • triples

  • within_entities

  • er_vocab

  • re_vocab

dicee.eval_static_funcs.evaluate_lp_bpe_k_vs_all(model, triples: List[List[str]], er_vocab=None, batch_size=None, func_triple_to_bpe_representation: Callable = None, str_to_bpe_entity_to_idx=None)[source]
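The role of er_vocab in the K-vs-All setting can be illustrated with a small, self-contained sketch. All names and data below are hypothetical (the real function scores triples with a trained model rather than a toy score table): er_vocab maps each (head, relation) pair to every known true tail, and those tails are filtered out of the candidate list before ranking, so that other correct answers do not penalize the target.

```python
def filtered_kvsall_eval(scores, triples, er_vocab, ks=(1, 3, 10)):
    """Compute filtered MRR and Hits@k (toy sketch, not dicee's implementation).

    scores: dict mapping (head, relation) -> {entity: score} over all entities.
    triples: list of (head, relation, tail) test triples.
    er_vocab: dict mapping (head, relation) -> set of all known true tails,
              which are removed from the ranking (except the target itself).
    """
    ranks = []
    for h, r, t in triples:
        candidates = dict(scores[(h, r)])
        # Filtered setting: drop every known true tail other than the target.
        for other in er_vocab.get((h, r), set()):
            if other != t:
                candidates.pop(other, None)
        # 1-based rank of the target among the remaining candidates.
        target_score = candidates[t]
        rank = 1 + sum(1 for s in candidates.values() if s > target_score)
        ranks.append(rank)
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    return {"MRR": mrr,
            **{f"H@{k}": sum(r <= k for r in ranks) / len(ranks) for k in ks}}

# Toy example: "b" outscores the target "c", but "b" is itself a true tail,
# so filtering removes it and "c" is ranked first.
scores = {("a", "likes"): {"b": 0.9, "c": 0.8, "d": 0.1}}
triples = [("a", "likes", "c")]
er_vocab = {("a", "likes"): {"b", "c"}}
print(filtered_kvsall_eval(scores, triples, er_vocab))  # MRR = 1.0 (filtered)
```

Without the er_vocab filter, the same call would rank "c" second (MRR = 0.5), which is exactly the distortion the filtered protocol avoids.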
dicee.eval_static_funcs.evaluate_literal_prediction(kge_model: dicee.knowledge_graph_embeddings.KGE, eval_file_path: str = None, store_lit_preds: bool = True, eval_literals: bool = True, loader_backend: str = 'pandas', return_attr_error_metrics: bool = False)[source]

Evaluates the trained literal prediction model on a test file.

Parameters:
  • kge_model (KGE) – Trained KGE model with a literal prediction model attached.

  • eval_file_path (str) – Path to the evaluation file.

  • store_lit_preds (bool) – If True, stores the predictions in a CSV file.

  • eval_literals (bool) – If True, evaluates the literal predictions and prints error metrics.

  • loader_backend (str) – Backend for loading the dataset (‘pandas’ or ‘rdflib’).

  • return_attr_error_metrics (bool) – If True, returns a DataFrame containing error metrics for each attribute.

Returns:

DataFrame containing error metrics for each attribute if return_attr_error_metrics is True.

Return type:

pd.DataFrame

Raises:
  • RuntimeError – If the KGE model does not have a trained literal model.

  • AssertionError – If the KGE model is not an instance of KGE or if the test set has no valid entities or attributes.
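The per-attribute DataFrame returned when return_attr_error_metrics is True can be approximated with a short pandas sketch. The column names (`attribute`, `true`, `pred`) and the choice of MAE/RMSE here are assumptions for illustration, not the library's exact output schema:

```python
import pandas as pd

def attribute_error_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Compute MAE and RMSE per attribute from predicted vs. true literals.

    df holds one row per (entity, attribute) test item with hypothetical
    'attribute', 'true', and 'pred' columns.
    """
    err = df.assign(abs_err=(df["pred"] - df["true"]).abs(),
                    sq_err=(df["pred"] - df["true"]) ** 2)
    out = err.groupby("attribute").agg(MAE=("abs_err", "mean"),
                                       RMSE=("sq_err", "mean"))
    out["RMSE"] = out["RMSE"] ** 0.5  # root of the mean squared error
    return out

# Toy evaluation table with two numeric attributes.
df = pd.DataFrame({
    "attribute": ["height", "height", "weight"],
    "true":      [1.80,     1.60,     70.0],
    "pred":      [1.70,     1.70,     75.0],
})
print(attribute_error_metrics(df))
```

Grouping by attribute before aggregating is what makes the metrics comparable across attributes with different scales (e.g. heights in metres vs. weights in kilograms).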