dicee.evaluation.literal_prediction
Literal prediction evaluation functions.
This module provides functions for evaluating literal/attribute prediction performance of knowledge graph embedding (KGE) models.
Functions

| evaluate_literal_prediction | Evaluate a trained literal prediction model on a test file. |
Module Contents
- dicee.evaluation.literal_prediction.evaluate_literal_prediction(kge_model, eval_file_path: str = None, store_lit_preds: bool = True, eval_literals: bool = True, loader_backend: str = 'pandas', return_attr_error_metrics: bool = False) → pandas.DataFrame | None
Evaluate a trained literal prediction model on a test file.
Evaluates the literal prediction capabilities of a KGE model by computing MAE (mean absolute error) and RMSE (root mean squared error) for each attribute.
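For orientation, the sketch below shows one way per-attribute MAE and RMSE can be computed from paired true and predicted literal values with pandas. The column names and toy data are purely illustrative assumptions, not the frame layout dicee uses internally.

>>> import numpy as np
>>> import pandas as pd
>>> # Hypothetical frame of ground-truth vs. predicted literal values.
>>> preds = pd.DataFrame({
...     "attribute": ["height", "height", "weight", "weight"],
...     "true_value": [1.80, 1.65, 70.0, 82.5],
...     "pred_value": [1.75, 1.70, 68.0, 80.0],
... })
>>> err = preds["true_value"] - preds["pred_value"]
>>> per_attr = preds.assign(abs_err=err.abs(), sq_err=err ** 2).groupby("attribute").agg(
...     MAE=("abs_err", "mean"),
...     RMSE=("sq_err", lambda s: float(np.sqrt(s.mean()))),
... )
>>> print(per_attr)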
- Parameters:
kge_model – Trained KGE model with literal prediction capability.
eval_file_path – Path to the evaluation file containing test literals.
store_lit_preds – If True, stores predictions to a CSV file.
eval_literals – If True, evaluates and prints error metrics.
loader_backend – Backend for loading the dataset ('pandas' or 'rdflib').
return_attr_error_metrics – If True, returns the metrics DataFrame.
- Returns:
DataFrame with per-attribute MAE and RMSE if return_attr_error_metrics is True, otherwise None.
- Raises:
RuntimeError – If the KGE model doesn't have a trained literal model (see the guarded call sketched below).
AssertionError – If the model is invalid or the test set has no valid data.
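Because a model without a trained literal-prediction component raises RuntimeError, a caller may want to guard the call. The sketch below uses only the documented signature and the same imports and placeholder paths as the example that follows.

>>> from dicee import KGE
>>> from dicee.evaluation import evaluate_literal_prediction
>>> model = KGE(path="pretrained_model")
>>> try:
...     evaluate_literal_prediction(model, eval_file_path="test_literals.csv")
... except RuntimeError:
...     print("The loaded KGE model has no trained literal-prediction model.")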
Example
>>> from dicee import KGE
>>> from dicee.evaluation import evaluate_literal_prediction
>>> model = KGE(path="pretrained_model")
>>> metrics = evaluate_literal_prediction(
...     model,
...     eval_file_path="test_literals.csv",
...     return_attr_error_metrics=True
... )
>>> print(metrics)
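The remaining keyword arguments combine in the same way. The sketch below is a hedged illustration that loads the evaluation file with the 'rdflib' backend, keeps the stored predictions, and sorts the returned metrics; the file path is a placeholder, and the presence of an 'MAE' column in the returned DataFrame is an assumption, not something this page specifies.

>>> metrics = evaluate_literal_prediction(
...     model,
...     eval_file_path="test_literals.nt",  # placeholder path for an RDF test file
...     loader_backend="rdflib",
...     store_lit_preds=True,               # also writes predictions to a CSV file
...     return_attr_error_metrics=True,
... )
>>> # Assuming the frame exposes an "MAE" column; adjust to the actual layout.
>>> print(metrics.sort_values("MAE"))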