dicee.static_funcs_training

Training-related static functions.

This module re-exports evaluation functions from the new dicee.evaluation module for backward compatibility, and also provides training utilities.

Deprecated: Evaluation functions have moved to dicee.evaluation. Use that module for new code. This module will continue to export training utilities.
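
For example, both imports below resolve to the same function as long as the re-export described above remains in place:

    # Preferred for new code:
    from dicee.evaluation import evaluate_lp

    # Still works via this module's backward-compatible re-export:
    from dicee.static_funcs_training import evaluate_lp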

Functions

evaluate_lp(...) → Dict[str, float]

Evaluate link prediction with batched processing.

evaluate_bpe_lp(...) → Dict[str, float]

Evaluate link prediction with BPE-encoded entities.

make_iterable_verbose(...) → Iterable

Wrap an iterable with a tqdm progress bar if verbose is True.

efficient_zero_grad(...) → None

Efficiently zero gradients by setting parameter.grad = None.

Module Contents

dicee.static_funcs_training.evaluate_lp(model, triple_idx, num_entities: int, er_vocab: Dict[Tuple, List], re_vocab: Dict[Tuple, List], info: str = 'Eval Starts', batch_size: int = 128, chunk_size: int = 1000) → Dict[str, float]

Evaluate link prediction with batched processing.

Memory-efficient evaluation using chunked entity scoring.

Parameters:
  • model – The KGE model to evaluate.

  • triple_idx – Integer-indexed triples as numpy array.

  • num_entities – Total number of entities.

  • er_vocab – Mapping (head_idx, rel_idx) -> list of tail indices.

  • re_vocab – Mapping (rel_idx, tail_idx) -> list of head indices.

  • info – Description to print.

  • batch_size – Batch size for triple processing.

  • chunk_size – Chunk size for entity scoring.

Returns:

Dictionary with H@1, H@3, H@10, and MRR metrics.
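
As an illustration, a sketch of how this evaluator might be driven, plus the chunked-scoring idea named above. Everything below is an assumption for illustration: the toy triples and the chunked_scores/score_fn helpers are hypothetical and not dicee's own API; only evaluate_lp's signature comes from this page.

    from collections import defaultdict

    import numpy as np
    import torch

    # Filtering vocabularies built from the known triples:
    triples = np.array([[0, 0, 1], [1, 0, 2], [2, 1, 0]])  # (head, rel, tail) rows
    er_vocab, re_vocab = defaultdict(list), defaultdict(list)
    for h, r, t in triples:
        er_vocab[(h, r)].append(t)  # known tails for (head, rel): tail filtering
        re_vocab[(r, t)].append(h)  # known heads for (rel, tail): head filtering

    # Chunked entity scoring: score each batch of queries against chunk_size
    # entities at a time instead of all entities at once, bounding peak memory.
    def chunked_scores(score_fn, queries, num_entities, chunk_size=1000):
        pieces = []
        for start in range(0, num_entities, chunk_size):
            ids = torch.arange(start, min(start + chunk_size, num_entities))
            pieces.append(score_fn(queries, ids))  # (n_queries, len(ids))
        return torch.cat(pieces, dim=1)            # (n_queries, num_entities)

    # With a trained dicee KGE model in scope:
    # results = evaluate_lp(model, triples, num_entities=3,
    #                       er_vocab=er_vocab, re_vocab=re_vocab)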

dicee.static_funcs_training.evaluate_bpe_lp(model, triple_idx: List[Tuple], all_bpe_shaped_entities, er_vocab: Dict[Tuple, List], re_vocab: Dict[Tuple, List], info: str = 'Eval Starts') → Dict[str, float]

Evaluate link prediction with BPE-encoded entities.

Parameters:
  • model – The KGE model to evaluate.

  • triple_idx – List of BPE-encoded triple tuples.

  • all_bpe_shaped_entities – All entities with BPE representations.

  • er_vocab – Mapping for tail filtering.

  • re_vocab – Mapping for head filtering.

  • info – Description to print.

Returns:

Dictionary with H@1, H@3, H@10, and MRR metrics.
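
Both evaluators report the same metric dictionary. For reference, this is the standard way such metrics are derived from 1-based filtered ranks (the helper below is illustrative, not dicee's code):

    import numpy as np

    def lp_metrics(ranks: np.ndarray) -> dict:
        # ranks[i] is the 1-based filtered rank of the i-th true entity.
        return {
            'H@1': float(np.mean(ranks <= 1)),
            'H@3': float(np.mean(ranks <= 3)),
            'H@10': float(np.mean(ranks <= 10)),
            'MRR': float(np.mean(1.0 / ranks)),
        }

    print(lp_metrics(np.array([1, 2, 5, 20])))
    # {'H@1': 0.25, 'H@3': 0.5, 'H@10': 0.75, 'MRR': 0.4375}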

dicee.static_funcs_training.make_iterable_verbose(iterable_object: Iterable, verbose: bool, desc: str = 'Default', position: int = None, leave: bool = True) → Iterable

Wrap an iterable with a tqdm progress bar if verbose is True.

Parameters:
  • iterable_object – The iterable to potentially wrap.

  • verbose – Whether to show progress bar.

  • desc – Description for the progress bar.

  • position – Position of the progress bar.

  • leave – Whether to leave the progress bar after completion.

Returns:

The original iterable or a tqdm-wrapped version.
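
A minimal sketch of the described behavior (illustrative; dicee's actual implementation may differ):

    from typing import Iterable, Optional

    from tqdm import tqdm

    def make_iterable_verbose(iterable_object: Iterable, verbose: bool,
                              desc: str = 'Default',
                              position: Optional[int] = None,
                              leave: bool = True) -> Iterable:
        # Wrap only when verbose; otherwise hand back the iterable unchanged.
        if verbose:
            return tqdm(iterable_object, desc=desc, position=position, leave=leave)
        return iterable_object

    for epoch in make_iterable_verbose(range(100), verbose=True, desc='Epochs'):
        pass  # training step goes here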

dicee.static_funcs_training.efficient_zero_grad(model) → None

Efficiently zero gradients by setting parameter.grad = None.

This is more efficient than optimizer.zero_grad(), which writes zeros into every gradient tensor: setting .grad to None skips that memset, and the next backward pass writes fresh gradients instead of accumulating into zeroed buffers.

See: https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html

Parameters:

model – PyTorch model to zero gradients for.
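
A minimal sketch of the recipe from the linked tuning guide, with an illustrative placement in a training step:

    import torch

    def efficient_zero_grad(model: torch.nn.Module) -> None:
        # Dropping gradient tensors (rather than filling them with zeros)
        # skips a memset per parameter; the next backward pass then writes
        # fresh gradients instead of accumulating into zeroed buffers.
        for param in model.parameters():
            param.grad = None

    # Illustrative training step:
    # efficient_zero_grad(model)
    # loss = criterion(model(x), y)
    # loss.backward()
    # optimizer.step()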