ontolearn.learners.nero

NERO - Neural Class Expression Learning with Reinforcement.

This module implements NERO, a neural-symbolic concept learner that combines neural networks with symbolic reasoning for OWL class expression learning.

Classes

NERO

NERO - Neural Class Expression Learning with Reinforcement.

Module Contents

class ontolearn.learners.nero.NERO(knowledge_base: KnowledgeBase, namespace=None, num_embedding_dim: int = 50, neural_architecture: str = 'DeepSet', learning_rate: float = 0.001, num_epochs: int = 100, batch_size: int = 32, num_workers: int = 4, quality_func=None, max_runtime: int | None = 10, verbose: int = 0)[source]

NERO - Neural Class Expression Learning with Reinforcement.

NERO combines neural networks with symbolic reasoning for learning OWL class expressions. It uses set-based neural architectures (DeepSet or SetTransformer) to predict quality scores for candidate class expressions.

Parameters:
  • knowledge_base – The knowledge base to learn from

  • namespace – Namespace of the knowledge base (default: None)

  • num_embedding_dim – Dimensionality of entity embeddings (default: 50)

  • neural_architecture – Neural architecture to use (‘DeepSet’ or ‘SetTransformer’, default: ‘DeepSet’)

  • learning_rate – Learning rate for training (default: 0.001)

  • num_epochs – Number of training epochs (default: 100)

  • batch_size – Batch size for training (default: 32)

  • num_workers – Number of workers for data loading (default: 4)

  • quality_func – Quality function for evaluating expressions (default: F1-score)

  • max_runtime – Maximum runtime in seconds (default: 10)

  • verbose – Verbosity level (default: 0)
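
A minimal construction sketch; the OWL file path is illustrative, and any omitted parameters keep the defaults listed above:

    from ontolearn.knowledge_base import KnowledgeBase
    from ontolearn.learners.nero import NERO

    # Load the ontology to learn from (path is illustrative).
    kb = KnowledgeBase(path="father.owl")

    # Use the SetTransformer backbone instead of the default DeepSet.
    model = NERO(
        knowledge_base=kb,
        neural_architecture="SetTransformer",
        num_embedding_dim=50,
        num_epochs=100,
        max_runtime=10,
    )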

name = 'NERO'
kb
ns = None
num_embedding_dim = 50
neural_architecture = 'DeepSet'
learning_rate = 0.001
num_epochs = 100
batch_size = 32
num_workers = 4
max_runtime = 10
verbose = 0
search_tree
refinement_op = None
device
model = None
instance_idx_mapping = None
idx_to_instance_mapping = None
target_class_expressions = None
expression
train(learning_problems: List[Tuple[List[str], List[str]]])[source]

Train the NERO model on learning problems.

Parameters:

learning_problems – List of (positive_examples, negative_examples) tuples
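
A sketch of the expected input format, continuing with the model instance constructed above (individual URIs are illustrative):

    # Each learning problem is a (positives, negatives) pair of URI lists.
    learning_problems = [
        (
            ["http://example.com/family#markus", "http://example.com/family#martin"],
            ["http://example.com/family#anna", "http://example.com/family#heinz"],
        ),
    ]
    model.train(learning_problems)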

search(pos: Set[str], neg: Set[str], top_k: int = 10, max_child_length: int = 10, max_queue_size: int = 10000) → Dict[source]

Perform reinforcement learning-based search for complex class expressions. Uses neural predictions to initialize and guide the search.
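
A usage sketch (URIs illustrative):

    result = model.search(
        pos={"http://example.com/family#markus"},
        neg={"http://example.com/family#anna"},
        top_k=10,
        max_child_length=10,
        max_queue_size=10000,
    )
    # result is a dictionary describing the best expressions found.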

search_with_smart_init(pos: Set[str], neg: Set[str], top_k: int = 10) → Dict[source]

Search with smart initialization from neural predictions (model.py compatible). This uses neural model predictions to guide the symbolic refinement search.
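
The call mirrors search() without the refinement limits; a minimal sketch:

    result = model.search_with_smart_init(
        pos={"http://example.com/family#markus"},
        neg={"http://example.com/family#anna"},
        top_k=5,
    )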

fit(learning_problem: PosNegLPStandard, max_runtime: int | None = None)[source]

Fit the model to a learning problem (Ontolearn-compatible interface). This includes training the neural model and performing the search.
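
A sketch of the Ontolearn-style workflow, together with the best_hypothesis() accessors documented below (namespace and individual names are illustrative):

    from owlapy.iri import IRI
    from owlapy.owl_individual import OWLNamedIndividual
    from ontolearn.learning_problem import PosNegLPStandard

    NS = "http://example.com/family#"  # illustrative namespace
    pos = {OWLNamedIndividual(IRI.create(NS, "markus"))}
    neg = {OWLNamedIndividual(IRI.create(NS, "anna"))}

    lp = PosNegLPStandard(pos=pos, neg=neg)
    model.fit(lp, max_runtime=10)

    print(model.best_hypothesis())          # best expression as a string
    print(model.best_hypothesis_quality())  # its F-measure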

best_hypothesis() → str | None[source]

Return the best hypothesis (Ontolearn-compatible interface).

Returns:

The best predicted class expression as a string

best_hypothesis_quality() → float[source]

Return the quality of the best hypothesis.

Returns:

The F-measure/quality of the best prediction

forward(xpos: torch.Tensor, xneg: torch.Tensor) → torch.Tensor[source]

Forward pass through the neural model.

Parameters:
  • xpos – Tensor of positive example indices

  • xneg – Tensor of negative example indices

Returns:

Predictions for target class expressions
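
A sketch of a direct forward pass for a single learning problem, assuming the model has been trained so that instance_idx_mapping is populated (URIs and batch shape illustrative):

    import torch

    pos_uris = ["http://example.com/family#markus"]  # illustrative
    neg_uris = ["http://example.com/family#anna"]    # illustrative

    # instance_idx_mapping (populated during training) maps URIs to indices.
    idx = model.instance_idx_mapping
    xpos = torch.tensor([[idx[u] for u in pos_uris]])
    xneg = torch.tensor([[idx[u] for u in neg_uris]])

    scores = model.forward(xpos, xneg)  # one score per target class expression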

positive_expression_embeddings(individuals: List[str]) → torch.Tensor[source]

Get embeddings for positive individuals.

Parameters:

individuals – List of individual URIs

Returns:

Tensor of embeddings

negative_expression_embeddings(individuals: List[str]) → torch.Tensor[source]

Get embeddings for negative individuals.

Parameters:

individuals – List of individual URIs

Returns:

Tensor of embeddings
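
A sketch covering both embedding helpers (URIs illustrative):

    pos_emb = model.positive_expression_embeddings(
        ["http://example.com/family#markus"]
    )
    neg_emb = model.negative_expression_embeddings(
        ["http://example.com/family#anna"]
    )
    # Both return torch.Tensor objects sized by num_embedding_dim.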

downward_refine(expression, max_length: int | None = None) → Set[source]

Top-down/downward refinement operator from original NERO.

This implements the refinement logic from model.py: ∀s ∈ StateSpace : ρ(s) ⊆ {s^i ∈ StateSpace | s^i ⊑ s}

Parameters:
  • expression – Expression to refine

  • max_length – Maximum length constraint for refinements

Returns:

Set of refined expressions
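
A sketch, assuming training has populated target_class_expressions as a list-like pool of candidate expressions:

    # Assumption: target_class_expressions is list-like after training.
    expr = model.target_class_expressions[0]

    children = model.downward_refine(expr, max_length=5)
    # Every returned expression is subsumed by (at most as general as) expr.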

upward_refine(expression) → Set[source]

Bottom-up/upward refinement operator from original NERO.

This implements the generalization logic: ∀s ∈ StateSpace : ρ(s) ⊆ {s^i ∈ StateSpace | s ⊑ s^i}

Parameters:

expression – Expression to generalize

Returns:

Set of generalized expressions
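
The dual operation, reusing expr from the downward refinement sketch above:

    parents = model.upward_refine(expr)
    # Every returned expression subsumes (is at least as general as) expr.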

search_with_init(top_prediction_queue: SearchTree, set_pos: Set[str], set_neg: Set[str]) → SearchTree[source]

Standard search with smart initialization (from original model.py).

This is the key search method that combines neural predictions with symbolic refinement.

Parameters:
  • top_prediction_queue – Priority queue initialized with neural predictions

  • set_pos – Set of positive examples

  • set_neg – Set of negative examples

Returns:

SearchTree with explored and refined expressions

fit_from_iterable(pos: List[str], neg: List[str], top_k: int = 10, use_search: str = 'SmartInit') → Dict[source]

Fit method compatible with original NERO’s model.py interface.

This implements the complete prediction pipeline from the original NERO:

  1. Neural prediction to get the top-k candidates

  2. Quality evaluation

  3. Optional symbolic search for refinement

Parameters:
  • pos – List of positive example URIs

  • neg – List of negative example URIs

  • top_k – Number of top neural predictions to consider

  • use_search – Search strategy (‘SmartInit’, ‘None’, or None)

Returns:

Dictionary with prediction results
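
A sketch mirroring the original interface (URIs illustrative):

    report = model.fit_from_iterable(
        pos=["http://example.com/family#markus"],
        neg=["http://example.com/family#anna"],
        top_k=10,
        use_search="SmartInit",
    )
    # report is a dictionary with the prediction results.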

predict(pos: Set[owlapy.owl_individual.OWLNamedIndividual], neg: Set[owlapy.owl_individual.OWLNamedIndividual], top_k: int = 10) → Dict[source]

Predict class expressions for given positive and negative examples. This uses the search mechanism internally.
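
A sketch reusing the OWLNamedIndividual sets from the fit() example above:

    predictions = model.predict(pos=pos, neg=neg, top_k=10)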

__str__()[source]
__repr__()[source]