ontolearn.learners
This module provides various concept learning algorithms for ontology engineering and OWL class expression learning.
Available Learners:
Refinement-Based Learners:
- CELOE: A refinement-operator-based learner (originating from DL-Learner). It performs heuristic-guided search over class expression refinements to find compact OWL class expressions that fit positive/negative examples. Suitable when symbolic search with ontological reasoning is required.
- OCEL: A lightweight, constrained variant of CELOE. It uses a smaller set of refinements or simplified search heuristics to trade expressivity for speed and lower computational cost.
SAT-Based Learners:
- ALCSAT: A SAT-based learner that encodes the ALC concept learning problem as a SAT problem. It uses incremental SAT solving to find concepts of increasing size that maximize accuracy on positive/negative examples. Particularly effective at finding compact, exact solutions.
- SPELL: A SAT-based learner built on the general SPELL fitting framework. It supports different search modes (exact, neg_approx, full_approx) and can find separating queries of bounded size using a SAT encoding.
Neural / Hybrid Learners:
- Drill: A neuro-symbolic learner that combines neural scoring or guidance with symbolic refinement/search. It typically uses learned models to rank candidates while keeping the final output in an interpretable DL form.
- CLIP: A hybrid extension of CELOE that uses neural concept length predictors over pretrained embeddings to guide candidate generation. Useful when distributional signals complement logical reasoning.
- NCES, NCES2: Neural class expression synthesis variants. They rely on neural encoders and learned scorers to propose and rank candidate class expressions; NCES2 is an improved, iterated version.
- NERO: A neural embedding model that learns permutation-invariant embeddings for sets of examples, tailored towards predicting F1 scores of pre-selected description logic concepts.
- ROCES: Robust class expression synthesis via iterative sampling; an extension of NCES2.
Evolutionary Learners:
- EvoLearner: An evolutionary search-based learner that evolves candidate descriptions (e.g., via genetic operators) using fitness functions derived from coverage and other objectives.
Query-Based Learners:
- SPARQLQueryLearner: Learns query patterns expressed as SPARQL queries that capture the target concept. Useful when working directly with SPARQL endpoints or large RDF datasets where query-based retrieval is preferable to reasoning-heavy symbolic search.
Tree / Rule-Based Learners:
- TDL: Tree-based Description Logic Learner. It adapts decision-tree-style induction to construct DL class expressions from attribute-like splits or tests, producing interpretable, rule-like descriptions.
Example
>>> from ontolearn.learners import CELOE
>>> from ontolearn.knowledge_base import KnowledgeBase
>>> from ontolearn.learning_problem import PosNegLPStandard
>>>
>>> kb = KnowledgeBase(path="example.owl")
>>> model = CELOE(knowledge_base=kb)
>>> lp = PosNegLPStandard(pos=pos_examples, neg=neg_examples)
>>> model.fit(lp)
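Once fitted, the best expressions can be retrieved and rendered in DL syntax. A minimal follow-up sketch (DLSyntaxObjectRenderer is owlapy's DL-syntax renderer):
>>> from owlapy.render import DLSyntaxObjectRenderer
>>> renderer = DLSyntaxObjectRenderer()
>>> for ce in model.best_hypotheses(n=3):
...     print(renderer.render(ce))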
Submodules
- ontolearn.learners.alcsat
- ontolearn.learners.base
- ontolearn.learners.celoe
- ontolearn.learners.clip
- ontolearn.learners.drill
- ontolearn.learners.evolearner
- ontolearn.learners.nces
- ontolearn.learners.nces2
- ontolearn.learners.nero
- ontolearn.learners.ocel
- ontolearn.learners.roces
- ontolearn.learners.sat_base
- ontolearn.learners.sparql_query_learner
- ontolearn.learners.spell
- ontolearn.learners.spell_kit
- ontolearn.learners.tree_learner
Classes
- BaseConceptLearner: Base class for Concept Learning approaches.
- RefinementBasedConceptLearner: Base class for refinement-based Concept Learning approaches.
- ALCSAT: SAT-based ALC concept learner.
- CELOE: Class Expression Learning for Ontology Engineering.
- CLIP: Concept Learner with Integrated Length Prediction.
- Drill: Neuro-Symbolic Class Expression Learning (https://www.ijcai.org/proceedings/2023/0403.pdf).
- EvoLearner: An evolutionary approach to learn concepts in ALCQ(D).
- NCES: Neural Class Expression Synthesis.
- NCES2: Neural Class Expression Synthesis in ALCHIQ(D).
- NERO: Neural Class Expression Learning with Reinforcement.
- OCEL: A limited version of CELOE.
- ROCES: Robust Class Expression Synthesis in Description Logics via Iterative Sampling.
- SPARQLQueryLearner: Learns SPARQL queries corresponding to a given description logic concept (potentially generated by a concept learner).
- SPELL: SAT-based concept learner using general SPELL fitting.
- TDL: Tree-based Description Logic Concept Learner.
Package Contents
- class ontolearn.learners.BaseConceptLearner(knowledge_base: AbstractKnowledgeBase, reasoner: owlapy.abstracts.AbstractOWLReasoner | None = None, quality_func: AbstractScorer | None = None, max_num_of_concepts_tested: int | None = None, max_runtime: int | None = None, terminate_on_goal: bool | None = None)[source]
@TODO: CD: Why should this class inherit from AbstractConceptNode? @TODO: CD: This class should be redefined; an OWL class expression learner does not need to be a search-based model.
Base class for Concept Learning approaches.
- Learning problem definition: Let K = (TBox, ABox) be a knowledge base, let ALCConcepts be the set of all ALC concepts, and let hypotheses ⊆ ALCConcepts be a set of ALC concepts. Further, let K_N be the set of all individuals, K_C ⊆ ALCConcepts the set of concepts defined in the TBox, and K_R the set of properties/relations.
- Let E⁺ and E⁻ be sets of positive and negative examples such that E⁺ ∪ E⁻ ⊆ K_N and E⁺ ∩ E⁻ = ∅.
- The goal is to learn a set of concepts hypotheses ⊆ ALCConcepts such that ∀ H ∈ hypotheses: (K ∧ H ⊨ E⁺) ∧ ¬(K ∧ H ⊨ E⁻).
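As a concrete instance of this definition, the sketch below builds E⁺ and E⁻ as sets of owlapy individuals and wraps them in a PosNegLPStandard learning problem (the namespace and individual names are hypothetical):
>>> from owlapy.iri import IRI
>>> from owlapy.owl_individual import OWLNamedIndividual
>>> from ontolearn.learning_problem import PosNegLPStandard
>>> NS = "http://example.com/family#"  # hypothetical namespace
>>> pos = {OWLNamedIndividual(IRI.create(NS + "markus"))}
>>> neg = {OWLNamedIndividual(IRI.create(NS + "anna"))}
>>> lp = PosNegLPStandard(pos=pos, neg=neg)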
- kb
The knowledge base that the concept learner is using.
- Type:
AbstractKnowledgeBase
- quality_func
Function to evaluate the quality of solution concepts.
- Type:
AbstractScorer
- max_num_of_concepts_tested
Limit to stop the algorithm after n concepts tested.
- Type:
int
- terminate_on_goal
Whether to stop the algorithm if a perfect solution is found.
- Type:
bool
- max_runtime
Limit to stop the algorithm after n seconds.
- Type:
int
- _number_of_tested_concepts
Stores the number of tested concepts.
- Type:
int
- reasoner
The reasoner that this model is using.
- Type:
AbstractOWLReasoner
- start_time
The time when fit() starts executing; used to compute the total time fit() takes to execute.
- Type:
float
- __slots__ = ('kb', 'reasoner', 'quality_func', 'max_num_of_concepts_tested', 'terminate_on_goal',...
- name: ClassVar[str]
- quality_func: AbstractScorer | None
- max_num_of_concepts_tested: int | None
- terminate_on_goal: bool | None
- max_runtime: int | None
- start_time: float | None
- reasoner = None
- terminate()[source]
This method is called when the search algorithm terminates.
If INFO log level is enabled, it prints out some statistics like runtime and concept tests to the logger.
- Returns:
The concept learner object itself.
- construct_learning_problem(type_: Type[_X], xargs: Tuple, xkwargs: Dict) _X[source]
Construct a learning problem of the given type based on args and kwargs. If a learning problem is contained in args or in the learning_problem kwarg, it is used; otherwise, a new learning problem of type type_ is created with args and kwargs as parameters.
- Parameters:
type_ – Type of the learning problem.
xargs – The positional arguments.
xkwargs – The keyword arguments.
- Returns:
The learning problem.
- abstractmethod fit(*args, **kwargs)[source]
Run the concept learning algorithm according to its configuration.
Once finished, the results can be queried with the best_hypotheses function.
- abstractmethod best_hypotheses(n=10) Iterable[owlapy.class_expression.OWLClassExpression][source]
Get the current best found hypotheses according to the quality.
- Parameters:
n – Maximum number of results.
- Returns:
Iterable of the best class expressions found.
- predict(individuals: List[owlapy.owl_individual.OWLNamedIndividual], hypotheses: owlapy.class_expression.OWLClassExpression | List[_N | owlapy.class_expression.OWLClassExpression] | None = None, axioms: List[owlapy.owl_axiom.OWLAxiom] | None = None, n: int = 10) pandas.DataFrame[source]
@TODO: CD: Predicting an individual can be done by a retrieval function not a concept learner @TODO: A concept learner learns an owl class expression. @TODO: This learned expression can be used as a binary predictor.
Creates a binary data frame showing for each individual whether it is entailed in the given hypotheses (class expressions). The individuals do not have to be in the ontology/knowledge base yet. In that case, axioms describing these individuals must be provided.
The state of the knowledge base/ontology is not changed, any provided axioms will be removed again.
- Parameters:
individuals – A list of individuals/instances.
hypotheses – (Optional) A list of search tree nodes or class expressions. If not provided, the current BaseConceptLearner.best_hypotheses() of the concept learner are used.
axioms – (Optional) A list of axioms that are not in the current knowledge base/ontology. If the individual list contains individuals that are not in the ontology yet, axioms describing these individuals must be provided. The argument can also be used to add arbitrary axioms to the ontology for the prediction.
n – Integer denoting the number of ALC concepts to extract from the search tree if hypotheses=None.
- Returns:
Pandas data frame with dimensions |individuals|*|hypotheses| indicating for each individual and each hypothesis whether the individual is entailed in the hypothesis.
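A minimal usage sketch for predict (assuming a fitted model and reusing NS from the learning problem sketch above; the individual is assumed to exist in the ontology):
>>> hypotheses = list(model.best_hypotheses(n=2))
>>> df = model.predict(individuals=[OWLNamedIndividual(IRI.create(NS + "markus"))],
...                    hypotheses=hypotheses)
>>> df  # |individuals| x |hypotheses| binary data frame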
- property number_of_tested_concepts
- save_best_hypothesis(n: int = 10, path: str = './Predictions', rdf_format: str = 'rdfxml') None[source]
Serialise the best hypotheses to a file. @TODO: CD: This function should be deprecated. @TODO: CD: Saving OWL class expressions to disk should be disentangled from a concept learner. @TODO: CD: With owlapy 1.3.3, we will use save_owl_class_expressions.
- Parameters:
n – Maximum number of hypotheses to save.
path – Filename base (extension will be added automatically).
rdf_format – Serialisation format. Currently supported: "rdfxml".
- load_hypotheses(path: str) Iterable[owlapy.class_expression.OWLClassExpression][source]
@TODO: CD: This function should be deprecated. @TODO: CD: Loading OWL class expressions from disk should be disentangled from a concept learner.
Loads hypotheses (class expressions) from a file saved by BaseConceptLearner.save_best_hypothesis().
- Parameters:
path – Path to the file containing hypotheses.
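A save/load round-trip sketch (save_best_hypothesis appends the file extension automatically; the .owl extension assumed below depends on the chosen serialisation format):
>>> model.save_best_hypothesis(n=3, path="./Predictions")
>>> loaded = list(model.load_hypotheses("./Predictions.owl"))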
- class ontolearn.learners.RefinementBasedConceptLearner(knowledge_base: AbstractKnowledgeBase, reasoner: owlapy.abstracts.AbstractOWLReasoner | None = None, refinement_operator: BaseRefinement | None = None, heuristic_func: AbstractHeuristic | None = None, quality_func: AbstractScorer | None = None, max_num_of_concepts_tested: int | None = None, max_runtime: int | None = None, terminate_on_goal: bool | None = None, iter_bound: int | None = None, max_child_length: int | None = None, root_concept: owlapy.class_expression.OWLClassExpression | None = None)[source]
Bases:
BaseConceptLearner
Base class for refinement-based Concept Learning approaches.
- kb
The knowledge base that the concept learner is using.
- Type:
AbstractKnowledgeBase
- quality_func
Function to evaluate the quality of solution concepts.
- Type:
AbstractScorer
- max_num_of_concepts_tested
Limit to stop the algorithm after n concepts tested.
- Type:
int
- terminate_on_goal
Whether to stop the algorithm if a perfect solution is found.
- Type:
bool
- max_runtime
Limit to stop the algorithm after n seconds.
- Type:
int
- _number_of_tested_concepts
Stores the number of tested concepts.
- Type:
int
- reasoner
The reasoner that this model is using.
- Type:
AbstractOWLReasoner
- start_time
The time when fit() starts executing; used to compute the total time fit() takes to execute.
- Type:
float
- iter_bound
Limit to stop the algorithm after n refinement steps are done.
- Type:
int
- heuristic_func
Function to guide the search heuristic.
- Type:
AbstractHeuristic
- operator
Operator used to generate refinements.
- Type:
BaseRefinement
- start_class
The starting class expression for the refinement operation.
- Type:
OWLClassExpression
- max_child_length
Limit the length of concepts generated by the refinement operator.
- Type:
int
- __slots__ = ('operator', 'heuristic_func', 'max_child_length', 'start_class', 'iter_bound')
- operator: BaseRefinement | None
- heuristic_func: AbstractHeuristic | None
- max_child_length: int | None
- start_class: owlapy.class_expression.OWLClassExpression | None
- iter_bound: int | None
- terminate()[source]
This method is called when the search algorithm terminates.
If INFO log level is enabled, it prints out some statistics like runtime and concept tests to the logger.
- Returns:
The concept learner object itself.
- abstractmethod next_node_to_expand(*args, **kwargs)[source]
Return from the search tree the most promising search tree node to use for the next refinement step.
- Returns:
Next search tree node to refine.
- Return type:
_N
- abstractmethod downward_refinement(*args, **kwargs)[source]
Execute one refinement step of a refinement based learning algorithm.
- Parameters:
node (_N) – the search tree node on which to refine.
- Returns:
Refinement results as new search tree nodes (they still need to be added to the tree).
- Return type:
Iterable[_N]
- abstractmethod show_search_tree(heading_step: str, top_n: int = 10) None[source]
A debugging function to print out the current search tree and the current n best found hypotheses to standard output.
- Parameters:
heading_step – A message to display at the beginning of the output.
top_n – The number of current best hypotheses to print out.
- class ontolearn.learners.ALCSAT(knowledge_base: AbstractKnowledgeBase, reasoner: owlapy.abstracts.AbstractOWLReasoner | None = None, max_runtime: int | None = 60, max_concept_size: int = 10, start_concept_size: int = 1, operators: Set | None = None, tree_templates: bool = True, type_encoding: bool = True)[source]
Bases:
ontolearn.learners.sat_base.SATBaseLearner
ALCSAT: SAT-based ALC concept learner.
This learner uses SAT solvers to find ALC concept expressions that fit positive and negative examples. It encodes the concept learning problem as a SAT problem and uses a Glucose SAT solver to find solutions.
The algorithm incrementally searches for concepts of increasing size (tree depth k) that maximize the accuracy on the given examples.
- kb
The knowledge base that the concept learner is using.
- Type:
AbstractKnowledgeBase
- max_concept_size
Maximum size (depth) of concepts to search for.
- Type:
int
- start_concept_size
Starting size for incremental search.
- Type:
int
- operators
Set of ALC operators to use (NEG, AND, OR, EX, ALL).
- Type:
Set
- tree_templates
Whether to use tree templates for symmetry breaking.
- Type:
bool
- type_encoding
Whether to use type encoding optimization.
- Type:
bool
- timeout
Timeout in seconds for the SAT solver (-1 for no timeout).
- Type:
float
- _best_hypothesis
Best found hypothesis.
- Type:
OWLClassExpression
- _best_hypothesis_accuracy
Accuracy of the best hypothesis.
- Type:
float
- _ind_to_owl
Mapping from internal individual indices to OWL individuals.
- Type:
dict
- _owl_to_ind
Mapping from OWL individuals to internal indices.
- Type:
dict
- __slots__ = ('max_concept_size', 'start_concept_size', 'operators', 'tree_templates', 'type_encoding')
- name = 'alcsat'
- max_concept_size = 10
- start_concept_size = 1
- operators = None
- tree_templates = True
- type_encoding = True
- fit(lp: PosNegLPStandard)[source]
Find ALC concept expressions that explain positive and negative examples.
- Parameters:
lp – Learning problem with positive and negative examples.
- Returns:
self
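A minimal ALCSAT sketch (assuming kb and lp as above, and assuming the SATBaseLearner base class exposes the best_hypotheses interface documented for BaseConceptLearner):
>>> from ontolearn.learners import ALCSAT
>>> model = ALCSAT(knowledge_base=kb, max_concept_size=8, max_runtime=60)
>>> model.fit(lp)
>>> print(model.best_hypotheses(n=1))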
- class ontolearn.learners.CELOE(knowledge_base: AbstractKnowledgeBase = None, reasoner: owlapy.abstracts.AbstractOWLReasoner | None = None, refinement_operator: BaseRefinement[OENode] | None = None, quality_func: AbstractScorer | None = None, heuristic_func: AbstractHeuristic | None = None, terminate_on_goal: bool | None = None, iter_bound: int | None = None, max_num_of_concepts_tested: int | None = None, max_runtime: int | None = None, max_results: int = 10, best_only: bool = False, calculate_min_max: bool = True)[source]
Bases:
ontolearn.learners.base.RefinementBasedConceptLearner
Class Expression Learning for Ontology Engineering.
- best_descriptions
Best hypotheses ordered.
- Type:
EvaluatedDescriptionSet[OENode, QualityOrderedNode]
- best_only
If False, pick only nodes with quality < 1.0; otherwise, pick nodes without quality restrictions.
- Type:
bool
- calculate_min_max
Whether to calculate the minimum and maximum horizontal expansion (for statistical purposes only).
- Type:
bool
- heuristic_func
Function to guide the search heuristic.
- Type:
AbstractHeuristic
- iter_bound
Limit to stop the algorithm after n refinement steps are done.
- Type:
int
- kb
The knowledge base that the concept learner is using.
- Type:
AbstractKnowledgeBase
- max_child_length
Limit the length of concepts generated by the refinement operator.
- Type:
int
- max_he
Maximal value of horizontal expansion.
- Type:
int
- max_num_of_concepts_tested
Limit to stop the algorithm after n concepts tested.
- Type:
int
- max_runtime
Limit to stop the algorithm after n seconds.
- Type:
int
- min_he
Minimal value of horizontal expansion.
- Type:
int
- name
Name of the model = ‘celoe_python’.
- Type:
str
- _number_of_tested_concepts
Stores the number of tested concepts.
- Type:
int
- operator
Operator used to generate refinements.
- Type:
BaseRefinement
- quality_func
Function to evaluate the quality of solution concepts.
- Type:
AbstractScorer
- reasoner
The reasoner that this model is using.
- Type:
AbstractOWLReasoner
- search_tree
Dict to store the TreeNode for a class expression.
- start_class
The starting class expression for the refinement operation.
- Type:
OWLClassExpression
- start_time
The time when fit() starts executing; used to compute the total time fit() takes to execute.
- Type:
float
- terminate_on_goal
Whether to stop the algorithm if a perfect solution is found.
- Type:
bool
- __slots__ = ('best_descriptions', 'max_he', 'min_he', 'best_only', 'calculate_min_max', 'heuristic_queue',...
- name = 'celoe_python'
- heuristic_queue
- best_descriptions
- best_only = False
- calculate_min_max = True
- max_he = 0
- min_he = 1
- next_node_to_expand(step: int) OENode[source]
Return from the search tree the most promising search tree node to use for the next refinement step.
- Returns:
Next search tree node to refine.
- Return type:
_N
- best_hypotheses(n: int = 1, return_node: bool = False) owlapy.class_expression.OWLClassExpression | Iterable[owlapy.class_expression.OWLClassExpression] | OENode | Iterable[OENode][source]
Get the current best found hypotheses according to the quality.
- Parameters:
n – Maximum number of results.
- Returns:
The best class expression(s) found (or search tree nodes if return_node=True).
- make_node(c: owlapy.class_expression.OWLClassExpression, parent_node: OENode | None = None, is_root: bool = False) OENode[source]
- updating_node(node: OENode)[source]
Removes the node from the heuristic sorted set and inserts it again.
- Parameters:
node – The node to update.
- Yields:
The node itself.
- downward_refinement(node: OENode) Iterable[OENode][source]
Execute one refinement step of a refinement based learning algorithm.
- Parameters:
node (_N) – the search tree node on which to refine.
- Returns:
Refinement results as new search tree nodes (they still need to be added to the tree).
- Return type:
Iterable[_N]
- encoded_learning_problem() EncodedPosNegLPStandardKind | None[source]
Fetch the most recently used learning problem from the fit method.
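A short CELOE sketch configuring an explicit quality function and inspecting the search tree afterwards (F1 is assumed to be available from ontolearn.metrics):
>>> from ontolearn.metrics import F1
>>> model = CELOE(knowledge_base=kb, quality_func=F1(), max_runtime=30)
>>> model.fit(lp)
>>> model.show_search_tree(heading_step="final", top_n=5)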
- class ontolearn.learners.CLIP(knowledge_base: AbstractKnowledgeBase, reasoner: owlapy.abstracts.AbstractOWLReasoner | None = None, refinement_operator: BaseRefinement[OENode] | None = ExpressRefinement, quality_func: AbstractScorer | None = None, heuristic_func: AbstractHeuristic | None = None, terminate_on_goal: bool | None = None, iter_bound: int | None = None, max_num_of_concepts_tested: int | None = None, max_runtime: int | None = None, max_results: int = 10, best_only: bool = False, calculate_min_max: bool = True, path_of_embeddings='', predictor_name=None, pretrained_predictor_name=['SetTransformer', 'LSTM', 'GRU', 'CNN'], load_pretrained=False, num_workers=4, num_examples=1000, output_size=15)[source]
Bases:
ontolearn.learners.CELOE
Concept Learner with Integrated Length Prediction. This algorithm extends the CELOE algorithm by using concept length predictors and a different refinement operator, i.e., ExpressRefinement.
- best_descriptions
Best hypotheses ordered.
- Type:
EvaluatedDescriptionSet[OENode, QualityOrderedNode]
- best_only
If False, pick only nodes with quality < 1.0; otherwise, pick nodes without quality restrictions.
- Type:
bool
- calculate_min_max
Whether to calculate the minimum and maximum horizontal expansion (for statistical purposes only).
- Type:
bool
- heuristic_func
Function to guide the search heuristic.
- Type:
AbstractHeuristic
- iter_bound
Limit to stop the algorithm after n refinement steps are done.
- Type:
int
- kb
The knowledge base that the concept learner is using.
- Type:
AbstractKnowledgeBase
- max_child_length
Limit the length of concepts generated by the refinement operator.
- Type:
int
- max_he
Maximal value of horizontal expansion.
- Type:
int
- max_num_of_concepts_tested
Limit to stop the algorithm after n concepts tested.
- Type:
int
- max_runtime
Limit to stop the algorithm after n seconds.
- Type:
int
- min_he
Minimal value of horizontal expansion.
- Type:
int
- name
Name of the model = 'CLIP'.
- Type:
str
- _number_of_tested_concepts
Stores the number of tested concepts.
- Type:
int
- operator
Operator used to generate refinements.
- Type:
BaseRefinement
- quality_func
Function to evaluate the quality of solution concepts.
- Type:
AbstractScorer
- reasoner
The reasoner that this model is using.
- Type:
AbstractOWLReasoner
- search_tree
Dict to store the TreeNode for a class expression.
- start_class
The starting class expression for the refinement operation.
- Type:
OWLClassExpression
- start_time
The time when fit() starts executing; used to compute the total time fit() takes to execute.
- Type:
float
- terminate_on_goal
Whether to stop the algorithm if a perfect solution is found.
- Type:
bool
- __slots__ = ('best_descriptions', 'max_he', 'min_he', 'best_only', 'calculate_min_max', 'heuristic_queue',...
- name = 'CLIP'
- predictor_name = None
- pretrained_predictor_name = ['SetTransformer', 'LSTM', 'GRU', 'CNN']
- knowledge_base
- load_pretrained = False
- num_workers = 4
- output_size = 15
- num_examples = 1000
- path_of_embeddings = ''
- device
- length_predictor
- class ontolearn.learners.Drill(knowledge_base: AbstractKnowledgeBase, path_embeddings: str = None, refinement_operator: LengthBasedRefinement = None, use_inverse: bool = True, use_data_properties: bool = True, use_card_restrictions: bool = True, use_nominals: bool = True, min_cardinality_restriction: int = 2, max_cardinality_restriction: int = 5, positive_type_bias: int = 1, quality_func: Callable = None, reward_func: object = None, batch_size=None, num_workers: int = 1, iter_bound=None, max_num_of_concepts_tested=None, verbose: int = 0, terminate_on_goal=None, max_len_replay_memory=256, epsilon_decay: float = 0.01, epsilon_min: float = 0.0, num_epochs_per_replay: int = 2, num_episodes_per_replay: int = 2, learning_rate: float = 0.001, max_runtime=None, num_of_sequential_actions=3, stop_at_goal=True, num_episode: int = 10)[source]
Bases:
ontolearn.learners.base.RefinementBasedConceptLearner
Neuro-Symbolic Class Expression Learning (https://www.ijcai.org/proceedings/2023/0403.pdf).
- name = 'DRILL'
- verbose = 0
- learning_problem = None
- device
- num_workers = 1
- learning_rate = 0.001
- num_episode = 10
- num_of_sequential_actions = 3
- num_epochs_per_replay = 2
- max_len_replay_memory = 256
- epsilon_decay = 0.01
- epsilon_min = 0.0
- batch_size = None
- num_episodes_per_replay = 2
- seen_examples
- pos: FrozenSet[owlapy.owl_individual.OWLNamedIndividual] = None
- neg: FrozenSet[owlapy.owl_individual.OWLNamedIndividual] = None
- positive_type_bias = 1
- start_time = None
- goal_found = False
- search_tree
- stop_at_goal = True
- epsilon = 1
- initialize_training_class_expression_learning_problem(pos: FrozenSet[owlapy.owl_individual.OWLNamedIndividual], neg: FrozenSet[owlapy.owl_individual.OWLNamedIndividual]) RL_State[source]
Initialize a class expression learning problem for training and return the corresponding root RL state.
- rl_learning_loop(pos_uri: FrozenSet[owlapy.owl_individual.OWLNamedIndividual], neg_uri: FrozenSet[owlapy.owl_individual.OWLNamedIndividual]) List[float][source]
Reinforcement Learning training loop.
(1) Initialize the RL environment for a given learning problem (E⁺ = pos_uri, E⁻ = neg_uri).
(2) Training:
2.1 Obtain a trajectory: a sequence of RL states/DL concepts, e.g., ⊤, Person, (Female ⊓ ∀ hasSibling.Female). Rewards at each transition are also computed.
- train(dataset: Iterable[Tuple[str, Set, Set]] | None = None, num_of_target_concepts: int = 1, num_learning_problems: int = 1)[source]
Train the RL agent: (1) generate learning problems; (2) for each learning problem, perform the RL loop.
- fit(learning_problem: PosNegLPStandard, max_runtime=None)[source]
Run the concept learning algorithm according to its configuration.
Once finished, the results can be queried with the best_hypotheses function.
- init_embeddings_of_examples(pos_uri: FrozenSet[owlapy.owl_individual.OWLNamedIndividual], neg_uri: FrozenSet[owlapy.owl_individual.OWLNamedIndividual])[source]
- create_rl_state(c: owlapy.class_expression.OWLClassExpression, parent_node: RL_State | None = None, is_root: bool = False) RL_State[source]
Create an RL_State instance.
- compute_quality_of_class_expression(state: RL_State) None[source]
Compute the quality of an OWL class expression: (1) perform concept retrieval; (2) compute the quality w.r.t. (1) and the positive and negative examples; (3) increment the number-of-tested-concepts attribute.
- sequence_of_actions(root_rl_state: RL_State) Tuple[List[Tuple[RL_State, RL_State]], List[SupportsFloat]][source]
Perform a sequence of actions in an RL environment whose root state is ⊤.
- form_experiences(state_pairs: List, rewards: List) None[source]
Form experiences from a sequence of concepts and corresponding rewards.
- Parameters:
state_pairs – A list of tuples containing two consecutive states.
rewards – A list of rewards (the discount factor gamma is 1).
- Returns:
X – A list of embeddings of the current concept, next concept, positive examples, and negative examples; y – the argmax Q value.
- update_search(concepts, predicted_Q_values=None)[source]
Update the search tree with the given concepts and their predicted Q values.
- Parameters:
concepts – Concepts to add to the search tree.
predicted_Q_values – Predicted Q values for the given concepts.
- assign_embeddings(rl_state: RL_State) None[source]
Assign embeddings to an RL state. An RL state is represented by the vector representations of all individuals belonging to the respective OWLClassExpression.
- exploration_exploitation_tradeoff(current_state: AbstractNode, next_states: List[AbstractNode]) AbstractNode[source]
Exploration vs. exploitation trade-off when choosing the next state: either (1) explore or (2) exploit.
- exploitation(current_state: AbstractNode, next_states: List[AbstractNode]) RL_State[source]
Find the next node that is assigned the highest predicted Q value.
(1) Predict Q values: predictions.shape => torch.Size([n, 1]) where n = len(next_states).
(2) Find the index of the maximum value in the predictions.
(3) Use the index to obtain the next state.
(4) Return the next state.
- predict_values(current_state: RL_State, next_states: List[RL_State]) torch.Tensor[source]
Predict promise of next states given current state.
- Returns:
Predicted Q values.
- generate_learning_problems(num_of_target_concepts, num_learning_problems) List[Tuple[str, Set, Set]][source]
Generate learning problems if none are provided.
Time complexity: O(n²), where n is the number of named concepts.
- learn_from_illustration(sequence_of_goal_path: List[RL_State])[source]
- Parameters:
sequence_of_goal_path – ⊤,Parent,Parent ⊓ Daughter.
- best_hypotheses(n=1, return_node: bool = False) owlapy.class_expression.OWLClassExpression | List[owlapy.class_expression.OWLClassExpression][source]
Get the current best found hypotheses according to the quality.
- Parameters:
n – Maximum number of results.
- Returns:
The best class expression(s) found.
- next_node_to_expand() RL_State[source]
Return a node that maximizes the heuristic function at time t.
- downward_refinement(*args, **kwargs)[source]
Execute one refinement step of a refinement based learning algorithm.
- Parameters:
node (_N) – the search tree node on which to refine.
- Returns:
Refinement results as new search tree nodes (they still need to be added to the tree).
- Return type:
Iterable[_N]
- show_search_tree(heading_step: str, top_n: int = 10) None[source]
A debugging function to print out the current search tree and the current n best found hypotheses to standard output.
- Parameters:
heading_step – A message to display at the beginning of the output.
top_n – The number of current best hypotheses to print out.
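A minimal Drill sketch (the embeddings path is hypothetical; Drill expects pretrained knowledge-graph embeddings of the individuals):
>>> from ontolearn.learners import Drill
>>> model = Drill(knowledge_base=kb, path_embeddings="embeddings/family.csv")  # hypothetical path
>>> model.fit(lp, max_runtime=60)
>>> best = model.best_hypotheses(n=1)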
- class ontolearn.learners.EvoLearner(knowledge_base: AbstractKnowledgeBase, reasoner: owlapy.abstracts.AbstractOWLReasoner | None = None, quality_func: AbstractScorer | None = None, fitness_func: AbstractFitness | None = None, init_method: AbstractEAInitialization | None = None, algorithm: AbstractEvolutionaryAlgorithm | None = None, mut_uniform_gen: AbstractEAInitialization | None = None, value_splitter: AbstractValueSplitter | None = None, terminate_on_goal: bool | None = None, max_runtime: int | None = None, use_data_properties: bool = True, use_card_restrictions: bool = True, use_inverse: bool = False, tournament_size: int = 7, card_limit: int = 10, population_size: int = 800, num_generations: int = 200, height_limit: int = 17)[source]
Bases:
ontolearn.learners.base.BaseConceptLearner
An evolutionary approach to learn concepts in ALCQ(D).
- algorithm
The evolutionary algorithm.
- card_limit
The upper cardinality limit if using cardinality restriction on object properties.
- Type:
int
- fitness_func
Fitness function.
- Type:
AbstractFitness
- height_limit
The maximum value allowed for the height of trees produced by the crossover and mutation operations.
- Type:
int
- init_method
The evolutionary algorithm initialization method.
- Type:
AbstractEAInitialization
- kb
The knowledge base that the concept learner is using.
- Type:
AbstractKnowledgeBase
- max_num_of_concepts_tested
Limit to stop the algorithm after n concepts tested.
- Type:
int
- max_runtime
Limit to stop the algorithm after n seconds.
- Type:
int
- mut_uniform_gen
The initialization method to create the tree for mutation operation.
- Type:
AbstractEAInitialization
- name
Name of the model = ‘evolearner’.
- Type:
str
- num_generations
Number of generations for the evolutionary algorithm.
- Type:
int
- _number_of_tested_concepts
Stores the number of tested concepts.
- Type:
int
- population_size
Population size for the evolutionary algorithm.
- Type:
int
- pset
Contains the primitives that can be used to solve a Strongly Typed GP problem.
- Type:
gp.PrimitiveSetTyped
- quality_func
Function to evaluate the quality of solution concepts.
- reasoner
The reasoner that this model is using.
- Type:
AbstractOWLReasoner
- start_time
The time when fit() starts executing; used to compute the total time fit() takes to execute.
- Type:
float
- terminate_on_goal
Whether to stop the algorithm if a perfect solution is found.
- Type:
bool
- toolbox
A toolbox for evolution that contains the evolutionary operators.
- Type:
base.Toolbox
- tournament_size
The number of evolutionary individuals participating in each tournament.
- Type:
int
- use_card_restrictions
Use cardinality restriction for object properties?
- Type:
bool
- use_data_properties
Consider data properties?
- Type:
bool
- use_inverse
Consider inverse concepts?
- Type:
bool
- value_splitter
Used to calculate the splits for data properties values.
- Type:
AbstractValueSplitter
- __slots__ = ('fitness_func', 'init_method', 'algorithm', 'value_splitter', 'tournament_size',...
- name = 'evolearner'
- fitness_func: AbstractFitness
- init_method: AbstractEAInitialization
- algorithm: AbstractEvolutionaryAlgorithm
- mut_uniform_gen: AbstractEAInitialization
- value_splitter: AbstractValueSplitter
- use_data_properties: bool
- use_card_restrictions: bool
- use_inverse: bool
- tournament_size: int
- card_limit: int
- population_size: int
- num_generations: int
- height_limit: int
- generator: ConceptGenerator
- pset: deap.gp.PrimitiveSetTyped
- toolbox: deap.base.Toolbox
- reasoner = None
- total_fits = 0
- register_op(alias: str, function: Callable, *args, **kargs)[source]
Register a function in the toolbox under the name alias. You may provide default arguments that will be passed automatically when calling the registered function. Fixed arguments can then be overridden at function call time.
- Parameters:
alias – The name the operator will take in the toolbox. If the alias already exist it will overwrite the operator already present.
function – The function to which the alias refers.
args – One or more argument (and keyword argument) to pass automatically to the registered function when called, optional.
- fit(*args, **kwargs) EvoLearner[source]
Find hypotheses that explain pos and neg.
- best_hypotheses(n: int = 1, key: str = 'fitness', return_node: bool = False) owlapy.class_expression.OWLClassExpression | Iterable[owlapy.class_expression.OWLClassExpression][source]
Get the current best found hypotheses according to the quality.
- Parameters:
n – Maximum number of results.
- Returns:
The best class expression(s) found (or tree nodes if return_node=True).
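A minimal EvoLearner sketch (the population and generation settings are illustrative, not recommendations):
>>> from ontolearn.learners import EvoLearner
>>> model = EvoLearner(knowledge_base=kb, population_size=400, num_generations=100)
>>> model.fit(lp)
>>> best = model.best_hypotheses(n=1)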
- class ontolearn.learners.NCES(knowledge_base, nces2_or_roces=False, quality_func: AbstractScorer | None = None, num_predictions=5, learner_names=['SetTransformer', 'LSTM', 'GRU'], path_of_embeddings=None, path_temp_embeddings=None, path_of_trained_models=None, auto_train=True, proj_dim=128, rnn_n_layers=2, drop_prob=0.1, num_heads=4, num_seeds=1, m=32, ln=False, dicee_model='DeCaL', dicee_epochs=5, dicee_lr=0.01, dicee_emb_dim=128, learning_rate=0.0001, tmax=20, eta_min=1e-05, clip_value=5.0, batch_size=256, num_workers=4, max_length=48, load_pretrained=True, sorted_examples=False, verbose: int = 0, enforce_validity: bool | None = None)[source]
Bases:
ontolearn.base_nces.BaseNCES
Neural Class Expression Synthesis.
- name = 'NCES'
- knowledge_base
- learner_names = ['SetTransformer', 'LSTM', 'GRU']
- path_of_embeddings = None
- path_temp_embeddings = None
- path_of_trained_models = None
- dicee_model = 'DeCaL'
- dicee_emb_dim = 128
- dicee_epochs = 5
- dicee_lr = 0.01
- rnn_n_layers = 2
- sorted_examples = False
- has_renamed_inds = False
- enforce_validity = None
- fit_one(pos: List[owlapy.owl_individual.OWLNamedIndividual] | List[str], neg: List[owlapy.owl_individual.OWLNamedIndividual] | List[str])[source]
- fit(learning_problem: PosNegLPStandard, **kwargs)[source]
- best_hypotheses(n=1, return_node: bool = False) owlapy.class_expression.OWLClassExpression | Iterable[owlapy.class_expression.OWLClassExpression] | AbstractNode | Iterable[AbstractNode] | None[source]
- fit_from_iterable(dataset: List[Tuple[str, Set[owlapy.owl_individual.OWLNamedIndividual], Set[owlapy.owl_individual.OWLNamedIndividual]]] | List[Tuple[str, Set[str], Set[str]]], shuffle_examples=False, verbose=False, **kwargs) List[source]
The dataset is a list of tuples whose first items are strings corresponding to target concepts.
This function returns predictions as OWL class expressions, not nodes as in fit.
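A sketch of the dataset shape expected by fit_from_iterable (assuming an NCES instance named model; target names and example identifiers are hypothetical, and per the signature examples may also be given as sets of strings):
>>> dataset = [("Father", {"markus", "stefan"}, {"anna", "heinz"})]  # (target, E+, E-)
>>> predictions = model.fit_from_iterable(dataset, shuffle_examples=False)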
- train(data: Iterable[List[Tuple]] = None, epochs=50, batch_size=64, max_num_lps=1000, refinement_expressivity=0.2, refs_sample_size=50, learning_rate=0.0001, tmax=20, eta_min=1e-05, clip_value=5.0, num_workers=8, save_model=True, storage_path=None, optimizer='Adam', record_runtime=True, example_sizes=None, shuffle_examples=False)[source]
- class ontolearn.learners.NCES2(knowledge_base, nces2_or_roces=True, quality_func: AbstractScorer | None = None, num_predictions=5, path_of_trained_models=None, auto_train=True, proj_dim=128, drop_prob=0.1, num_heads=4, num_seeds=1, m=[32, 64, 128], ln=False, embedding_dim=128, sampling_strategy='nces2', input_dropout=0.0, feature_map_dropout=0.1, kernel_size=4, num_of_output_channels=32, learning_rate=0.0001, tmax=20, eta_min=1e-05, clip_value=5.0, batch_size=256, num_workers=4, max_length=48, load_pretrained=True, verbose: int = 0, data=[], enforce_validity: bool | None = None)[source]
Bases:
ontolearn.base_nces.BaseNCES
Neural Class Expression Synthesis in ALCHIQ(D).
- name = 'NCES2'
- knowledge_base
- knowledge_base_path
- triples_data
- num_entities
- num_relations
- path_of_trained_models = None
- embedding_dim = 128
- sampling_strategy = 'nces2'
- input_dropout = 0.0
- feature_map_dropout = 0.1
- kernel_size = 4
- num_of_output_channels = 32
- num_workers = 4
- enforce_validity = None
- fit_one(pos: List[owlapy.owl_individual.OWLNamedIndividual] | List[str], neg: List[owlapy.owl_individual.OWLNamedIndividual] | List[str])[source]
- fit(learning_problem: PosNegLPStandard, **kwargs)[source]
- best_hypotheses(n=1, return_node: bool = False) owlapy.class_expression.OWLClassExpression | Iterable[owlapy.class_expression.OWLClassExpression] | AbstractNode | Iterable[AbstractNode] | None[source]
- fit_from_iterable(data: List[Tuple[str, Set[owlapy.owl_individual.OWLNamedIndividual], Set[owlapy.owl_individual.OWLNamedIndividual]]] | List[Tuple[str, Set[str], Set[str]]], shuffle_examples=False, verbose=False, **kwargs) List[source]
The data is a list of tuples whose first items are strings corresponding to target concepts.
This function returns predictions as OWL class expressions, not nodes as in fit.
- train(data: Iterable[List[Tuple]] = None, epochs=50, batch_size=64, max_num_lps=1000, refinement_expressivity=0.2, refs_sample_size=50, learning_rate=0.0001, tmax=20, eta_min=1e-05, clip_value=5.0, num_workers=8, save_model=True, storage_path=None, optimizer='Adam', record_runtime=True, shuffle_examples=False)[source]
- class ontolearn.learners.NERO(knowledge_base: KnowledgeBase, namespace=None, num_embedding_dim: int = 50, neural_architecture: str = 'DeepSet', learning_rate: float = 0.001, num_epochs: int = 100, batch_size: int = 32, num_workers: int = 4, quality_func=None, max_runtime: int | None = 10, verbose: int = 0)[source]
NERO - Neural Class Expression Learning with Reinforcement.
NERO combines neural networks with symbolic reasoning for learning OWL class expressions. It uses set-based neural architectures (DeepSet or SetTransformer) to predict quality scores for candidate class expressions.
- Parameters:
knowledge_base – The knowledge base to learn from
num_embedding_dim – Dimensionality of entity embeddings (default: 50)
neural_architecture – Neural architecture to use (‘DeepSet’ or ‘SetTransformer’, default: ‘DeepSet’)
learning_rate – Learning rate for training (default: 0.001)
num_epochs – Number of training epochs (default: 100)
batch_size – Batch size for training (default: 32)
num_workers – Number of workers for data loading (default: 4)
quality_func – Quality function for evaluating expressions (default: F1-score)
max_runtime – Maximum runtime in seconds (default: 10)
verbose – Verbosity level (default: 0)
- name = 'NERO'
- kb
- ns = None
- num_embedding_dim = 50
- neural_architecture = 'DeepSet'
- learning_rate = 0.001
- num_epochs = 100
- batch_size = 32
- num_workers = 4
- max_runtime = 10
- verbose = 0
- search_tree
- refinement_op = None
- device
- model = None
- instance_idx_mapping = None
- idx_to_instance_mapping = None
- target_class_expressions = None
- expression
- train(learning_problems: List[Tuple[List[str], List[str]]])[source]
Train the NERO model on learning problems.
- Parameters:
learning_problems – List of (positive_examples, negative_examples) tuples
- search(pos: Set[str], neg: Set[str], top_k: int = 10, max_child_length: int = 10, max_queue_size: int = 10000) Dict[source]
Perform reinforcement learning-based search for complex class expressions. Uses neural predictions to initialize and guide the search.
- search_with_smart_init(pos: Set[str], neg: Set[str], top_k: int = 10) Dict[source]
Search with smart initialization from neural predictions (model.py compatible). This uses neural model predictions to guide the symbolic refinement search.
- fit(learning_problem: PosNegLPStandard, max_runtime: int | None = None)[source]
Fit the model to a learning problem (Ontolearn-compatible interface). This now includes training the neural model and performing the search.
- best_hypothesis() str | None[source]
Return the best hypothesis (Ontolearn-compatible interface).
- Returns:
The best predicted class expression as a string
- best_hypothesis_quality() float[source]
Return the quality of the best hypothesis.
- Returns:
The F-measure/quality of the best prediction
- forward(xpos: torch.Tensor, xneg: torch.Tensor) torch.Tensor[source]
Forward pass through the neural model.
- Parameters:
xpos – Tensor of positive example indices
xneg – Tensor of negative example indices
- Returns:
Predictions for target class expressions
- positive_expression_embeddings(individuals: List[str]) torch.Tensor[source]
Get embeddings for positive individuals.
- Parameters:
individuals – List of individual URIs
- Returns:
Tensor of embeddings
- negative_expression_embeddings(individuals: List[str]) torch.Tensor[source]
Get embeddings for negative individuals.
- Parameters:
individuals – List of individual URIs
- Returns:
Tensor of embeddings
- downward_refine(expression, max_length: int | None = None) Set[source]
Top-down/downward refinement operator from original NERO.
This implements the refinement logic from model.py: ∀s ∈ StateSpace : ρ(s) ⊆ {s^i ∈ StateSpace | s^i ⊑ s}
- Parameters:
expression – Expression to refine
max_length – Maximum length constraint for refinements
- Returns:
Set of refined expressions
- upward_refine(expression) Set[source]
Bottom-up/upward refinement operator from original NERO.
This implements the generalization logic: ∀s ∈ StateSpace : ρ(s) ⊆ {s^i ∈ StateSpace | s ⊑ s^i}
- Parameters:
expression – Expression to generalize
- Returns:
Set of generalized expressions
- search_with_init(top_prediction_queue: SearchTree, set_pos: Set[str], set_neg: Set[str]) SearchTree[source]
Standard search with smart initialization (from original model.py).
This is the key search method that combines neural predictions with symbolic refinement.
- Parameters:
top_prediction_queue – Priority queue initialized with neural predictions
set_pos – Set of positive examples
set_neg – Set of negative examples
- Returns:
SearchTree with explored and refined expressions
- fit_from_iterable(pos: List[str], neg: List[str], top_k: int = 10, use_search: str = 'SmartInit') Dict[source]
Fit method compatible with original NERO’s model.py interface.
This implements the complete prediction pipeline from the original NERO: (1) neural prediction to get the top-k candidates; (2) quality evaluation; (3) optional symbolic search for refinement.
- Parameters:
pos – List of positive example URIs
neg – List of negative example URIs
top_k – Number of top neural predictions to consider
use_search – Search strategy (‘SmartInit’, ‘None’, or None)
- Returns:
Dictionary with prediction results
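A minimal NERO sketch following the interface documented above (assuming kb and lp as in the earlier sketches):
>>> from ontolearn.learners import NERO
>>> model = NERO(knowledge_base=kb, neural_architecture="DeepSet", num_epochs=50)
>>> model.fit(lp, max_runtime=10)
>>> print(model.best_hypothesis(), model.best_hypothesis_quality())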
- class ontolearn.learners.OCEL(knowledge_base: AbstractKnowledgeBase, reasoner: owlapy.abstracts.AbstractOWLReasoner | None = None, refinement_operator: BaseRefinement[OENode] | None = None, quality_func: AbstractScorer | None = None, heuristic_func: AbstractHeuristic | None = None, terminate_on_goal: bool | None = None, iter_bound: int | None = None, max_num_of_concepts_tested: int | None = None, max_runtime: int | None = None, max_results: int = 10, best_only: bool = False, calculate_min_max: bool = True)[source]
Bases:
ontolearn.learners.celoe.CELOE
A limited version of CELOE.
- best_descriptions
Best hypotheses ordered.
- Type:
EvaluatedDescriptionSet[OENode, QualityOrderedNode]
- best_only
If False, pick only nodes with quality < 1.0; otherwise, pick nodes without quality restrictions.
- Type:
bool
- calculate_min_max
Whether to calculate the minimum and maximum horizontal expansion (for statistical purposes only).
- Type:
bool
- heuristic_func
Function to guide the search heuristic.
- Type:
AbstractHeuristic
- iter_bound
Limit to stop the algorithm after n refinement steps are done.
- Type:
int
- kb
The knowledge base that the concept learner is using.
- Type:
AbstractKnowledgeBase
- max_child_length
Limit the length of concepts generated by the refinement operator.
- Type:
int
- max_he
Maximal value of horizontal expansion.
- Type:
int
- max_num_of_concepts_tested
Limit to stop the algorithm after n concepts tested.
- Type:
int
- max_runtime
Limit to stop the algorithm after n seconds.
- Type:
int
- min_he
Minimal value of horizontal expansion.
- Type:
int
- name
Name of the model = ‘ocel_python’.
- Type:
str
- _number_of_tested_concepts
Stores the number of tested concepts.
- Type:
int
- operator
Operator used to generate refinements.
- Type:
BaseRefinement
- quality_func
Function to evaluate the quality of solution concepts.
- Type:
AbstractScorer
- reasoner
The reasoner that this model is using.
- Type:
AbstractOWLReasoner
- search_tree
Dict to store the TreeNode for a class expression.
- start_class
The starting class expression for the refinement operation.
- Type:
OWLClassExpression
- start_time
The time when fit() starts executing; used to compute the total time fit() takes to execute.
- Type:
float
- terminate_on_goal
Whether to stop the algorithm if a perfect solution is found.
- Type:
bool
- __slots__ = ()
- name = 'ocel_python'
- class ontolearn.learners.ROCES(knowledge_base, nces2_or_roces=True, quality_func: AbstractScorer | None = None, num_predictions=5, k=5, path_of_trained_models=None, auto_train=True, proj_dim=128, rnn_n_layers=2, drop_prob=0.1, num_heads=4, num_seeds=1, m=[32, 64, 128], ln=False, embedding_dim=128, sampling_strategy='p', input_dropout=0.0, feature_map_dropout=0.1, kernel_size=4, num_of_output_channels=32, learning_rate=0.0001, tmax=20, eta_min=1e-05, clip_value=5.0, batch_size=256, num_workers=4, max_length=48, load_pretrained=True, verbose: int = 0, data=[], enforce_validity: bool | None = None)[source]
Bases:
ontolearn.learners.nces2.NCES2
Robust Class Expression Synthesis in Description Logics via Iterative Sampling.
- name = 'ROCES'
- k = 5
- enforce_validity = None
- class ontolearn.learners.SPARQLQueryLearner(learning_problem: PosNegLPStandard, endpoint_url: str, max_number_of_filters: int = 3, use_complex_filters: bool = True)[source]
Learning SPARQL queries: given a description logic concept (potentially generated by a concept learner), try to improve the fitness (e.g., F1) of the corresponding SPARQL query.
- Attributes:
name (str): Name of the model = 'SPARQL Query Learner'.
endpoint_url (str): The URL of the SPARQL endpoint to use.
max_number_of_filters (int): Limit on the number of filters combined during the improvement process.
learning_problem (PosNegLPStandard): The learning problem (sets of positive and negative examples).
uses_complex_filters (bool): Denotes whether the learner uses complex filters (i.e., makes use of the values of data properties) to improve the quality.
_root_var (str): The root variable to be used in the OWL2SPARQL conversion.
_possible_filters (List[str]): A list of possible FILTERs to use to improve the quality.
- __slots__ = ('endpoint_url', 'max_number_of_filters', 'uses_complex_filters', 'learning_problem',...
- name = 'SPARQL Query Learner'
- endpoint_url: str
- max_number_of_filters: int
- learning_problem: PosNegLPStandard
- uses_complex_filters: bool
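A construction sketch (the endpoint URL is hypothetical; the learner is initialised with a learning problem and a SPARQL endpoint):
>>> from ontolearn.learners import SPARQLQueryLearner
>>> sparql_learner = SPARQLQueryLearner(learning_problem=lp,
...                                     endpoint_url="http://localhost:3030/family/sparql")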
- class ontolearn.learners.SPELL(knowledge_base: AbstractKnowledgeBase, reasoner: owlapy.abstracts.AbstractOWLReasoner | None = None, max_runtime: int | None = 60, max_query_size: int = 10, starting_query_size: int = 1, search_mode: str = 'full_approx')[source]
Bases:
ontolearn.learners.sat_base.SATBaseLearner
SPELL: SAT-based concept learner using general SPELL fitting.
This learner uses SAT solvers to find concept expressions that fit positive and negative examples. Unlike ALCSAT which is specialized for ALC, SPELL uses the more general fitting.py module which supports different modes of operation.
The algorithm incrementally searches for queries of increasing size that maximize the coverage on the given examples.
- kb
The knowledge base that the concept learner is using.
- Type:
AbstractKnowledgeBase
- max_query_size
Maximum size of queries to search for.
- Type:
int
- search_mode
Search mode - exact, neg_approx, or full_approx.
- _best_hypothesis
Best found hypothesis.
- Type:
OWLClassExpression
- _best_hypothesis_accuracy
Accuracy of the best hypothesis.
- Type:
float
- _ind_to_owl
Mapping from internal individual indices to OWL individuals.
- Type:
dict
- _owl_to_ind
Mapping from OWL individuals to internal indices.
- Type:
dict
- __slots__ = ('max_query_size', 'starting_query_size', 'search_mode')
- name = 'spell'
- max_query_size = 10
- starting_query_size = 1
- search_mode
- fit(lp: PosNegLPStandard)[source]
Find concept expressions that explain positive and negative examples.
- Parameters:
lp – Learning problem with positive and negative examples.
- Returns:
self
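A minimal SPELL sketch (as with ALCSAT, the best-hypothesis interface is assumed to come from the SATBaseLearner base class):
>>> from ontolearn.learners import SPELL
>>> model = SPELL(knowledge_base=kb, search_mode="exact", max_runtime=60)
>>> model.fit(lp)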
- class ontolearn.learners.TDL(knowledge_base, use_inverse: bool = True, use_data_properties: bool = True, use_nominals: bool = True, use_card_restrictions: bool = True, kwargs_classifier: dict = None, max_runtime: int = 1, grid_search_over: dict = None, grid_search_apply: bool = False, kwargs_grid_search: dict = {}, report_classification: bool = True, plot_tree: bool = False, plot_embeddings: bool = False, plot_feature_importance: bool = False, verbose: int = 10, verbalize: bool = False)[source]
Tree-based Description Logic Concept Learner
- use_inverse = True
- use_data_properties = True
- use_nominals = True
- use_card_restrictions = True
- verbose = 10
- grid_search_over = None
- kwargs_grid_search
- knowledge_base
- report_classification = True
- plot_tree = False
- plot_embeddings = False
- plot_feature_importance = False
- clf = None
- kwargs_classifier
- max_runtime = 1
- features = None
- disjunction_of_conjunctive_concepts = None
- conjunctive_concepts = None
- owl_class_expressions
- cbd_mapping: Dict[str, Set[Tuple[str, str]]]
- types_of_individuals
- verbalize = False
- data_property_cast
- X = None
- y = None
- extract_expressions_from_owl_individuals(individuals: List[owlapy.owl_individual.OWLNamedIndividual]) Tuple[numpy.ndarray, List[owlapy.class_expression.OWLClassExpression]][source]
- create_training_data(learning_problem: PosNegLPStandard) Tuple[pandas.DataFrame, pandas.DataFrame][source]
- construct_owl_expression_from_tree(X: pandas.DataFrame, y: pandas.DataFrame) List[owlapy.class_expression.OWLObjectIntersectionOf][source]
Construct an OWL class expression from a decision tree.
- fit(learning_problem: PosNegLPStandard = None, max_runtime: int = None)[source]
Fit the learner to the given learning problem.
(1) Extract multi-hop information about E⁺ and E⁻.
(2) Create OWL class expressions from (1).
(3) Build a binary sparse training matrix X, where the first |E⁺| rows are the binary representations of the positives and the remaining rows are the binary representations of E⁻.
(4) Create binary labels.
(5) Construct a set of DL concepts for each e ∈ E⁺.
(6) Take the union of (5).
- Parameters:
learning_problem – The learning problem.
max_runtime – Total runtime of the learning.
- property classification_report: str
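A minimal TDL sketch; with report_classification=True (the default), a classification report is available after fitting:
>>> from ontolearn.learners import TDL
>>> model = TDL(knowledge_base=kb, max_runtime=60)
>>> model.fit(lp)
>>> print(model.classification_report)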