ontolearn.learners.nces2
NCES2: Neural Class Expression Synthesis in ALCHIQ(D).
Classes
- NCES2: Neural Class Expression Synthesis in ALCHIQ(D).
Module Contents
- class ontolearn.learners.nces2.NCES2(knowledge_base, nces2_or_roces=True, quality_func: AbstractScorer | None = None, num_predictions=5, path_of_trained_models=None, auto_train=True, proj_dim=128, drop_prob=0.1, num_heads=4, num_seeds=1, m=[32, 64, 128], ln=False, embedding_dim=128, sampling_strategy='nces2', input_dropout=0.0, feature_map_dropout=0.1, kernel_size=4, num_of_output_channels=32, learning_rate=0.0001, tmax=20, eta_min=1e-05, clip_value=5.0, batch_size=256, num_workers=4, max_length=48, load_pretrained=True, verbose: int = 0, data=[], enforce_validity: bool | None = None)[source]
Bases: ontolearn.base_nces.BaseNCES
Neural Class Expression Synthesis in ALCHIQ(D).
- name = 'NCES2'
- knowledge_base
- knowledge_base_path
- triples_data
- num_entities
- num_relations
- path_of_trained_models = None
- embedding_dim = 128
- sampling_strategy = 'nces2'
- input_dropout = 0.0
- feature_map_dropout = 0.1
- kernel_size = 4
- num_of_output_channels = 32
- num_workers = 4
- enforce_validity = None
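Example (illustrative): a minimal sketch of constructing an NCES2 synthesizer. The ontology path `./family.owl` and the use of Ontolearn's KnowledgeBase loader are assumptions for illustration; pass whatever knowledge base your setup provides.

```python
from ontolearn.knowledge_base import KnowledgeBase
from ontolearn.learners.nces2 import NCES2

# Placeholder path to an OWL ontology file; replace with a real one.
kb = KnowledgeBase(path="./family.owl")

# Default hyperparameters; load_pretrained=True attempts to load
# pretrained weights, and auto_train=True is assumed to trigger
# training when no pretrained models are available.
model = NCES2(knowledge_base=kb, num_predictions=5, verbose=1)
```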
- fit_one(pos: List[owlapy.owl_individual.OWLNamedIndividual] | List[str], neg: List[owlapy.owl_individual.OWLNamedIndividual] | List[str])[source]
- fit(learning_problem: PosNegLPStandard, **kwargs)[source]
- best_hypotheses(n=1, return_node: bool = False) → owlapy.class_expression.OWLClassExpression | Iterable[owlapy.class_expression.OWLClassExpression] | AbstractNode | Iterable[AbstractNode] | None[source]
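Example (illustrative): fitting a standard positive/negative learning problem and retrieving the top prediction. The namespace and individual names are placeholders, and `model` is the NCES2 instance from the snippet above.

```python
from owlapy.iri import IRI
from owlapy.owl_individual import OWLNamedIndividual
from ontolearn.learning_problem import PosNegLPStandard

NS = "http://example.com/family#"  # placeholder namespace

# Positive and negative examples given as OWL named individuals.
pos = {OWLNamedIndividual(IRI.create(NS, "anna")),
       OWLNamedIndividual(IRI.create(NS, "maria"))}
neg = {OWLNamedIndividual(IRI.create(NS, "markus"))}

model.fit(PosNegLPStandard(pos=pos, neg=neg))

# Top-ranked synthesized class expression.
print(model.best_hypotheses(n=1))
```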
- fit_from_iterable(data: List[Tuple[str, Set[owlapy.owl_individual.OWLNamedIndividual], Set[owlapy.owl_individual.OWLNamedIndividual]]] | List[Tuple[str, Set[str], Set[str]]], shuffle_examples=False, verbose=False, **kwargs) → List[source]
data is a list of tuples whose first items are strings corresponding to target concepts.
Unlike fit, this method returns predictions as OWL class expressions rather than nodes.
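Example (illustrative): solving several learning problems in one call. Target names and IRIs are placeholders; per the signature above, examples may be given as sets of IRI strings instead of OWLNamedIndividual objects. The assumption that one prediction is returned per input tuple is ours.

```python
# Each tuple: (target concept name, positive examples, negative examples).
data = [
    ("Father",
     {"http://example.com/family#markus"},
     {"http://example.com/family#anna"}),
    ("Sister",
     {"http://example.com/family#anna"},
     {"http://example.com/family#markus"}),
]

# Assumed: one predicted OWL class expression per input tuple.
predictions = model.fit_from_iterable(data, shuffle_examples=False)
for (target, _, _), prediction in zip(data, predictions):
    print(target, "->", prediction)
```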
- train(data: Iterable[List[Tuple]] = None, epochs=50, batch_size=64, max_num_lps=1000, refinement_expressivity=0.2, refs_sample_size=50, learning_rate=0.0001, tmax=20, eta_min=1e-05, clip_value=5.0, num_workers=8, save_model=True, storage_path=None, optimizer='Adam', record_runtime=True, shuffle_examples=False)[source]
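Example (illustrative): (re)training the synthesizer with parameters taken from the signature above. With `data=None`, training problems are assumed to be generated from the knowledge base itself (capped by `max_num_lps`); the storage path is a placeholder.

```python
model.train(
    epochs=50,
    batch_size=64,
    max_num_lps=1000,
    learning_rate=1e-4,
    save_model=True,
    storage_path="./nces2_models",  # placeholder output directory
)
```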