dicee.knowledge_graph_embeddings
================================

.. py:module:: dicee.knowledge_graph_embeddings

Classes
-------

.. autoapisummary::

   dicee.knowledge_graph_embeddings.KGE

Module Contents
---------------

.. py:class:: KGE(path=None, url=None, construct_ensemble=False, model_name=None)

   Bases: :py:obj:`dicee.abstracts.BaseInteractiveKGE`

   Knowledge Graph Embedding class for interactive use of pre-trained models.

   .. py:method:: __str__()

   .. py:method:: to(device: str) -> None

   .. py:method:: get_transductive_entity_embeddings(indices: Union[torch.LongTensor, List[str]], as_pytorch=False, as_numpy=False, as_list=True) -> Union[torch.FloatTensor, numpy.ndarray, List[float]]

   .. py:method:: create_vector_database(collection_name: str, distance: str, location: str = 'localhost', port: int = 6333)

   .. py:method:: generate(h='', r='')

   .. py:method:: eval_lp_performance(dataset: List[Tuple[str, str, str]], filtered=True)

   .. py:method:: predict_missing_head_entity(relation: Union[List[str], str], tail_entity: Union[List[str], str], within=None) -> Tuple

      Given a relation and a tail entity, return the top-k ranked head entities:
      :math:`\arg\max_{e \in E} f(e, r, t)`, where :math:`r \in R` and :math:`t \in E`.

      Parameters
      ----------
      relation: Union[List[str], str]
          String representation of selected relations.
      tail_entity: Union[List[str], str]
          String representation of selected entities.
      within: List[str], optional
          Restrict the ranking to this set of candidate entities.

      Returns
      -------
      Tuple
          Highest k scores and entities.

   .. py:method:: predict_missing_relations(head_entity: Union[List[str], str], tail_entity: Union[List[str], str], within=None) -> Tuple

      Given a head entity and a tail entity, return the top-k ranked relations:
      :math:`\arg\max_{r \in R} f(h, r, t)`, where :math:`h, t \in E`.

      Parameters
      ----------
      head_entity: Union[List[str], str]
          String representation of selected entities.
      tail_entity: Union[List[str], str]
          String representation of selected entities.
      within: List[str], optional
          Restrict the ranking to this set of candidate relations.

      Returns
      -------
      Tuple
          Highest k scores and relations.
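   The prediction methods above rank every candidate by the model's scoring function :math:`f` and keep the k best. A minimal, dependency-free sketch of that ranking step (the score table and names below are hypothetical toy data, not dicee's implementation, which computes :math:`f` with a trained embedding model):

   ```python
   # Toy sketch of top-k head prediction: rank every candidate head entity e
   # by a score f(e, r, t) and keep the k best. The score table is made up
   # for illustration only.
   def predict_missing_head(scores, relation, tail, k=2):
       # scores maps (head, relation, tail) -> float; absent triples score 0.0
       candidates = {h for (h, _, _) in scores}
       ranked = sorted(
           candidates,
           key=lambda h: scores.get((h, relation, tail), 0.0),
           reverse=True,
       )
       return [(h, scores.get((h, relation, tail), 0.0)) for h in ranked[:k]]

   toy_scores = {
       ("alice", "knows", "bob"): 0.9,
       ("carol", "knows", "bob"): 0.4,
       ("dave",  "knows", "bob"): 0.7,
   }
   top = predict_missing_head(toy_scores, "knows", "bob", k=2)
   # top == [("alice", 0.9), ("dave", 0.7)]
   ```

   The same argmax pattern underlies relation and tail prediction; only the slot being ranked changes.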
   .. py:method:: predict_missing_tail_entity(head_entity: Union[List[str], str], relation: Union[List[str], str], within: List[str] = None) -> torch.FloatTensor

      Given a head entity and a relation, return the top-k ranked tail entities:
      :math:`\arg\max_{e \in E} f(h, r, e)`, where :math:`h \in E` and :math:`r \in R`.

      Parameters
      ----------
      head_entity: Union[List[str], str]
          String representation of selected entities.
      relation: Union[List[str], str]
          String representation of selected relations.
      within: List[str], optional
          Restrict the ranking to this set of candidate entities.

      Returns
      -------
      torch.FloatTensor
          Scores.

   .. py:method:: predict(*, h: Union[List[str], str] = None, r: Union[List[str], str] = None, t: Union[List[str], str] = None, within=None, logits=True) -> torch.FloatTensor

      :param h: Head entity or entities.
      :param r: Relation or relations.
      :param t: Tail entity or entities.
      :param within: Restrict predictions to this set of candidates.
      :param logits: If True, return unnormalized scores.

   .. py:method:: predict_topk(*, h: Union[str, List[str]] = None, r: Union[str, List[str]] = None, t: Union[str, List[str]] = None, topk: int = 10, within: List[str] = None)

      Predict the missing item in a given triple.

      Parameters
      ----------
      h: Union[str, List[str]]
          String representation of selected entities.
      r: Union[str, List[str]]
          String representation of selected relations.
      t: Union[str, List[str]]
          String representation of selected entities.
      topk: int
          Number of highest-ranked items to return.
      within: List[str], optional
          Restrict predictions to this set of candidates.

      Returns
      -------
      Tuple
          Highest k scores and items.

   .. py:method:: triple_score(h: Union[List[str], str] = None, r: Union[List[str], str] = None, t: Union[List[str], str] = None, logits=False) -> torch.FloatTensor

      Predict the score of a triple.

      Parameters
      ----------
      h: Union[List[str], str]
          String representation of selected entities.
      r: Union[List[str], str]
          String representation of selected relations.
      t: Union[List[str], str]
          String representation of selected entities.
      logits: bool
          If True, return the unnormalized score.

      Returns
      -------
      torch.FloatTensor
          Tensor of triple scores.

   .. py:method:: t_norm(tens_1: torch.Tensor, tens_2: torch.Tensor, tnorm: str = 'min') -> torch.Tensor
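   ``t_norm`` combines two fuzzy score tensors with a t-norm, a generalization of logical AND to scores in :math:`[0, 1]`. The two operators named in the signatures, ``min`` and ``prod``, can be sketched on plain lists (a hypothetical helper for illustration, not dicee's torch-based implementation):

   ```python
   # T-norms generalize conjunction to [0, 1]: 'min' takes the elementwise
   # minimum, 'prod' the elementwise product. Both agree with Boolean AND
   # on the endpoints 0 and 1.
   def t_norm(xs, ys, tnorm="min"):
       if tnorm == "min":
           return [min(x, y) for x, y in zip(xs, ys)]
       if tnorm == "prod":
           return [x * y for x, y in zip(xs, ys)]
       raise ValueError(f"unknown t-norm: {tnorm}")

   a = [0.9, 0.2, 0.6]
   b = [0.5, 0.8, 0.6]
   conj_min = t_norm(a, b, "min")    # [0.5, 0.2, 0.6]
   conj_prod = t_norm(a, b, "prod")
   ```

   ``prod`` penalizes two mediocre sub-scores more heavily than ``min`` does, which is why it is the default in ``answer_multi_hop_query`` below.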
   .. py:method:: tensor_t_norm(subquery_scores: torch.FloatTensor, tnorm: str = 'min') -> torch.FloatTensor

      Compute a t-norm over :math:`[0, 1]^{n \times d}`, where :math:`n` denotes the number of hops and :math:`d` denotes the number of entities.

   .. py:method:: t_conorm(tens_1: torch.Tensor, tens_2: torch.Tensor, tconorm: str = 'min') -> torch.Tensor

   .. py:method:: negnorm(tens_1: torch.Tensor, lambda_: float, neg_norm: str = 'standard') -> torch.Tensor

   .. py:method:: return_multi_hop_query_results(aggregated_query_for_all_entities, k: int, only_scores)

   .. py:method:: single_hop_query_answering(query: tuple, only_scores: bool = True, k: int = None)

   .. py:method:: answer_multi_hop_query(query_type: str = None, query: Tuple[Union[str, Tuple[str, str]], Ellipsis] = None, queries: List[Tuple[Union[str, Tuple[str, str]], Ellipsis]] = None, tnorm: str = 'prod', neg_norm: str = 'standard', lambda_: float = 0.0, k: int = 10, only_scores=False) -> List[Tuple[str, torch.Tensor]]

      Find an answer set for EPFO queries, including negation and disjunction.

      .. todo:: Refactoring is needed.
      .. todo:: Score computation for each query type should be done in a static function.

      Parameters
      ----------
      query_type: str
          The type of the query, e.g., "2p".
      query: Tuple[Union[str, Tuple[str, str]], ...]
          The query itself, either a string or a nested tuple.
      queries: List[Tuple[Union[str, Tuple[str, str]], ...]]
          A list of queries.
      tnorm: str
          The t-norm operator.
      neg_norm: str
          The negation norm.
      lambda_: float
          Lambda parameter for the Sugeno and Yager negation norms.
      k: int
          The top-k substitutions for intermediate variables.

      Returns
      -------
      List[Tuple[str, torch.Tensor]]
          Entities and corresponding scores, sorted in descending order of score.
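   ``negnorm`` applies a fuzzy negation to scores in :math:`[0, 1]`. The three norms named in ``answer_multi_hop_query`` (standard, Sugeno, Yager) are sketched below from their textbook definitions; this scalar helper is illustrative only, not dicee's torch-based implementation:

   ```python
   # Fuzzy negations map a score x in [0, 1] to its "complement": all three
   # send 0 -> 1 and 1 -> 0, but curve differently in between.
   def negnorm(x, lambda_=0.0, neg_norm="standard"):
       if neg_norm == "standard":
           return 1.0 - x                           # classical complement
       if neg_norm == "sugeno":
           return (1.0 - x) / (1.0 + lambda_ * x)   # requires lambda_ > -1
       if neg_norm == "yager":
           return (1.0 - x ** lambda_) ** (1.0 / lambda_)  # requires lambda_ > 0
       raise ValueError(f"unknown negation norm: {neg_norm}")

   std = negnorm(0.25)                      # 0.75
   sug = negnorm(0.5, lambda_=1.0, neg_norm="sugeno")  # 0.5 / 1.5
   yag = negnorm(0.5, lambda_=1.0, neg_norm="yager")   # reduces to 1 - x
   ```

   With ``lambda_ = 0.0`` the Sugeno norm reduces to the standard complement, consistent with the method's defaults.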
   .. py:method:: find_missing_triples(confidence: float, entities: List[str] = None, relations: List[str] = None, topk: int = 10, at_most: int = sys.maxsize) -> Set

      Find missing triples.

      Iterate over a set of entities :math:`E` and a set of relations :math:`R`:
      :math:`\forall e \in E` and :math:`\forall r \in R`, compute :math:`f(e, r, x)`.
      Return triples :math:`(e, r, x) \notin G` with :math:`f(e, r, x) >` confidence.

      Parameters
      ----------
      confidence: float
          A threshold on the output of a sigmoid function given a triple.
      topk: int
          Number of highest-ranked items used to select triples with :math:`f(e, r, x) >` confidence.
      at_most: int
          Stop after finding at_most missing triples.

      Returns
      -------
      Set
          :math:`\{(e, r, x) \mid f(e, r, x) > \text{confidence} \land (e, r, x) \notin G\}`

   .. py:method:: deploy(share: bool = False, top_k: int = 10)

   .. py:method:: train_triples(h: List[str], r: List[str], t: List[str], labels: List[float], iteration=2, optimizer=None)

   .. py:method:: train_k_vs_all(h, r, iteration=1, lr=0.001)

      Train in KvsAll fashion.

      :param h: Head entity.
      :param r: Relation.
      :param iteration: Number of iterations.
      :param lr: Learning rate.
      :return:

   .. py:method:: train(kg, lr=0.1, epoch=10, batch_size=32, neg_sample_ratio=10, num_workers=1) -> None

      Retrain a pre-trained model on an input KG via negative sampling.
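   The search in ``find_missing_triples`` can be pictured as a loop over candidate triples that keeps those absent from :math:`G` whose sigmoid-transformed score clears the confidence threshold. A self-contained toy version (the raw-score table and names are hypothetical, not dicee's implementation, which scores candidates with the trained model):

   ```python
   import math

   def sigmoid(z):
       return 1.0 / (1.0 + math.exp(-z))

   def find_missing_triples(raw_scores, known, confidence, at_most=10):
       # raw_scores maps (e, r, x) -> raw model score; known is the triple set G.
       found = set()
       for triple, z in raw_scores.items():
           if triple in known:
               continue  # already in G, so not "missing"
           if sigmoid(z) > confidence:
               found.add(triple)
               if len(found) >= at_most:
                   break  # honor the at_most cap
       return found

   raw_scores = {
       ("alice", "knows", "bob"):   4.0,   # in G: skipped
       ("alice", "knows", "carol"): 3.0,   # sigmoid ~ 0.95: kept
       ("bob",   "knows", "carol"): -2.0,  # sigmoid ~ 0.12: rejected
   }
   known = {("alice", "knows", "bob")}
   missing = find_missing_triples(raw_scores, known, confidence=0.9)
   # missing == {("alice", "knows", "carol")}
   ```

   The real method additionally restricts the loop to the supplied ``entities`` and ``relations`` and considers only the ``topk`` tail candidates per (e, r) pair, which keeps the search tractable on large graphs.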