ontolearn.nero_architectures
Classes
MAB — Multi-head Attention Block.
SAB — Self-Attention Block.
ISAB — Induced Self-Attention Block.
PMA — Pooling by Multihead Attention.
SetTransformer — Set Transformer architecture.
DeepSet — DeepSet neural architecture for set-based learning.
SetTransformerNet — Set Transformer based architecture.
Module Contents
- class ontolearn.nero_architectures.MAB(dim_Q, dim_K, dim_V, num_heads, ln=False)[source]
Bases: torch.nn.Module
Multi-head Attention Block.
- dim_V
- num_heads
- fc_q
- fc_k
- fc_v
- fc_o
- class ontolearn.nero_architectures.SAB(dim_in, dim_out, num_heads, ln=False)[source]
Bases: torch.nn.Module
Self-Attention Block.
- mab
- class ontolearn.nero_architectures.ISAB(dim_in, dim_out, num_heads, num_inds, ln=False)[source]
Bases: torch.nn.Module
Induced Self-Attention Block.
- I
- mab0
- mab1
- class ontolearn.nero_architectures.PMA(dim, num_heads, num_seeds, ln=False)[source]
Bases: torch.nn.Module
Pooling by Multihead Attention.
- S
- mab
- class ontolearn.nero_architectures.SetTransformer(dim_input, num_outputs, dim_output, num_inds=32, dim_hidden=128, num_heads=4, ln=False)[source]
Bases: torch.nn.Module
Set Transformer architecture.
- enc
- dec
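The signatures and attribute names above (`fc_q`, `fc_k`, `fc_v`, `fc_o`, `mab`, `I`, `mab0`, `mab1`, `S`, `enc`, `dec`) match the reference Set Transformer implementation of Lee et al. (2019). A minimal sketch along those lines — not necessarily identical to the ontolearn code — showing how the four blocks compose into the `enc`/`dec` pipeline:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MAB(nn.Module):
    """Multi-head Attention Block: queries Q attend over a key/value set K."""
    def __init__(self, dim_Q, dim_K, dim_V, num_heads, ln=False):
        super().__init__()
        self.dim_V, self.num_heads = dim_V, num_heads
        self.fc_q = nn.Linear(dim_Q, dim_V)
        self.fc_k = nn.Linear(dim_K, dim_V)
        self.fc_v = nn.Linear(dim_K, dim_V)
        self.fc_o = nn.Linear(dim_V, dim_V)
        if ln:
            self.ln0, self.ln1 = nn.LayerNorm(dim_V), nn.LayerNorm(dim_V)

    def forward(self, Q, K):
        Q = self.fc_q(Q)
        K_, V = self.fc_k(K), self.fc_v(K)
        # Split dim_V across heads, stacking heads along the batch dimension.
        dim_split = self.dim_V // self.num_heads
        Q_ = torch.cat(Q.split(dim_split, 2), dim=0)
        K_ = torch.cat(K_.split(dim_split, 2), dim=0)
        V_ = torch.cat(V.split(dim_split, 2), dim=0)
        A = torch.softmax(Q_.bmm(K_.transpose(1, 2)) / math.sqrt(self.dim_V), dim=2)
        O = torch.cat((Q_ + A.bmm(V_)).split(Q.size(0), 0), dim=2)
        O = self.ln0(O) if hasattr(self, 'ln0') else O
        O = O + F.relu(self.fc_o(O))
        return self.ln1(O) if hasattr(self, 'ln1') else O

class SAB(nn.Module):
    """Self-Attention Block: every set element attends to every other."""
    def __init__(self, dim_in, dim_out, num_heads, ln=False):
        super().__init__()
        self.mab = MAB(dim_in, dim_in, dim_out, num_heads, ln=ln)
    def forward(self, X):
        return self.mab(X, X)

class ISAB(nn.Module):
    """Induced Self-Attention Block: attention routed through num_inds
    learned inducing points I, reducing cost from O(n^2) to O(n*m)."""
    def __init__(self, dim_in, dim_out, num_heads, num_inds, ln=False):
        super().__init__()
        self.I = nn.Parameter(torch.empty(1, num_inds, dim_out))
        nn.init.xavier_uniform_(self.I)
        self.mab0 = MAB(dim_out, dim_in, dim_out, num_heads, ln=ln)
        self.mab1 = MAB(dim_in, dim_out, dim_out, num_heads, ln=ln)
    def forward(self, X):
        H = self.mab0(self.I.repeat(X.size(0), 1, 1), X)
        return self.mab1(X, H)

class PMA(nn.Module):
    """Pooling by Multihead Attention: num_seeds learned seeds S pool the set."""
    def __init__(self, dim, num_heads, num_seeds, ln=False):
        super().__init__()
        self.S = nn.Parameter(torch.empty(1, num_seeds, dim))
        nn.init.xavier_uniform_(self.S)
        self.mab = MAB(dim, dim, dim, num_heads, ln=ln)
    def forward(self, X):
        return self.mab(self.S.repeat(X.size(0), 1, 1), X)

class SetTransformer(nn.Module):
    """Permutation-invariant encoder (ISAB stack) + decoder (PMA, SAB, linear)."""
    def __init__(self, dim_input, num_outputs, dim_output,
                 num_inds=32, dim_hidden=128, num_heads=4, ln=False):
        super().__init__()
        self.enc = nn.Sequential(
            ISAB(dim_input, dim_hidden, num_heads, num_inds, ln=ln),
            ISAB(dim_hidden, dim_hidden, num_heads, num_inds, ln=ln))
        self.dec = nn.Sequential(
            PMA(dim_hidden, num_heads, num_outputs, ln=ln),
            SAB(dim_hidden, dim_hidden, num_heads, ln=ln),
            nn.Linear(dim_hidden, dim_output))
    def forward(self, X):
        return self.dec(self.enc(X))
```

Usage: an input of shape `(batch, set_size, dim_input)` yields `(batch, num_outputs, dim_output)` regardless of `set_size` — e.g. `SetTransformer(3, 1, 10)(torch.rand(4, 20, 3))` has shape `(4, 1, 10)`.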
- class ontolearn.nero_architectures.DeepSet(num_instances: int, num_embedding_dim: int, num_outputs: int)[source]
Bases: torch.nn.Module
DeepSet neural architecture for set-based learning.
- name = 'DeepSet'
- num_instances
- num_embedding_dim
- num_outputs
- embeddings
- fc0
- fc1
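A sketch of the DeepSet class consistent with the documented attributes (`embeddings`, `fc0`, `fc1`): embed instance indices, pool them with a permutation-invariant sum, and score with two fully connected layers. The pooling choice (sum) and the activation are assumptions, not taken from the ontolearn source:

```python
import torch
import torch.nn as nn

class DeepSet(nn.Module):
    """DeepSet-style scorer over sets of instance indices."""
    name = 'DeepSet'

    def __init__(self, num_instances: int, num_embedding_dim: int, num_outputs: int):
        super().__init__()
        self.num_instances = num_instances
        self.num_embedding_dim = num_embedding_dim
        self.num_outputs = num_outputs
        self.embeddings = nn.Embedding(num_instances, num_embedding_dim)
        self.fc0 = nn.Linear(num_embedding_dim, num_embedding_dim)
        self.fc1 = nn.Linear(num_embedding_dim, num_outputs)

    def forward(self, idx_sets):
        # idx_sets: (batch, set_size) integer instance indices
        x = self.embeddings(idx_sets)       # (batch, set_size, dim)
        x = x.sum(dim=1)                    # sum pooling: permutation invariant
        return self.fc1(torch.relu(self.fc0(x)))  # (batch, num_outputs)
```

Because the sum ignores element order, reordering a set leaves the output unchanged, which is the defining property of set-based learning here.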
- class ontolearn.nero_architectures.SetTransformerNet(num_instances: int, num_embedding_dim: int, num_outputs: int)[source]
Bases: torch.nn.Module
Set Transformer based architecture.
- name = 'ST'
- num_instances
- num_embedding_dim
- num_outputs
- embeddings
- set_transformer_negative
- set_transformer_positive
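The attributes above suggest a two-branch design: positive and negative example sets share one embedding table but are encoded by separate set-transformer branches (`set_transformer_positive`, `set_transformer_negative`). A heavily simplified sketch of that data flow — the `SetEncoder` stand-in (mean-pool plus linear) and the concatenation head are hypothetical placeholders for the real Set Transformer branches and scoring layer:

```python
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Hypothetical stand-in for a Set Transformer branch: mean-pool + linear."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out)
    def forward(self, x):                    # x: (batch, set_size, dim_in)
        return self.proj(x.mean(dim=1))      # permutation-invariant pooling

class SetTransformerNet(nn.Module):
    """Two-branch net: embed positive/negative example index sets, encode
    each branch separately, then score over num_outputs targets."""
    name = 'ST'

    def __init__(self, num_instances: int, num_embedding_dim: int, num_outputs: int):
        super().__init__()
        self.num_instances = num_instances
        self.num_embedding_dim = num_embedding_dim
        self.num_outputs = num_outputs
        self.embeddings = nn.Embedding(num_instances, num_embedding_dim)
        self.set_transformer_positive = SetEncoder(num_embedding_dim, num_embedding_dim)
        self.set_transformer_negative = SetEncoder(num_embedding_dim, num_embedding_dim)
        self.scorer = nn.Linear(2 * num_embedding_dim, num_outputs)  # assumed head

    def forward(self, pos_idx, neg_idx):
        # pos_idx, neg_idx: (batch, set_size) instance indices
        p = self.set_transformer_positive(self.embeddings(pos_idx))
        n = self.set_transformer_negative(self.embeddings(neg_idx))
        return self.scorer(torch.cat([p, n], dim=-1))  # (batch, num_outputs)
```

The point of the sketch is the shape of the architecture, not its internals: both branches see the same embedding space, but each learns its own view of what makes a set "positive" or "negative" evidence.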