dicee.models.function_space
Classes
- FMult: Learning Knowledge Neural Graphs
- GFMult: Learning Knowledge Neural Graphs
- FMult2: Learning Knowledge Neural Graphs
- LFMult1: Embedding with trigonometric functions. We represent all entities and relations in the complex number space.
- LFMult: Embedding with polynomial functions. We represent all entities and relations in the polynomial space.
Module Contents
- class dicee.models.function_space.FMult(args)[source]
Bases:
dicee.models.base_model.BaseKGE
Learning Knowledge Neural Graphs
- name = 'FMult'
- entity_embeddings
- relation_embeddings
- k
- num_sample = 50
- gamma
- roots
- weights
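The `roots` and `weights` attributes suggest the triple score is a numerically integrated product of the learned head, relation, and tail functions. A minimal sketch of that idea, assuming the functions are sampled at `num_sample` points on [0, 1]; the callables `h_fn`, `r_fn`, `t_fn` and the helper name are illustrative stand-ins, not part of the dicee API:

```python
import torch

def quadrature_score(h_fn, r_fn, t_fn, num_sample: int = 50) -> torch.Tensor:
    # FMult stores quadrature roots/weights; a uniform grid on [0, 1]
    # stands in for them in this sketch.
    x = torch.linspace(0.0, 1.0, num_sample)
    # Approximate \int_0^1 h(x) r(x) t(x) dx by the sample mean.
    return (h_fn(x) * r_fn(x) * t_fn(x)).mean()

# Toy callables standing in for learned neural functions.
score = quadrature_score(torch.sin, torch.cos, lambda x: x)
```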
- class dicee.models.function_space.GFMult(args)[source]
Bases:
dicee.models.base_model.BaseKGE
Learning Knowledge Neural Graphs
- name = 'GFMult'
- entity_embeddings
- relation_embeddings
- k
- num_sample = 250
- roots
- weights
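The jump to `num_sample = 250` together with dedicated `roots` and `weights` hints at Gaussian quadrature (presumably the G in GFMult). A hedged sketch of obtaining Gauss-Legendre nodes and weights on [0, 1] and using them for the integral score; this is an assumption about the method, not a reading of the dicee source:

```python
import numpy as np
import torch

# Gauss-Legendre nodes and weights on [-1, 1], shifted to [0, 1].
roots, weights = np.polynomial.legendre.leggauss(250)
roots = torch.from_numpy((roots + 1.0) / 2.0)
weights = torch.from_numpy(weights / 2.0)

# Quadrature approximation of \int_0^1 h(x) r(x) t(x) dx,
# with simple callables standing in for learned functions.
score = (weights * torch.sin(roots) * torch.cos(roots) * roots).sum()
```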
- class dicee.models.function_space.FMult2(args)[source]
Bases:
dicee.models.base_model.BaseKGE
Learning Knowledge Neural Graphs
- name = 'FMult2'
- n_layers = 3
- k
- n = 50
- score_func = 'compositional'
- discrete_points
- entity_embeddings
- relation_embeddings
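With `score_func = 'compositional'` and a fixed grid of `discrete_points`, a plausible reading is that FMult2 composes the relation function with the head function before comparing against the tail on the grid. A sketch under that assumption (all names illustrative):

```python
import torch

def compositional_score(h_fn, r_fn, t_fn, n: int = 50) -> torch.Tensor:
    x = torch.linspace(0.0, 1.0, n)      # stand-in for the discrete_points grid
    composed = r_fn(h_fn(x))             # (r o h)(x): relation applied to head
    return (composed * t_fn(x)).mean()   # compare against the tail on the grid

score = compositional_score(torch.tanh, torch.sin, torch.cos)
```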
- class dicee.models.function_space.LFMult1(args)[source]
Bases:
dicee.models.base_model.BaseKGE
Embedding with trigonometric functions. We represent all entities and relations in the complex number space as f(x) = \sum_{k=0}^{d-1} w_k e^{ikx}, and use the three different scoring functions from the paper to evaluate the score.
- name = 'LFMult1'
- entity_embeddings
- relation_embeddings
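A minimal sketch of evaluating the trigonometric embedding f(x) = \sum_{k=0}^{d-1} w_k e^{ikx} at a grid of points; the helper name `trig_embedding` is illustrative:

```python
import torch

def trig_embedding(w: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # f(x) = sum_{k=0}^{d-1} w_k e^{ikx}, with complex weights w of dimension d.
    d = w.shape[-1]
    k = torch.arange(d, dtype=x.dtype)
    basis = torch.exp(1j * torch.outer(x, k))   # (len(x), d) matrix of e^{ikx}
    return basis @ w                            # f evaluated at each point in x

w = torch.randn(8, dtype=torch.cfloat)          # one embedding, d = 8
x = torch.linspace(0.0, 1.0, 50)
values = trig_embedding(w, x)
```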
- class dicee.models.function_space.LFMult(args)[source]
Bases:
dicee.models.base_model.BaseKGE
Embedding with polynomial functions. We represent all entities and relations in the polynomial space as f(x) = \sum_{i=0}^{d-1} a_i x^{i \bmod d}, and use the three different scoring functions from the paper to evaluate the score. We also consider combining this with neural networks.
- name = 'LFMult'
- entity_embeddings
- relation_embeddings
- degree
- m
- x_values
- poly_NN(x, coefh, coefr, coeft)[source]
Constructs a two-layer NN to represent the embeddings: h = \sigma(w_h^T x + b_h), r = \sigma(w_r^T x + b_r), t = \sigma(w_t^T x + b_t).
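A hedged sketch of the layer structure those equations describe, one sigmoid-activated affine map per embedding (module and parameter names are illustrative):

```python
import torch

class PolyNNSketch(torch.nn.Module):
    """One sigmoid-activated affine map per embedding, as in the equations above."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.wh = torch.nn.Linear(in_dim, out_dim)  # w_h, b_h
        self.wr = torch.nn.Linear(in_dim, out_dim)  # w_r, b_r
        self.wt = torch.nn.Linear(in_dim, out_dim)  # w_t, b_t

    def forward(self, x: torch.Tensor):
        return (torch.sigmoid(self.wh(x)),
                torch.sigmoid(self.wr(x)),
                torch.sigmoid(self.wt(x)))
```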
- scalar_batch_NN(a, b, c)[source]
Element-wise multiplication between a, b, and c. Inputs: a, b, c are torch.Tensors of size batch_size x m x d. Output: a tensor of size batch_size x d.
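Given the stated shapes, the (batch_size, m, d) to (batch_size, d) mapping presumably multiplies element-wise and then reduces over the m axis; a sketch under that assumption (helper name illustrative):

```python
import torch

def scalar_batch_sketch(a: torch.Tensor, b: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    # Element-wise product, then an assumed sum-reduction over the m axis.
    return (a * b * c).sum(dim=1)

a = b = c = torch.randn(2, 3, 4)              # (batch_size, m, d)
print(scalar_batch_sketch(a, b, c).shape)     # torch.Size([2, 4])
```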
- tri_score(coeff_h, coeff_r, coeff_t)[source]
Implements the trilinear scoring technique:
score(h, r, t) = \int_0^1 h(x) r(x) t(x) dx = \sum_{i,j,k=0}^{d-1} \dfrac{a_i b_j c_k}{1 + (i+j+k) \bmod d}
1. Generate the ranges for i, j, and k over [0, d-1].
2. Compute \dfrac{a_i b_j c_k}{1 + (i+j+k) \bmod d} in parallel for every batch.
3. Take the sum over each batch.
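The three steps vectorize naturally with broadcasting. A single-triple sketch of the stated sum (the batched dicee implementation will differ in layout):

```python
import torch

def tri_score_sketch(a: torch.Tensor, b: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    # score = sum_{i,j,k} a_i * b_j * c_k / (1 + (i + j + k) % d)
    d = a.shape[-1]
    i = torch.arange(d)
    denom = 1 + (i[:, None, None] + i[None, :, None] + i[None, None, :]) % d
    num = a[:, None, None] * b[None, :, None] * c[None, None, :]
    return (num / denom).sum()

a, b, c = torch.randn(4), torch.randn(4), torch.randn(4)
print(tri_score_sketch(a, b, c))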
- vtp_score(h, r, t)[source]
Implements the vector triple product scoring technique:
score(h, r, t) = \int_0^1 h(x) r(x) t(x) dx = \sum_{i,j,k=0}^{d-1} \dfrac{a_i c_j b_k - b_i c_j a_k}{(1 + (i+j) \bmod d)(1 + k)}
1. Generate the ranges for i, j, and k over [0, d-1].
2. Compute the first and second terms of the sum.
3. Divide by the denominator.
4. Take the sum over each batch.
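The same broadcasting pattern covers the vector-triple-product sum; again a single-triple sketch of the stated formula, not the batched dicee code:

```python
import torch

def vtp_score_sketch(a: torch.Tensor, b: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    # score = sum_{i,j,k} (a_i*c_j*b_k - b_i*c_j*a_k) / ((1 + (i + j) % d) * (1 + k))
    d = a.shape[-1]
    i = torch.arange(d)
    num = (a[:, None, None] * c[None, :, None] * b[None, None, :]
           - b[:, None, None] * c[None, :, None] * a[None, None, :])
    denom = (1 + (i[:, None, None] + i[None, :, None]) % d) * (1 + i[None, None, :])
    return (num / denom).sum()
```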
- comp_func(h, r, t)[source]
Implements the function composition scoring technique, i.e. score(h, r, t) = \langle h \circ r, t \rangle.
- polynomial(coeff, x, degree)[source]
Takes a matrix tensor of coefficients (coeff), a vector tensor of points x, and the integer range [0, 1, …, d], and returns the vector tensor (coeff[0][0] + coeff[0][1]x + … + coeff[0][d]x^d, coeff[1][0] + coeff[1][1]x + … + coeff[1][d]x^d, …).
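A sketch of that evaluation using a matrix of powers of x (helper name illustrative):

```python
import torch

def polynomial_sketch(coeff: torch.Tensor, x: torch.Tensor, degree: int) -> torch.Tensor:
    # Evaluate each row of `coeff` as a polynomial at every point in `x`.
    powers = x.unsqueeze(-1) ** torch.arange(degree + 1)   # (len(x), degree + 1)
    return coeff @ powers.T                                # (n_polys, len(x))

coeff = torch.tensor([[1.0, 2.0, 3.0],     # 1 + 2x + 3x^2
                      [0.0, 1.0, 0.0]])    # x
x = torch.linspace(0.0, 1.0, 5)
print(polynomial_sketch(coeff, x, degree=2))
```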
- pop(coeff, x, degree)[source]
Evaluates the composition of two polynomials without for loops. Takes a matrix tensor of coefficients (coeff), a matrix tensor of points x, and the integer range [0, 1, …, d], and returns the tensor (coeff[0][0] + coeff[0][1]x + … + coeff[0][d]x^d, coeff[1][0] + coeff[1][1]x + … + coeff[1][d]x^d, …).
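A sketch of the loop-free composition: evaluate the inner polynomial on a grid, then evaluate the outer polynomial row-wise at those values (all names illustrative):

```python
import torch

def pop_sketch(coeff: torch.Tensor, x: torch.Tensor, degree: int) -> torch.Tensor:
    # x holds one row of evaluation points per polynomial in `coeff`.
    powers = x.unsqueeze(-1) ** torch.arange(degree + 1)   # (n, m, degree + 1)
    return torch.einsum('nd,nmd->nm', coeff, powers)       # row-wise evaluation

# Composition p(q(x)): evaluate q on a grid, then evaluate p at q's values.
p = torch.tensor([[1.0, 0.0, 1.0]])                        # p(x) = 1 + x^2
q = torch.tensor([[0.0, 2.0, 0.0]])                        # q(x) = 2x
grid = torch.linspace(0.0, 1.0, 5).unsqueeze(0)            # (1, 5)
q_vals = pop_sketch(q, grid, degree=2)                     # q evaluated on the grid
print(pop_sketch(p, q_vals, degree=2))                     # (p o q)(x) = 1 + 4x^2
```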