
# Open Physics

### formerly Central European Journal of Physics

Open Access | Online ISSN 2391-5471 | Volume 16, Issue 1

# Ontology learning algorithm using weak functions

Linli Zhu / Gang Hua (corresponding author)

• School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China
• Department of Natural Sciences and Humanities, University of Engineering and Technology, Lahore 54000, Pakistan
Published Online: 2018-12-31 | DOI: https://doi.org/10.1515/phys-2018-0112

## Abstract

Ontologies are widely used in information retrieval, image processing, and various other disciplines. This article discusses how to use a machine learning approach to solve the essential similarity calculation problem in the multi-dividing ontology setting. The ontology function is regarded as a combination of several weak ontology functions, and the optimal ontology function is obtained by an iterative algorithm. In addition, the performance of the algorithm is analyzed from a theoretical point of view by statistical methods, and several results are obtained.

PACS: 05.10.-a; 02.50.-r

## 1 Introduction

As a structured concept representation model, the ontology has been applied to the field of artificial intelligence since its inception, and later to other areas of computing such as machine vision, parallel computing, information query expansion, and mathematical logic representation. In the past decade, the research and application of ontologies as a useful tool has expanded across the engineering sciences. Related applications of ontology models can be found in fields such as chemistry, biology, pharmaceutics, materials science, medicine, neuroscience, and the social sciences. In each application field, a large number of domain ontologies are constructed every year and applied in practice (for instance, the "GO" ontology in gene science and the "PO" ontology in plant science). More related ontology work and engineering applications can be found in Biletskiy et al. [1], Benedikt et al. [2], Rajbabu et al. [3], Vidal et al. [4], Annane et al. [5], Adhikari et al. [6], Mili et al. [7], Ferreira et al. [8], Bayoudhi et al. [9], and Derguech et al. [10].

The core of various ontology algorithms is the similarity calculation between ontology concepts. For ontology mapping, the essence is to calculate the similarity of concepts between different ontologies, so the ontology similarity calculation algorithm we design also applies to ontology mapping. Related studies on ontology mapping can be found in Ding and Foo [11], Kalfoglou and Schorlemmer [12], Currie et al. [13], Wong et al. [14], Qazvinian et al. [15], Nagy and Vargas-Vera [16], Lukasiewicz et al. [17], Arch-int and Arch-int [18], Forsati and Shamsfard [19], and Sicilia et al. [20].

In recent years, as the amount of data processed by various applications has expanded, the data stored and processed by ontologies has also been growing. This has raised the requirements on ontology algorithms in the era of big data, especially in biology and pharmacy, where ontologies are responsible for handling large amounts of information. To meet the needs of practical engineering, learning algorithms have gradually been applied to ontology similarity calculation and ontology mapping, and then to various subject areas. Several ontology learning algorithms and their applications to different engineering problems can be found in Gao and Zhu [21], and Gao et al. [22], [23], [24], [25], and [27].

Several papers have contributed to the theoretical analysis of machine learning based ontology algorithms. Gao et al. [28] studied the strong and weak stability of the k-partite ranking based ontology algorithm. Gao and Xu [29] considered the uniform stability of learning algorithms for ontology similarity computation. Gao and Farahani [30] presented generalization bounds and uniform bounds for multi-dividing ontology algorithms with a convex ontology loss function. Gao et al. [31] proposed a partial multi-dividing ontology algorithm aimed at finding an efficient trick to optimize the partial multi-dividing ontology learning framework, and obtained several theoretical results from a statistical learning theory perspective.

In this paper, we continue the theoretical analysis of ontology learning algorithms and focus on the multi-dividing ontology algorithm. The rest of the paper is organised as follows. First, we introduce the setting of the multi-dividing ontology learning algorithm together with some notation. Then we present the main algorithm, which is based on weak ontology functions. Finally, we give some results and detailed proofs from the perspective of statistical learning theory.

## 2 Setting

We use a graph G = (V, E) to express the structure of the ontology and call it the ontology graph; each vertex represents a concept and each edge indicates a direct relationship (for example, a "belongs to" relation between two concepts). Assume S : V × V → R+ ∪ {0} is the similarity function of the ontology; we usually normalize its values to [0, 1]. That is, the similarity function S : V × V → [0, 1] maps each pair of vertices (concepts) to a real number in the interval from 0 to 1. Let v1 and v2 be two vertices in the ontology graph: S(v1, v2) = 1 indicates that the concepts corresponding to v1 and v2 have the same meaning, while S(v1, v2) = 0 means that there is no relationship between v1 and v2. Fix a threshold M ∈ [0, 1] with the help of field experts; then for a vertex v, we return the set of concepts {u | S(v, u) ≥ M} to the user as similar vertices. In what follows, we always assume that n is the number of ontology samples, called the sample capacity.
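The threshold-based retrieval step described above can be sketched as follows. This is an illustrative sketch only: the similarity values and concept names are made up, and `S` stands in for whatever similarity function the ontology provides.

```python
def similar_concepts(S, vertices, v, M):
    """Return the set {u | S(v, u) >= M} of concepts judged similar to v.

    S: a symmetric similarity function mapping a vertex pair to [0, 1].
    M: the expert-chosen threshold in [0, 1].
    """
    return {u for u in vertices if S(v, u) >= M}

# Toy similarity table on a three-concept ontology (illustrative values only).
scores = {("gene", "gene"): 1.0, ("gene", "protein"): 0.7, ("gene", "plant"): 0.1}
S = lambda a, b: scores.get((a, b), scores.get((b, a), 1.0 if a == b else 0.0))

print(sorted(similar_concepts(S, ["gene", "protein", "plant"], "gene", 0.5)))
# ['gene', 'protein']
```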

Let S = {v1, · · · , vn} be the ontology sample set, drawn independently and identically distributed according to an unknown distribution D (we write vi ∼ D for i ∈ {1, · · · , n}), and let l be the ontology loss function (we always assume it is convex and can be expressed as l(f, v) with respect to an ontology function f : V → R and an ontology sample v). The expected risk of the ontology model is

$er(f) = \mathbb{E}_{v \sim D}\, l(f, v).$

However, er(f) cannot be computed since we do not know D. Instead, it is natural to obtain the optimal ontology function via the following ontology empirical framework:

$\widehat{er}_S(f) = \frac{1}{n} \sum_{i=1}^{n} l(f, v_i).$

In the ontology learning setting, we aim to learn an ontology function f : V → R which maps each vertex to a real number. The similarity between ontology vertices v1 and v2 can then be measured by |f(v1) − f(v2)|: the larger the value of |f(v1) − f(v2)|, the smaller the similarity between v1 and v2; the smaller the value, the larger the similarity. To connect with statistical learning theory, all the information for a vertex v is packaged into a p-dimensional vector. To simplify the exposition, and where no confusion arises, we use v to denote the vertex, its corresponding vector, and its corresponding ontology concept, and this symbol is no longer bolded in what follows.
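The gap-to-similarity correspondence above can be made concrete with a small sketch. The monotone mapping 1 / (1 + gap) into (0, 1] is an illustrative choice of ours, not one fixed by the paper; only the direction (smaller gap, higher similarity) comes from the text.

```python
def similarity_from_f(f, v1, v2):
    """Similarity induced by an ontology function f: a smaller gap
    |f(v1) - f(v2)| means higher similarity. The mapping 1 / (1 + gap)
    is one illustrative monotone choice."""
    gap = abs(f(v1) - f(v2))
    return 1.0 / (1.0 + gap)

# A toy ontology function assigning real scores to three concepts.
f = lambda v: {"a": 2.0, "b": 1.5, "c": 0.0}[v]
```

Here "a" and "b" have close scores, so they come out more similar than "a" and "c".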

Suppose that the (vi, yi) are independent and identically distributed random variables drawn from a certain unknown distribution D, where yi ∈ Y is the label of ontology vertex vi. Fix the ontology function f and denote by l(f, vi, yi) the ontology loss; then the expected ontology risk can be stated as

$er(f) = \int_{V \times Y} l(f, v_i, y_i)\, D(dv_i, dy_i).$

Given the ontology training sample set $\{(v_i, y_i)\}_{i=1}^{n}$, the corresponding empirical ontology risk can be expressed as

$\widehat{er}(f) = \frac{1}{n} \sum_{i=1}^{n} l(f, v_i, y_i).$

In the pairwise setting, we also assume that (vi, yi) and (vj, yj) are independent and identically distributed random variables drawn from a certain unknown distribution D, where yi, yj ∈ Y are the labels of ontology vertices vi and vj. Let l(f, vi, vj, yi,j) be the ontology loss function, where yi,j can be regarded as a function of yi and yj. Thus, the expected ontology risk becomes

$er(f) = \int_{(V \times Y)^2} l(f, v_i, v_j, y_{i,j})\, D(dv_i, dy_i)\, D(dv_j, dy_j).$

With the ontology sample set $\{(v_i, y_i)\}_{i=1}^{n}$, the corresponding empirical ontology risk in the pairwise setting can be denoted by

$\widehat{er}(f) = \frac{2}{n(n-1)} \sum_{i=1}^{n} \sum_{j=i+1}^{n} l(f, v_i, v_j, y_{i,j}).$
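The pairwise empirical risk above is a plain average over unordered sample pairs and translates directly into code. This is a minimal sketch: the 0/1 misordering loss and the toy samples below are our own illustrative choices, not taken from the paper.

```python
def pairwise_empirical_risk(loss, f, samples):
    """Empirical ontology risk in the pairwise setting:
    (2 / (n (n - 1))) * sum over pairs i < j of l(f, v_i, v_j, y_{i,j})."""
    n = len(samples)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            (vi, yi), (vj, yj) = samples[i], samples[j]
            total += loss(f, vi, vj, (yi, yj))
    return 2.0 * total / (n * (n - 1))

# Illustrative 0/1 loss: 1 when f orders the pair opposite to the labels.
mis = lambda f, vi, vj, y: float((f(vi) - f(vj)) * (y[0] - y[1]) < 0)
samples = [(1.0, 1), (2.0, 2), (3.0, 3)]
f = lambda v: v  # a perfectly ordered ontology function on this toy data
```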

## 2.1 Multi-dividing ontology setting

Since most ontology graph structures are trees (acyclic graphs), the multi-dividing ontology learning trick has become popular in recent years. In this special ontology learning setting, all the vertices in the ontology graph are divided into k parts corresponding to k levels. The values of the different levels are determined by domain experts in the specific application. For an ontology function f, what we want is that the real number assigned to a vertex in level a is greater than the real number assigned to a vertex in any level b, where 1 ≤ a < b ≤ k. That is to say, in the ideal case, f(v) > f(v′) whenever the level of vertex v is smaller than the level of vertex v′.

Formally, the learner is given an ontology training set S = (S1, S2, · · · , Sk) ∈ V^{n_1} × V^{n_2} × · · · × V^{n_k} which consists of a sequence of ontology training samples ${S}_{a}=\left({v}_{1}^{a},\cdots ,{v}_{{n}_{a}}^{a}\right)\in {V}^{{n}_{a}}$ (1 ≤ a ≤ k). By virtue of the ontology sample S, a real-valued ontology function f : V → R is learned which assigns future S^a vertices larger values than S^b vertices, where a < b. Let D_a be the conditional distribution for each level 1 ≤ a ≤ k and $n=\sum _{i=1}^{k}{n}_{i}$ the total size of the ontology sample set, where n_i = |S_i| for i ∈ {1, · · · , k}.

The expected multi-dividing ontology risk for an ontology function f : V → R is defined as

$er(f) = \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{v \sim D_a,\, v' \sim D_b}\, l(f, v, v').$

An equivalent expression for the expected ontology risk is

$er(f) = \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \int_{V^a \times V^b} l(f, v^a, v^b)\, D_a(dv^a)\, D_b(dv^b).$

A large class of learning algorithms is generated by regularization schemes. They penalize an empirical error, chosen here to be the multi-dividing empirical error on the ontology graph, defined for an ontology function f : V → R associated with the sample S as

$\widehat{er}_{S,l}(f) = \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_a n_b} \sum_{i=1}^{n_a} \sum_{j=1}^{n_b} l(f, v_i^a, v_j^b).$

Thus, the optimal ontology function can be obtained by $f^{\ast} = \arg\min_{f} \widehat{er}_{S,l}(f)$. We simply write $\widehat{er}_{S,l}(f)$ as $\widehat{er}(f)$.
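The multi-dividing empirical error is a double sum over level pairs, averaged over the cross product of the two levels' samples. A minimal sketch, with an illustrative 0/1 misranking loss and toy levels of our own choosing:

```python
def multi_dividing_empirical_error(loss, f, S):
    """Multi-dividing empirical error on S = (S_1, ..., S_k):
    sum over level pairs a < b of the average loss over S_a x S_b."""
    err = 0.0
    k = len(S)
    for a in range(k - 1):
        for b in range(a + 1, k):
            pair_sum = sum(loss(f, va, vb) for va in S[a] for vb in S[b])
            err += pair_sum / (len(S[a]) * len(S[b]))
    return err

# Illustrative 0/1 loss: penalize whenever the lower-level vertex fails to
# receive a strictly larger f-value than the higher-level vertex.
mis = lambda f, va, vb: float(f(va) <= f(vb))
S_levels = [[3.0, 2.5], [1.0], [0.2, 0.1]]  # k = 3 toy levels
```

With the identity ontology function the levels are perfectly separated, so the error is zero; reversing the ordering makes every pair count.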

The aim of this paper is to provide a new ontology learning algorithm in terms of weak ontology functions and to give its theoretical analysis in the multi-dividing setting.

## 3 New ontology learning algorithm and theoretical analysis

In this section, we first present our new ontology learning algorithm in the multi-dividing setting, based on weak ontology functions. Then the theoretical analysis of the proposed ontology algorithm is derived.

## 3.1 New ontology learning algorithm with weak ontology functions

Assume that the final ontology function can be built up from weak ontology functions, one produced in each round. The new ontology learning algorithm keeps a distribution Dt over V × V which is passed on round t to the weak ontology learner. In effect, it selects Dt to emphasize different parts of the ontology training samples: a large weight assigned to a pair of ontology vertices implies that it is very important for the weak ontology functions to order that pair correctly.

We assume that the weak ontology functions have the form ft : V → R and provide order information in the same fashion as the final ontology function. In the normal ontology setting (S = {v1, · · · , vn}, not divided into k parts), the procedure can be stated as follows:

Step 1: Given an initial distribution D over V × V, set D1 = D;

Step 2: For t = 1, · · · , T, do the following: train a weak ontology learner by means of distribution Dt; obtain a weak ontology function ft : V → R; select a parameter αt ∈ R; calculate

$D_{t+1}(v_1, v_2) = \frac{D_t(v_1, v_2)\, e^{\alpha_t (f_t(v_1) - f_t(v_2))}}{Z_t},$

where Zt is the normalization factor, chosen so that Dt+1 is again a distribution;

Step 3: Return the final ontology function as the combination of weak ontology functions:

$f(v) = \sum_{t=1}^{T} \alpha_t f_t(v).$

In Step 2 above, one problem is how to choose the parameter αt. One method is to minimize Zt, i.e.,

$\alpha_t = \arg\min_{\alpha \in \mathbb{R}} Z_t = \arg\min_{\alpha \in \mathbb{R}} \sum_{v_1, v_2} D_t(v_1, v_2)\, e^{\alpha (f_t(v_1) - f_t(v_2))}.$
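Steps 1–3 can be sketched as a small boosting loop. This is a minimal sketch under two simplifying assumptions of ours, not the authors' implementation: the weak-learner training step is replaced by picking the best member of a fixed candidate pool, and αt is chosen from a small grid rather than by exact minimization of Zt. Following the sign convention of the update rule, a pair (v1, v2) here is one in which v2 should end up with the larger f-value.

```python
import math

def boost_ontology_function(pairs, weak_pool, T, alpha_grid=(-1.0, -0.5, 0.5, 1.0)):
    """Sketch of Steps 1-3: maintain a distribution D_t over vertex pairs,
    pick (f_t, alpha_t) minimizing the normalizer Z_t, reweight, and return
    the combination defining f(v) = sum_t alpha_t f_t(v).

    pairs: list of (v1, v2); by the sign convention of the update rule,
           v2 is the vertex that should receive the larger f-value.
    weak_pool: candidate weak ontology functions V -> R (a hypothetical
               stand-in for a trained weak learner)."""
    D = {p: 1.0 / len(pairs) for p in pairs}          # Step 1: D_1 uniform
    combination = []
    for _ in range(T):                                # Step 2
        best = None
        for ft in weak_pool:
            for alpha in alpha_grid:
                Z = sum(w * math.exp(alpha * (ft(v1) - ft(v2)))
                        for (v1, v2), w in D.items())
                if best is None or Z < best[0]:
                    best = (Z, alpha, ft)
        Z, alpha, ft = best
        combination.append((alpha, ft))
        # D_{t+1}(v1, v2) = D_t(v1, v2) e^{alpha_t (f_t(v1) - f_t(v2))} / Z_t
        D = {(v1, v2): w * math.exp(alpha * (ft(v1) - ft(v2))) / Z
             for (v1, v2), w in D.items()}
    return combination

def final_f(combination, v):
    """Step 3: f(v) = sum_t alpha_t f_t(v)."""
    return sum(alpha * ft(v) for alpha, ft in combination)
```

Because αt may be negative, even a weak function that orders pairs backwards can contribute usefully after reweighting.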

In the multi-dividing ontology setting, the above algorithm can be rewritten as follows.

Initialize: For each pair (a, b) with 1 ≤ a < b ≤ k and v ∈ S^a ∪ S^b, set ${\rho }_{1}^{a,b}\left(v\right)=\frac{1}{{n}_{a}}$ if v ∈ S^a and ${\rho }_{1}^{a,b}\left(v\right)=\frac{1}{{n}_{b}}$ if v ∈ S^b.

For t = 1, · · · , T:

• train the weak ontology function in terms of Dt (if v1 ∈ S^a and v2 ∈ S^b, then ${\mathcal{D}}_{t}\left({v}_{1},{v}_{2}\right)={\rho }_{t}^{a,b}\left({v}_{1}\right){\rho }_{t}^{a,b}\left({v}_{2}\right)$) and obtain the weak ontology function ft : V → R;

• for each pair (a, b) with 1 ≤ a < b ≤ k, select ${\alpha }_{t}^{a,b}\in \mathbb{R}$ and update

$\rho_{t+1}^{a,b}(v) = \frac{\rho_t^{a,b}(v)\, e^{-\alpha_t^{a,b} f_t(v)}}{\sum_{v' \in S^a} \rho_t^{a,b}(v')\, e^{-\alpha_t^{a,b} f_t(v')}}$

if v ∈ Sa and

$\rho_{t+1}^{a,b}(v) = \frac{\rho_t^{a,b}(v)\, e^{\alpha_t^{a,b} f_t(v)}}{\sum_{v' \in S^b} \rho_t^{a,b}(v')\, e^{\alpha_t^{a,b} f_t(v')}}$

if v ∈ Sb.

Select a balance parameter αt for each weak ontology function, and return the final ontology function $f\left(v\right)=\sum _{t=1}^{T}{\alpha }_{t}{f}_{t}\left(v\right).$
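One reweighting round of the multi-dividing update can be sketched directly. A minimal sketch, assuming the denominator over S^b carries the same sign in the exponent as its numerator, so that the weights over each level renormalize to one (the property the induction argument in the next subsection relies on); the toy level samples and weak function below are illustrative.

```python
import math

def update_rho(rho, S_a, S_b, alpha, ft):
    """One multi-dividing reweighting round for the level pair (a, b):
    vertices in S^a are reweighted by e^{-alpha f_t(v)}, vertices in S^b by
    e^{+alpha f_t(v)}, and each level is renormalized separately so the
    weights over S^a and over S^b each sum to one."""
    za = sum(rho[v] * math.exp(-alpha * ft(v)) for v in S_a)
    zb = sum(rho[v] * math.exp(alpha * ft(v)) for v in S_b)
    new_rho = {}
    for v in S_a:
        new_rho[v] = rho[v] * math.exp(-alpha * ft(v)) / za
    for v in S_b:
        new_rho[v] = rho[v] * math.exp(alpha * ft(v)) / zb
    return new_rho

# rho_1 assigns 1/n_a on S^a and 1/n_b on S^b, as in the initialization step.
S_a, S_b = ["a1", "a2"], ["b1"]
rho1 = {"a1": 0.5, "a2": 0.5, "b1": 1.0}
ft = lambda v: {"a1": 1.0, "a2": 0.0, "b1": 0.5}[v]
rho2 = update_rho(rho1, S_a, S_b, 0.5, ft)
```

Vertex a1, which ft already places high as a level-a vertex should be, loses weight relative to a2, so the next round concentrates on the harder vertex.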

## 3.2 Theoretical results

To ensure that the calculations in each iteration are valid, it must be confirmed that ${\mathcal{D}}_{t}\left({v}_{1},{v}_{2}\right)={\rho }_{t}^{a,b}\left({v}_{1}\right){\rho }_{t}^{a,b}\left({v}_{2}\right)$ holds at each step (here v1 ∈ S^a and v2 ∈ S^b). We show this by mathematical induction: assume it holds for round t; then for round t + 1, according to the update rules, we have

$D_{t+1}(v_1, v_2) = \frac{\rho_t^{a,b}(v_1)\, e^{-\alpha_t^{a,b} f_t(v_1)}}{\sum_{v' \in S^a} \rho_t^{a,b}(v')\, e^{-\alpha_t^{a,b} f_t(v')}} \times \frac{\rho_t^{a,b}(v_2)\, e^{\alpha_t^{a,b} f_t(v_2)}}{\sum_{v' \in S^b} \rho_t^{a,b}(v')\, e^{\alpha_t^{a,b} f_t(v')}} = \rho_{t+1}^{a,b}(v_1)\, \rho_{t+1}^{a,b}(v_2).$

Note that our final ontology function has the form $f\left(v\right)=\sum _{t=1}^{T}{\alpha }_{t}{f}_{t}\left(v\right),$ and we can define Θ : V × V → {−1, 0, 1} as

$\Theta(v_1, v_2) = \mathrm{sign}\left(\sum_{t=1}^{T} \alpha_t f_t(v_1) - \sum_{t=1}^{T} \alpha_t f_t(v_2)\right).$

That is to say, if $\sum _{t=1}^{T}{\alpha }_{t}{f}_{t}\left({v}_{1}\right)-\sum _{t=1}^{T}{\alpha }_{t}{f}_{t}\left({v}_{2}\right)>0$ then Θ(v1, v2) = 1; if $\sum _{t=1}^{T}{\alpha }_{t}{f}_{t}\left({v}_{1}\right)=\sum _{t=1}^{T}{\alpha }_{t}{f}_{t}\left({v}_{2}\right),$ then Θ(v1, v2) = 0; and if $\sum _{t=1}^{T}{\alpha }_{t}{f}_{t}\left({v}_{1}\right)-\sum _{t=1}^{T}{\alpha }_{t}{f}_{t}\left({v}_{2}\right)<0$ then Θ(v1, v2) = −1. For each pair (a, b) with 1 ≤ a < b ≤ k, if Θ(v^a, v^b) ≠ 1 where v^a ∈ S^a and v^b ∈ S^b, then an error occurs under the multi-dividing rule. Thus, the generalization error (expected risk) of Θ in the multi-dividing ontology setting is denoted by

$\Delta(\Theta) = \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} P_{v \sim D_a,\, v' \sim D_b}\{\Theta(v, v') \ne 1\} = \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{D_a, D_b}\, \mathrm{I}(\Theta(v, v') \ne 1),$

where I is the indicator function, i.e., I(x) = 1 if x is true and I(x) = 0 otherwise. Given the ontology training set S = (S1, S2, · · · , Sk) ∈ V^{n_1} × V^{n_2} × · · · × V^{n_k} which consists of a sequence of ontology training samples ${S}_{a}=\left({v}_{1}^{a},\cdots ,{v}_{{n}_{a}}^{a}\right)\in {V}^{{n}_{a}}\phantom{\rule{thinmathspace}{0ex}}\left(1\le a\le k\right),$ the expected empirical error of Θ can be denoted by

$\widehat{\Delta}(\Theta) = \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_a n_b} \sum_{i=1}^{n_a} \sum_{j=1}^{n_b} \mathrm{I}(\Theta(v_i^a, v_j^b) \ne 1).$
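The definitions of Θ and its empirical error translate directly into code. An illustrative sketch, with f given as a plain Python function and toy level samples of our own:

```python
def theta(f, v1, v2):
    """Theta(v1, v2) = sign(f(v1) - f(v2)), taking values in {-1, 0, 1}."""
    d = f(v1) - f(v2)
    return (d > 0) - (d < 0)

def empirical_error(f, S):
    """Empirical error of Theta: for each level pair a < b, the fraction of
    pairs in S_a x S_b with Theta(v^a, v^b) != 1, summed over level pairs."""
    total = 0.0
    k = len(S)
    for a in range(k - 1):
        for b in range(a + 1, k):
            bad = sum(theta(f, va, vb) != 1 for va in S[a] for vb in S[b])
            total += bad / (len(S[a]) * len(S[b]))
    return total
```

Ties count as errors here, matching the convention that only Θ = 1 is a correct ordering.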

The results presented in this paper aim to show that the difference between $\stackrel{^}{\mathrm{\Delta }}\left(\mathrm{\Theta }\right)$ and $\mathrm{\Delta }\left(\mathrm{\Theta }\right)$ is small with high probability. Setting Γ as the function space of the functions Θ, we have the following theorem.

Theorem 1 Suppose all the weak ontology functions belong to a function space with finite VC dimension K, and let the ontology functions f (the weighted combinations of the weak ontology functions) belong to the function space F. Let S = (S1, S2, · · · , Sk) ∈ V^{n_1} × V^{n_2} × · · · × V^{n_k} be the ontology training set, consisting of a sequence of ontology training samples ${S}^{a}=\left({v}_{1}^{a},\cdots ,{v}_{{n}_{a}}^{a}\right)\in {V}^{{n}_{a}}$ with S^a ∼ D_a (1 ≤ a ≤ k). Then with probability at least 1 − δ (0 < δ < 1), the following inequality holds for any f ∈ F:

$|er(f) - \widehat{er}(f)| \le 2 \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \left\{ \sqrt{\frac{K'\left(\log\frac{2 n_a}{K'} + 1\right) + \log\frac{18}{\delta}}{n_a}} + \sqrt{\frac{K'\left(\log\frac{2 n_b}{K'} + 1\right) + \log\frac{18}{\delta}}{n_b}} \right\},$

where $K' = 2(K+1)(T+1)\log_2(e(T+1))$, T is the number of weak ontology functions in the ontology algorithm, and e is the base of the natural logarithm.
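To get a feel for the capacity term, K′ can be evaluated directly; the sample values of K and T below are arbitrary illustrations, not taken from the paper.

```python
import math

def k_prime(K, T):
    """Capacity term K' = 2 (K + 1)(T + 1) log2(e (T + 1)) from Theorem 1:
    K is the VC dimension of the weak ontology function space, and T is the
    number of weak ontology functions combined by the algorithm."""
    return 2 * (K + 1) * (T + 1) * math.log2(math.e * (T + 1))

# K' grows linearly in K and essentially as T log T in the number of rounds.
```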

Proof of Theorem 1 First, we show that for each pair (a, b) with 1 ≤ a < b ≤ k and each δ > 0, there is a small number ε satisfying

$P\left\{\exists \Theta \in \Gamma : \left|\sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_a n_b} \sum_{i=1}^{n_a} \sum_{j=1}^{n_b} \mathrm{I}(\Theta(v_i^a, v_j^b) \ne 1) - \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{v^a, v^b}\, \mathrm{I}(\Theta(v^a, v^b) \ne 1)\right| > \varepsilon\right\} \le \delta,$

where the value of ε will be determined later.

Define Ξ : V × V → {0, 1} as $\mathrm{\Xi }\left({v}^{a},{v}^{b}\right)=\mathrm{I}\left(\mathrm{\Theta }\left({v}^{a},{v}^{b}\right)\ne 1\right)$. Clearly, Ξ indicates whether Θ makes a mistake on the ontology vertex pair (v^a, v^b), for v^a ∈ S^a and v^b ∈ S^b, according to the multi-dividing rule. We infer

$\sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_a n_b} \sum_{i=1}^{n_a} \sum_{j=1}^{n_b} \mathrm{I}(\Theta(v_i^a, v_j^b) \ne 1) - \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{v^a, v^b}\, \mathrm{I}(\Theta(v^a, v^b) \ne 1)$ $= \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \left\{ \frac{1}{n_a n_b} \sum_{i=1}^{n_a} \sum_{j=1}^{n_b} \Xi(v_i^a, v_j^b) - \frac{1}{n_a} \sum_{i=1}^{n_a} \mathbb{E}_{v^b}\{\Xi(v_i^a, v^b)\} + \frac{1}{n_a} \sum_{i=1}^{n_a} \mathbb{E}_{v^b}\{\Xi(v_i^a, v^b)\} - \mathbb{E}_{v^a, v^b}\{\Xi(v^a, v^b)\} \right\}$ $= \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \left\{ \frac{1}{n_a} \sum_{i=1}^{n_a} \left( \frac{1}{n_b} \sum_{j=1}^{n_b} \Xi(v_i^a, v_j^b) - \mathbb{E}_{v^b}\,\Xi(v_i^a, v^b) \right) + \mathbb{E}_{v^b}\left\{ \frac{1}{n_a} \sum_{i=1}^{n_a} \Xi(v_i^a, v^b) - \mathbb{E}_{v^a}\,\Xi(v^a, v^b) \right\} \right\}.$

Obviously, it is enough to show that there exist ε1 and ε2 with ε1 + ε2 = ε such that (∃v^a ∈ V^a in each pair (a, b) for (1), and ∃v^b ∈ V^b in each pair (a, b) for (2))

$P\left\{\exists \Xi \in \Upsilon : \left|\sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_b} \sum_{j=1}^{n_b} \Xi(v^a, v_j^b) - \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{v^b}\,\Xi(v^a, v^b)\right| \ge \varepsilon_1\right\} \le \frac{\delta}{2},$(1)

and

$P\left\{\exists \Xi \in \Upsilon : \left|\sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_a} \sum_{i=1}^{n_a} \Xi(v_i^a, v^b) - \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{v^a}\,\Xi(v^a, v^b)\right| \ge \varepsilon_2\right\} \le \frac{\delta}{2}$(2)

respectively, where Υ is the function space of Ξ.

Now we only prove (2) in light of standard results; (1) can be obtained in the same fashion. Let Υ_{v^b} be the set of all such functions Ξ for a given v^b; then the selection of Ξ in (2) is from the function space ∪_{v^b} Υ_{v^b}. A theorem of Vapnik [32] provides a selection of ε2 in (2) relying on the size n_a of S^a for each pair (a, b), the complexity K′ of ∪_{v^b} Υ_{v^b} (measured by its VC dimension), and the probability δ. Specifically, for any δ > 0, set

$\varepsilon_3 = 2 \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \sqrt{\frac{K'\left(\log\frac{2 n_a}{K'} + 1\right) + \log\frac{18}{\delta}}{n_a}};$

we have

$P\left\{\exists \Xi \in \cup_{v^b} \Upsilon_{v^b} : \left|\sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_a} \sum_{i=1}^{n_a} \Xi(v_i^a, v^b) - \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{v^a}\,\Xi(v^a, v^b)\right| \ge \varepsilon_3\right\} \le \delta.$

Next, we need to determine the VC dimension K′ of ∪_{v^b} Υ_{v^b}. For a given v^b ∈ V^b, we obtain

$\Xi(v^a, v^b) = \mathrm{I}(\Theta(v^a, v^b) \ne 1) = \mathrm{I}\left(\mathrm{sign}\left(\sum_{t=1}^{T} \alpha_t f_t(v^a) - \sum_{t=1}^{T} \alpha_t f_t(v^b)\right) \ne 1\right) = \mathrm{I}\left(\sum_{t=1}^{T} \alpha_t f_t(v^a) - \sum_{t=1}^{T} \alpha_t f_t(v^b) \le 0\right) = \mathrm{I}\left(\sum_{t=1}^{T} \alpha_t f_t(v^a) - c \le 0\right),$

where $c=\sum _{t=1}^{T}{\alpha }_{t}{f}_{t}\left({v}^{b}\right)$ is a constant since v^b is given. This reveals that the functions in the space ∪_{v^b} Υ_{v^b} form a subset of all possible thresholdings of linear combinations of the T weak ontology functions. Using the standard result on the VC dimension of such combinations, we obtain that if the weak ontology function space has VC dimension K at least two, then K′ cannot exceed 2(K + 1)(T + 1) log₂(e(T + 1)).

Therefore, we get the desired conclusion.

According to Theorem 1 above, the generalization bound converges to zero at a rate of $O\left(\sum _{a=1}^{k-1}\sum _{b=a+1}^{k}\max\left\{\sqrt{\frac{\mathrm{log}\,{n}_{a}}{{n}_{a}}},\sqrt{\frac{\mathrm{log}\,{n}_{b}}{{n}_{b}}}\right\}\right).$

For each pair (a, b) with 1 ≤ a < b ≤ k, the shatter coefficient is denoted by r_{a,b}(F, n_a, n_b) (see Gao and Wang [33] for more details). We then deduce the following result.

Theorem 2 Let F be a real-valued ontology function space on V. Then with probability at least 1 − δ (0 < δ < 1), for any f ∈ F, we have

$|er(f) - \widehat{er}(f)| \le \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \sqrt{\frac{8 (n_a + n_b)\left(\log\frac{4}{\delta} + \log r_{a,b}(F, 2 n_a, 2 n_b)\right)}{n_a n_b}}.$

Theorem 2 implies that if the ontology function is a linear function in a one-dimensional function space, then for each pair (a, b) with 1 ≤ a < b ≤ k, the r_{a,b}(F, n_a, n_b) are constants, regardless of the values of n_a and n_b, and thus the bound converges to zero at a rate of $O\left(\sum _{a=1}^{k-1}\sum _{b=a+1}^{k}\max\left\{\frac{1}{\sqrt{{n}_{a}}},\frac{1}{\sqrt{{n}_{b}}}\right\}\right).$ This reveals that the bound obtained in Theorem 2 is sharper than that of Theorem 1. However, if the ontology function is a linear function in a d-dimensional function space (d ≥ 2), then r_{a,b}(F, n_a, n_b) is of order O((n_a n_b)^d), and in this case the bound in Theorem 2 has a convergence rate depending on the VC dimension, i.e., still $O\left(\sum _{a=1}^{k-1}\sum _{b=a+1}^{k}\max\left\{\sqrt{\frac{\mathrm{log}\,{n}_{a}}{{n}_{a}}},\sqrt{\frac{\mathrm{log}\,{n}_{b}}{{n}_{b}}}\right\}\right).$

## 4 Conclusion

Multi-dividing ontology learning algorithms have proved effective in biological science, plant science, robot structure analysis, etc. It is therefore necessary to give a deep theoretical analysis of this kind of algorithm. In this paper, we present a new ontology learning algorithm based on weak ontology functions and discuss the generalization bound in this special setting. The obtained ontology algorithm and theoretical conclusions have potential engineering uses in various fields.

## Acknowledgement

We thank the reviewers for their constructive comments in improving the quality of this paper. This work was supported in part by the National Natural Science Foundation of China (51574232), the Open Research Fund by Jiangsu Key Laboratory of Recycling and Reuse Technology for Mechanical and Electronic Products (RRME-KF1612), the Industry-Academia Cooperation Innovation Fund Project of Jiangsu Province (BY2016030-06) and Six Talent Peaks Project in Jiangsu Province (2016-XYDXXJS-020).

## References

• [1]

Biletskiy Y., Brown J.A., Ranganathan G.R., Bagheri E., Akbari I., Building a business domain meta-ontology for information preprocessing, Inform. Process. Lett., 2018, 138, 81–88.

• [2]

Benedikt M., Grau B.C., Kostylev E.V., Logical foundations of information disclosure in ontology-based data integration, Artif. Int., 2018, 262, 52–95.

• [3]

Rajbabu K., Srinivas H., Sudha S., Industrial information extraction through multi-phase classification using ontology for unstructured documents, Comput. Ind., 2018, 2018, 137–147.

• [4]

Vidal J.C., Rabelo T., Lama M., Amorim R., Ontology-based approach for the validation and conformance testing of xAPI events, Know. Based Syst., 2018, 155, 22–34.

• [5]

Annane A., Bellahsene Z., Azouaou F., Jonquet C., Building an effective and efficient background knowledge resource to enhance ontology matching, J. Web Semant., 2018, 51, 51–68.

• [6]

Adhikari A., Dutta B., Dutta A., Mondal D., Singh S., An intrinsic information content-based semantic similarity measure considering the disjoint common subsumers of concepts of an ontology, J. Assoc. Inf. Syst. Tech., 2018, 69, 1023–1034.

• [7]

Mili H., Valtchev P., Szathmary L., Boubaker A., Leshob A., Charif Y., Martin L., Ontology-based model-driven development of a destination management portal: Experience and lessons learned, Software Pract. Exper., 2018, 48, 1438–1460.

• [8]

Ferreira W., Baldassarre M.T., Soares S., Codex: A metamodel ontology to guide the execution of coding experiments, Comput. Stand. Inter., 2018, 59, 35–44.

• [9]

Bayoudhi L., Sassi N., Jaziri W., How to repair inconsistency in OWL 2 DL ontology versions?, Data Knowl. Eng., 2018, 116, 138–158.

• [10]

Derguech W., Bhiri S., Curry E., Using ontologies for business capability modelling: describing what services and processes achieve, Comput. J., 2018, 61, 1075–1098.

• [11]

Ding Y., Foo S., Ontology research and development. Part 2–a review of ontology mapping and evolving, J. Inform. Sci., 2002, 28, 375–388.

• [12]

Kalfoglou Y., Schorlemmer M., Ontology mapping: the state of the art, Knowl. Eng. Rev., 2003, 18, 1–31.

• [13]

Currie R.A., Bombail V., Oliver J.D., Moore D.J., Lim F.L., Gwilliam V., Kimber I., Chipman K., Moggs J.G., Orphanides G., Gene ontology mapping as an unbiased method for identifying molecular pathways and processes affected by toxicant exposure: Application to acute effects caused by the rodent non-genotoxic carcinogen diethylhexylphthalate, Toxicol. Sci., 2005, 86, 453–469.

• [14]

Wong A.K.Y., Ray P., Parameswaran N., Strassner J., Ontology mapping for the interoperability problem in network management, IEEE J. Sel. Area. Comm., 2005, 23, 2058–2068.

• [15]

Qazvinian V., Abolhassani H., Haeri S.H., Hariri B.B., Evolutionary coincidence-based ontology mapping extraction, Expert Syst., 2008, 25, 221–236.

• [16]

Nagy M., Vargas-Vera M., Multiagent ontology mapping framework for the semantic web, IEEE T. Syst. Man. Cy. A, 2011, 41, 693–704.

• [17]

Lukasiewicz T., Predoiu L., Stuckenschmidt H., Tightly integrated probabilistic description logic programs for representing ontology mappings, Ann. Math. Artif. Intel., 2011, 63, 385–425.

• [18]

Arch-int N., Arch-int S., Semantic ontology mapping for interoperability of learning resource systems using a rule-based reasoning approach, Expert Syst. Appl., 2013, 40, 7428–7443.

• [19]

Forsati R., Shamsfard M., Symbiosis of evolutionary and combinatorial ontology mapping approaches, Inform. Sciences, 2016, 342, 53–80.

• [20]

Sicilia A., Nemirovski G., Nolle A., Map-On: A web-based editor for visual ontology mapping, Semantic Web, 2017, 8, 969–980.

• [21]

Gao W., Zhu L.L., Gradient learning algorithms for ontology computing, Comput. Intell. Neurosci., 2014, http://dx.doi.org/10.1155/2014/438291

• [22]

Gao W., Zhu L.L., Guo Y.,Wang K.Y., Ontology learning algorithm for similarity measuring and ontology mapping using linear programming, J. Intell. Fuzzy Syst., 2017, 33, 3153–3163.

• [23]

Gao W., Zhu L.L., Wang K.Y., Ontology sparse vector learning algorithm for ontology similarity measuring and ontology mapping via ADAL technology, Int. J. Bifurcat. Chaos, 2015, 25, DOI: 10.1142/S0218127415400349.

• [24]

Gao W., Farahani M.R., Aslam A., Hosamani S., Distance learning techniques for ontology similarity measuring and ontology mapping, Cluster Comput., 2017, 20, 959–968.

• [25]

Gao W., Baig A.Q., Ali H., Sajjad W., Farahani M.R., Margin based ontology sparse vector learning algorithm and applied in biology science, Saudi J. Biol. Sci., 2017, 24, 132–138.

• [26]

Gao W., Zhu L.L., Wang K.Y., Ranking based ontology scheming using eigenpair computation, J. Intell. Fuzzy Syst., 2016, 4, 2411–2419.

• [27]

Gao W., Guo Y., Wang K.Y., Ontology algorithm using singular value decomposition and applied in multidisciplinary, Cluster Comput., 2016, 19, 2201–2210.

• [28]

Gao W., Gao Y., Zhang Y.G., Strong and weak stability of k-partite ranking algorithms, Information, 2012, 15, 4585–4590.

• [29]

Gao W., Xu T.W., Stability analysis of learning algorithms for ontology similarity computation, Abstr. Appl. Anal., 2013, http://dx.doi.org/10.1155/2013/174802

• [30]

Gao W., Farahani M.R., Generalization bounds and uniform bounds for multi-dividing ontology algorithms with convex ontology loss function, Comput. J., 2017, 60, 1289–1299.

• [31]

Gao W., Guirao J.L.G., Basavanagoud B., Wu J.Z., Partial multi-dividing ontology learning algorithm, Inform. Sciences, 2018, 467, 35–58.

• [32]

Vapnik V.N., Estimation of Dependences Based on Empirical Data, Springer–Verlag, 1982.

• [33]

Gao W., Wang W.F., Analysis of k-partite ranking algorithm in area under the receiver operating characteristic curve criterion, Int. J. Comput. Math., 2018, 95, 1527–1547.

## About the article

Accepted: 2018-11-11

Published Online: 2018-12-31

Conflict of interest: The authors hereby declare that there is no conflict of interest regarding the publication of this paper.

Citation Information: Open Physics, Volume 16, Issue 1, Pages 910–916, ISSN (Online) 2391-5471.
