# Open Mathematics

### formerly Central European Journal of Mathematics

Editor-in-Chief: Gianazza, Ugo / Vespri, Vincenzo

1 Issue per year

IMPACT FACTOR 2016 (Open Mathematics): 0.682
IMPACT FACTOR 2016 (Central European Journal of Mathematics): 0.489

CiteScore 2016: 0.62

SCImago Journal Rank (SJR) 2015: 0.521
Source Normalized Impact per Paper (SNIP) 2015: 1.233

Mathematical Citation Quotient (MCQ) 2015: 0.39

Open Access
Online
ISSN
2391-5455

# An incremental approach to obtaining attribute reduction for dynamic decision systems

Liu Wenjun
• College of Mathematics and Econometrics, Hunan University, Changsha, Hunan 410004, China and Changsha University of Science and Technology, Changsha, Hunan 410004, China
Published Online: 2016-11-27 | DOI: https://doi.org/10.1515/math-2016-0077

## Abstract

In the 1960s, Professor Hu Guoding proposed a method of measuring information based on the idea that the connotation and denotation of a concept satisfy an inverse ratio rule. Based on this information measure, we first put forward the information quantity for information systems and decision systems; we then discuss the updating mechanism of the information quantity for decision systems; finally, we give an attribute reduction algorithm for decision tables with dynamically varying attribute values.

MSC 2010: 03E99

## 1 Introduction

In recent years, increasing data volumes have created a major challenge. The prevalence of continuously collected data has led to growing interest in the field of data streams. For example, Internet traffic generates large streams that cannot even be stored effectively unless significant resources are spent on storage. As data sets change with time, it is very time-consuming or even infeasible to run a knowledge acquisition algorithm repeatedly. To overcome this deficiency, researchers have recently proposed many new analytic techniques. These techniques mainly address knowledge updating from three aspects: the expansion of data [1–7], the increasing number of attributes [8–11] and the variation of data values [12, 13]. For the first two aspects, a number of incremental techniques have been developed to acquire new knowledge without recomputation. However, little research has been done on the third aspect of knowledge acquisition, which motivates this study. This paper concerns attribute reduction for data sets with dynamically varying data values.

Feature selection, a common technique for data preprocessing in many areas including machine learning, pattern recognition and data mining, holds great significance. Among the various approaches to selecting useful features, a special theoretical framework is Pawlak's rough set model [14, 15]. One can use rough set theory to select a subset of features that is most suitable for a given recognition problem [16–21]. Rough feature selection is also called attribute reduction, which aims to select those features that keep the discernibility ability of the original ones [22–26]. The feature subset generated by an attribute reduction algorithm is called a reduct. In the last two decades, researchers have proposed many reduction algorithms [27–32]. However, most of these algorithms are only applicable to static data sets. In [33–40], several algorithms have been proposed for dynamic data sets. Here, we continue the research on attribute reduction algorithms for dynamic data sets.

The remainder of this paper is organized as follows. Some preliminaries about rough set theory are reviewed in Section 2. In Section 3, a new form of conditional information quantity for decision systems is introduced, and the properties of this information quantity are discussed. In Section 4, the updating mechanism of the information quantity for decision systems is investigated. Based on the conditional information quantity, an attribute reduction algorithm for decision systems with dynamically varying attribute values is constructed in Section 5.

## 2 Preliminaries

In this section, we first review some basic concepts in rough set theory; see also [14, 15]. Throughout this paper, the universe U is assumed to be a finite nonempty set.

In rough set theory, knowledge is regarded as the ability to classify objects. Suppose we are given a finite set U ≠ ϕ of objects we are interested in. Any subset X ⊆ U will be called a concept or a category in U, and any family of concepts in U will be referred to as abstract knowledge about U. We will be mainly interested in concepts which form a partition, and we often use equivalence relations instead of classifications, since these two notions are mutually interchangeable and relations are easier to deal with. Suppose R is an equivalence relation over U; then by U/R we mean the family of all equivalence classes of R, and $[x]_R$ denotes the equivalence class of R containing the element x ∈ U. With each subset X ⊆ U, we associate two subsets: $\underset{_}{R}X=\bigcup\{Y\in U/R\,|\,Y\subseteq X\},\;\overline{R}X=\bigcup\{Y\in U/R\,|\,Y\cap X\ne\varphi\}$ called the R-lower and R-upper approximations of X, respectively. When $\underset{_}{R}X=\overline{R}X,$ X is called R-definable; otherwise X is called R-undefinable.

An information system, as a basic concept in rough set theory, provides a convenient framework for the representation of objects in terms of their attribute values.

An information system is a quadruple I S = (U, A, V, f), where: U is a finite nonempty set of objects, called the universe; A is a nonempty finite set of attributes; V is the value domain of the attributes; f is an information function which assigns particular values from the domains of attributes to objects, such that ∀a ∈ A, x ∈ U, f(a, x) ∈ V, where f(a, x) denotes the value of attribute a on object x.

With every subset of attributes B ⊆ A, there is an associated equivalence relation ind(B) = {(x, y) ∈ U2 | ∀a ∈ B, f(a, x) = f(a, y)}. This equivalence relation ind(B) divides the universe U into a family of disjoint classes; the approximation space determined by the B-equivalence relation, denoted by πB, is defined as πB = {X | X ∈ U/ind(B)}, where X is called a B-equivalence block and depicts a collection of objects that are indiscernible from each other with respect to B.
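As an illustration, the partition πB can be computed by grouping objects on their attribute-value vectors. The sketch below uses a hypothetical toy information system; the object names, attribute names and values are invented for illustration:

```python
from collections import defaultdict

# Hypothetical toy information system (all names and values illustrative).
U = ["u1", "u2", "u3", "u4", "u5", "u6"]
f = {  # f[x][a] = value of attribute a on object x
    "u1": {"a1": 0, "a2": 1}, "u2": {"a1": 0, "a2": 1},
    "u3": {"a1": 1, "a2": 0}, "u4": {"a1": 1, "a2": 0},
    "u5": {"a1": 1, "a2": 1}, "u6": {"a1": 0, "a2": 0},
}

def partition(objects, attrs):
    """Group objects by their attribute-value vector on attrs,
    i.e. compute pi_B = U / ind(B)."""
    blocks = defaultdict(list)
    for x in objects:
        blocks[tuple(f[x][a] for a in attrs)].append(x)
    return [frozenset(b) for b in blocks.values()]

print(partition(U, ["a1", "a2"]))
```

Here objects u1 and u2 (and likewise u3 and u4) agree on both attributes, so they fall into the same B-equivalence block.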

One type of special information system is called a decision system, which is denoted as DS = (U, C ∪ {d},V,f), where d is the decision attribute, C is the conditional attribute set. The positive region of d with respect to C is defined as $PO{S}_{C}\left(d\right)=\bigcup _{X\in {\pi }_{d}}\underset{_}{C}X.\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}DS=\left(U,C\cup \left\{d\right\},V,f\right)$ is called a consistent decision system, if POSC(d) = U, else it is called an inconsistent decision system.

The consistent degree of a decision system DS = (U, C ∪ {d}, V, f) is defined as $\gamma =\frac{|PO{S}_{C}\left(d\right)|}{|U|}.$(1)

Obviously, a decision table is consistent if and only if its consistent degree γ is 1.
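For concreteness, the positive region POSC(d) and the consistent degree γ of Eq. (1) can be computed directly from the definitions. The toy decision table below is invented for illustration:

```python
from collections import defaultdict

# Hypothetical decision table: object -> (a1, a2, d), values illustrative.
rows = {
    "u1": (0, 1, "yes"), "u2": (0, 1, "yes"),
    "u3": (1, 0, "no"),  "u4": (1, 0, "yes"),  # u3, u4 conflict on d
    "u5": (1, 1, "no"),
}

def cond_blocks(rows):
    """Partition the objects by their condition-attribute values."""
    blocks = defaultdict(list)
    for x, (a1, a2, d) in rows.items():
        blocks[(a1, a2)].append(x)
    return list(blocks.values())

def positive_region(rows):
    """POS_C(d): union of condition blocks contained in one decision class."""
    pos = set()
    for block in cond_blocks(rows):
        decisions = {rows[x][-1] for x in block}
        if len(decisions) == 1:  # block lies in the C-lower approximation
            pos.update(block)
    return pos

pos = positive_region(rows)
gamma = len(pos) / len(rows)  # consistent degree, Eq. (1)
print(pos, gamma)  # POS_C(d) = {u1, u2, u5}, gamma = 0.6
```

Because u3 and u4 are indiscernible on the condition attributes but disagree on d, their block is excluded from the positive region and γ < 1, so the table is inconsistent.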

## 3 The information quantity for information systems and decision systems

In this section, we will use a new form of condition information quantity in decision system based on the equivalence relation. Some properties of the conditional information quantity will be given.

Definition 3.1: Given an information system I S = (U, A, V, f) and P, Q ⊆ A, πP = {P1, P2,..., Ps} is said to be finer than πQ = {Q1, Q2, ..., Qt} if for every Pi ∈ πP there exists Qj ∈ πQ such that Pi ⊆ Qj, denoted πP ≼ πQ. In this case, we also say that πQ is coarser than πP. If πP ≼ πQ and πP ≠ πQ, we say πP is strictly finer than πQ, denoted πP ≺ πQ. Obviously, if B ⊆ A, then πA ≼ πB.

Definition 3.2: Let I S = (U, A, V, f) be an information system, if πA = {X1, X2,..., Xn}, the information quantity of block Xi is defined as I(Xi) = p(Xi)(1-p(Xi)); the information quantity of πA is defined as $I\left({\pi }_{A}\right)=\sum _{i=1}^{n}p\left({X}_{i}\right)\left(1-p\left({X}_{i}\right)\right),\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}where\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}p\left({X}_{i}\right)=\frac{|{X}_{i}|}{|U|},i=1,2,\cdots ,n.$
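A minimal sketch of Definition 3.2, using an invented partition of a six-element universe:

```python
def info_quantity(partition, n_objects):
    """I(pi_A) = sum p(X_i)(1 - p(X_i)), a Gini-style measure."""
    return sum((len(X) / n_objects) * (1 - len(X) / n_objects)
               for X in partition)

# Illustrative partition of a 6-element universe into blocks of sizes 2, 2, 1, 1.
pi_A = [{"u1", "u2"}, {"u3", "u4"}, {"u5"}, {"u6"}]
print(info_quantity(pi_A, 6))  # 1 - (4+4+1+1)/36 = 13/18
```

A single-block partition {U} gives information quantity 0, while splitting blocks increases it, in line with Theorem 3.3 below.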

Theorem 3.3: Let IS =(U,A,V,f) be an information system, if πA = {X1, X2,..., Xn}, the information quantity of πA satisfies the following properties:

• (1)

$0\le I\left({\pi }_{A}\right)\le 1-\frac{1}{n}.$

• (2)

$I\left({\pi }_{A}\right)=1-\frac{1}{n}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}if\phantom{\rule{thinmathspace}{0ex}}and\phantom{\rule{thinmathspace}{0ex}}only\phantom{\rule{thinmathspace}{0ex}}if\phantom{\rule{thinmathspace}{0ex}}p\left({X}_{i}\right)=\frac{1}{n}\left(i=1,2,\cdots ,n\right).$

• (3)

I(πA) = 0 if and only if πA = {U}.

• (4)

For each Xi, Xj ∈ πA, I(Xi) + I(Xj) ≥ I(Xi ∪ Xj).

• (5)

If B ⊆ A, then I(πB) ≤ I(πA); that is, the finer the partition, the bigger its information quantity.

Proof: (1) $I(\pi_A)=\sum_{i=1}^{n}p(X_i)(1-p(X_i))=1-\sum_{i=1}^{n}p^2(X_i)$, where $\sum_{i=1}^{n}p(X_i)=1$. Now we discuss the extreme value of $\sum_{i=1}^{n}p^2(X_i)$ under the constraint $\sum_{i=1}^{n}p(X_i)=1$. Let $H(\lambda)=\sum_{i=1}^{n}p^2(X_i)+\lambda\left(\sum_{i=1}^{n}p(X_i)-1\right)$. Since
$$\frac{\partial H}{\partial \lambda}=\sum_{i=1}^{n}p(X_i)-1=0,\qquad \frac{\partial H}{\partial p(X_i)}=2p(X_i)+\lambda=0,$$
we have $p(X_i)=\frac{1}{n}$; that is, when $p(X_i)=\frac{1}{n}$, $\sum_{i=1}^{n}p^2(X_i)$ attains its minimum value $\frac{1}{n}$, so $I(\pi_A)$ attains its maximum value $1-\frac{1}{n}$; obviously, $I(\pi_A)\ge 0$. So (1) and (2) hold, and (3) is obvious. (4) Since $X_i\cap X_j=\varphi$, we have $p(X_i\cup X_j)=p(X_i)+p(X_j)$, so $I(X_i)+I(X_j)-I(X_i\cup X_j)=p(X_i)(1-p(X_i))+p(X_j)(1-p(X_j))-p(X_i\cup X_j)(1-p(X_i\cup X_j))=2p(X_i)p(X_j)\ge 0.$ (5) If B ⊆ A, then πA ≼ πB, so each equivalence class of πB is a union of one or more equivalence classes of πA. We can therefore obtain πB from πA by merging two equivalence classes at a time, and by (4), I(πA) ≥ I(πB).

Theorem 3.4: Let I S = (U, A, V, f) be an information system, if X, Y ⊆ U, then I(X) + I(Y) ≥ I(X ∪ Y).

Proof: Let Δ = I(X) + I(Y) - I(X ∪ Y). Then
$$\begin{aligned}\Delta &=p(X)[1-p(X)]+p(Y)[1-p(Y)]-p(X\cup Y)[1-p(X\cup Y)]\\&=p(X)+p(Y)-p(X\cup Y)+[p(X\cup Y)]^2-[p(X)]^2-[p(Y)]^2\\&=p(X\cap Y)+[p(X)+p(Y)-p(X\cap Y)]^2-[p(X)]^2-[p(Y)]^2\\&=p(X\cap Y)+2p(X)p(Y)-2p(X)p(X\cap Y)-2p(Y)p(X\cap Y)+[p(X\cap Y)]^2\\&=p(X\cap Y)[1-p(X\cup Y)]+p(X)[p(Y)-p(X\cap Y)]+p(Y)[p(X)-p(X\cap Y)].\end{aligned}$$
Since $0\le p(X)\le 1$, $0\le p(Y)\le 1$, $0\le p(X\cup Y)\le 1$, and $p(X)\ge p(X\cap Y)$, $p(Y)\ge p(X\cap Y)$, we get Δ ≥ 0, that is, I(X) + I(Y) ≥ I(X ∪ Y). This theorem shows that when two blocks are merged, the information quantity of the union does not exceed the sum of the information quantities of the parts.

Definition 3.5: Let DS = (U, C ∪ {d}, V, f), X ⊆ U, and πd = {Y1, Y2, ..., Yn}. The information quantity of block X with respect to πd is defined as $I\left({\pi }_{d}|X\right)=p\left(X\right)\sum _{j=1}^{n}p\left({Y}_{j}|X\right)\left(1-p\left({Y}_{j}|X\right)\right).$

Definition 3.6: Let DS = (U, C ∪ {d}, V, f), πC = {X1, X2, ..., Xm}, and πd = {Y1, Y2, ..., Yn}. The conditional information quantity of πC with respect to πd is defined as $I(\pi_d|\pi_C)=\sum_{i=1}^{m}p(X_i)\sum_{j=1}^{n}p(Y_j|X_i)(1-p(Y_j|X_i))$, where $p(X_i)=\frac{|X_i|}{|U|},\,i=1,2,\cdots,m$; $p(Y_j|X_i)=\frac{|Y_j\cap X_i|}{|X_i|},\,j=1,2,\cdots,n.$
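Definition 3.6 can be computed directly; the partitions below are illustrative, assuming a five-object universe:

```python
def cond_info_quantity(pi_C, pi_d, n):
    """I(pi_d | pi_C) per Definition 3.6."""
    total = 0.0
    for X in pi_C:
        pX = len(X) / n
        for Y in pi_d:
            pYX = len(X & Y) / len(X)
            total += pX * pYX * (1 - pYX)
    return total

# Illustrative condition and decision partitions of a 5-element universe.
pi_C = [{"u1", "u2"}, {"u3", "u4", "u5"}]
pi_d = [{"u1", "u2", "u3"}, {"u4", "u5"}]
print(cond_info_quantity(pi_C, pi_d, 5))  # 4/15
```

The first condition block lies entirely inside a decision class and contributes 0; only the second, inconsistent block contributes to the total.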

Theorem 3.7: Let DS = (U, C ∪ {d}, V, f), πC = {X1, X2, ..., Xm}, and πd = {Y1, Y2, ..., Yn}. The conditional information quantity of πC with respect to πd satisfies the following properties:

• (1)

$I\left({\pi }_{d}|{X}_{i}\right)+I\left({\pi }_{d}|{X}_{j}\right)\le I\left({\pi }_{d}|\left({X}_{i}\cup {X}_{j}\right)\right)\left(i,j\in \left\{1,2,\cdots ,m\right\}\right).$

• (2)

$I(\pi_d|X_i)+I(\pi_d|X_j)=I(\pi_d|(X_i\cup X_j))$ if and only if, for each $k\in\{1,2,\cdots,n\}$, $\frac{|Y_k\cap X_i|}{|X_i|}=\frac{|Y_k\cap X_j|}{|X_j|}$ $(i,j\in\{1,2,\cdots,m\})$.

• (3)

$0\le I\left({\pi }_{d}|{\pi }_{C}\right)\le I\left({\pi }_{d}\right).$

• (4)

$I\left({\pi }_{d}|{\pi }_{C}\right)=0\phantom{\rule{thinmathspace}{0ex}}if\phantom{\rule{thinmathspace}{0ex}}and\phantom{\rule{thinmathspace}{0ex}}only\phantom{\rule{thinmathspace}{0ex}}if\phantom{\rule{thinmathspace}{0ex}}{\pi }_{C}\preccurlyeq {\pi }_{d}.$

• (5)

If $\pi_C=\{U\}$, then $I(\pi_d|\pi_C)=I(\pi_d)$.

Proof: (1)
$$\begin{aligned}\Delta &=I(\pi_d|(X_i\cup X_j))-\left(I(\pi_d|X_i)+I(\pi_d|X_j)\right)\\&=p(X_i\cup X_j)\sum_{k=1}^{n}p(Y_k|X_i\cup X_j)(1-p(Y_k|X_i\cup X_j))-p(X_i)\sum_{k=1}^{n}p(Y_k|X_i)(1-p(Y_k|X_i))-p(X_j)\sum_{k=1}^{n}p(Y_k|X_j)(1-p(Y_k|X_j))\\&=\frac{1}{|U|}\sum_{k=1}^{n}\left(\frac{|Y_k\cap X_i|^2}{|X_i|}+\frac{|Y_k\cap X_j|^2}{|X_j|}-\frac{(|Y_k\cap X_i|+|Y_k\cap X_j|)^2}{|X_i\cup X_j|}\right).\end{aligned}$$
Let $|X_i|=x$, $|X_j|=y$, $|Y_k\cap X_i|=a$, $|Y_k\cap X_j|=b$, and write $f_k=\frac{|Y_k\cap X_i|^2}{|X_i|}+\frac{|Y_k\cap X_j|^2}{|X_j|}-\frac{(|Y_k\cap X_i|+|Y_k\cap X_j|)^2}{|X_i\cup X_j|}$. Then
$$f_k=\frac{a^2}{x}+\frac{b^2}{y}-\frac{(a+b)^2}{x+y}=\frac{a^2y(x+y)+b^2x(x+y)-(a+b)^2xy}{xy(x+y)}=\frac{a^2y^2+b^2x^2-2abxy}{xy(x+y)}=\frac{(ay-bx)^2}{xy(x+y)}\ge 0.$$
So $I(\pi_d|X_i)+I(\pi_d|X_j)\le I(\pi_d|(X_i\cup X_j))$. (2) According to the proof of (1), $I(\pi_d|X_i)+I(\pi_d|X_j)=I(\pi_d|(X_i\cup X_j))$ if and only if $ay=bx$ for every k, that is, if and only if for each $k\in\{1,2,\cdots,n\}$, $\frac{|Y_k\cap X_i|}{|X_i|}=\frac{|Y_k\cap X_j|}{|X_j|}$. (3) Obviously $I(\pi_d|\pi_C)\ge 0$. According to (1), merging two condition classes never decreases the conditional information quantity, so $I(\pi_d|\pi_C)\le I(\pi_d|\{U\})=I(\pi_d)$. (4) If $I(\pi_d|\pi_C)=0$, that is $\sum_{i=1}^{m}p(X_i)\sum_{j=1}^{n}p(Y_j|X_i)(1-p(Y_j|X_i))=0$, then for every $X_i\in\pi_C$ we have $p(X_i)\sum_{j=1}^{n}p(Y_j|X_i)(1-p(Y_j|X_i))=0$. Since $p(X_i)>0$, for each $Y_j\in\pi_d$ we get $p(Y_j|X_i)(1-p(Y_j|X_i))=0$, that is, $p(Y_j|X_i)=0$ or $p(Y_j|X_i)=1$; thus $X_i\cap Y_j=\varphi$ or $X_i\subseteq Y_j$, i.e., $\pi_C\preccurlyeq\pi_d$. The converse is obvious. (5) From the definition of $I(\pi_d|\pi_C)$, we can easily see that if $\pi_C=\{U\}$, then $I(\pi_d|\pi_C)=I(\pi_d)$. Note that the converse of (5) does not hold.

Example 3.8: Let DS = (U, C ∪ {d}, V, f) be a decision system with U = {u1, u2, ..., u9}, πd = {{u1, u4, u7}, {u2, u5, u8}, {u3, u6, u9}}, and πC = {{u1, u2, u3}, {u4, u5, u6}, {u7, u8, u9}}. A direct computation gives $I(\pi_d|\{U\})=I(\pi_d|\pi_C)=I(\pi_d)=\frac{2}{3}$, but $\pi_C\ne\{U\}$.
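The example can be checked numerically; the helper functions below simply implement Definitions 3.2 and 3.6:

```python
def I(partition, n):
    """I(pi) per Definition 3.2."""
    return sum((len(X) / n) * (1 - len(X) / n) for X in partition)

def I_cond(pi_C, pi_d, n):
    """I(pi_d | pi_C) per Definition 3.6."""
    return sum((len(X) / n)
               * sum((len(X & Y) / len(X)) * (1 - len(X & Y) / len(X))
                     for Y in pi_d)
               for X in pi_C)

U = {f"u{i}" for i in range(1, 10)}
pi_d = [{"u1", "u4", "u7"}, {"u2", "u5", "u8"}, {"u3", "u6", "u9"}]
pi_C = [{"u1", "u2", "u3"}, {"u4", "u5", "u6"}, {"u7", "u8", "u9"}]
print(I(pi_d, 9), I_cond(pi_C, pi_d, 9), I_cond([U], pi_d, 9))  # each 2/3
```

Every condition block meets every decision class in exactly one object, so each conditional distribution is uniform and all three quantities coincide at 2/3.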

Corollary 3.9: Let DS = (U, C ∪ {d}, V, f) be a decision system. If B1 ⊆ B2 ⊆ C, then $I(\pi_d|\pi_{B_2})\le I(\pi_d|\pi_{B_1}).$

Corollary 3.10: Let DS = (U, C ∪ {d},V, f) be a decision system, C1, C2C, if $I\left({\pi }_{d}|{\pi }_{{C}_{1}}\right)={k}_{1},I\left({\pi }_{d}|{\pi }_{{C}_{2}}\right)={k}_{2},\phantom{\rule{thinmathspace}{0ex}}then\phantom{\rule{thinmathspace}{0ex}}\left(1\right)\phantom{\rule{thinmathspace}{0ex}}I\left({\pi }_{d}|{\pi }_{\left({C}_{1}\cup {C}_{2}\right)}\right)\le min\left({k}_{1},{k}_{2}\right);\left(2\right)I\left({\pi }_{d}|{\pi }_{\left({C}_{1}\cap {C}_{2}\right)}\right)\ge max\left({k}_{1},{k}_{2}\right).$

Proof: Since ${\pi }_{\left({C}_{1}\cup {C}_{2}\right)}\preccurlyeq {\pi }_{{C}_{1}}\preccurlyeq {\pi }_{\left({C}_{1}\cap {C}_{2}\right)};{\pi }_{\left({C}_{1}\cup {C}_{2}\right)}\preccurlyeq {\pi }_{{C}_{2}}\preccurlyeq {\pi }_{\left({C}_{1}\cap {C}_{2}\right)},$ from Corollary 3.9, we have: $I\left({\pi }_{d}|{\pi }_{\left({C}_{1}\cup {C}_{2}\right)}\right)\le min\left({k}_{1},{k}_{2}\right);I\left({\pi }_{d}|{\pi }_{\left({C}_{1}\cap {C}_{2}\right)}\right)\ge max\left({k}_{1},{k}_{2}\right)$

Definition 3.11: Let DS = (U, C ∪ {d}, V, f). If X ∈ πC and |λ(X)| > 1, then X is said to be an inconsistent equivalence block; otherwise, it is said to be a consistent equivalence block, where $\lambda \left(X\right)=\left\{f\left(u,d\right)|u\in X\right\}$ and $|\lambda \left(X\right)|$ is the cardinality of λ(X). An inconsistent equivalence block describes a group of C-indistinguishable objects that have a divergence in their decision-making, while a consistent equivalence block depicts a collection of C-definable objects that share the same decision-making.

Definition 3.12: Let DS = (U, C ∪ {d},V, f), and X ∈ πC, the inconsistent and consistent block families of πC are denoted by ${\pi }_{C}^{inc}=\left\{X\in {\pi }_{C}||\lambda \left(X\right)|>1\right\},{\pi }_{C}^{con}=\left\{X\in {\pi }_{C}||\lambda \left(X\right)|=1\right\}$ respectively.The inconsistent block family collects all of the inconsistent equivalence blocks from πC, whereas the consistent block family gathers all of the consistent equivalence blocks from πC. It is evident that ${\pi }_{C}^{inc}\cup {\pi }_{C}^{con}={\pi }_{C}\phantom{\rule{thinmathspace}{0ex}}\text{and}\phantom{\rule{thinmathspace}{0ex}}{\pi }_{C}^{inc}\cap {\pi }_{C}^{con}=\varphi .$

Theorem 3.13: Let $DS=(U,C\cup\{d\},V,f)$, $\pi_C=\{X_1,X_2,\cdots,X_m\}$, $\pi_d=\{Y_1,Y_2,\cdots,Y_n\}$. Then $X_i\in\pi_C$ is a consistent block if and only if the information quantity of block $X_i$ with respect to $\pi_d$ is zero, that is, $X_i\in\pi_C^{con}\;iff\;I(\pi_d|X_i)=p(X_i)\sum_{j=1}^{n}p(Y_j|X_i)(1-p(Y_j|X_i))=0.$

Proof: $X_i\in\pi_C$ is a consistent block iff there exists $Y_j\in\pi_d$ such that $X_i\subseteq Y_j$ and, for each $Y_k\in\pi_d\,(k\ne j)$, $X_i\cap Y_k=\varphi$. Whereas $X_i\subseteq Y_j$ iff $p(Y_j|X_i)=1$, and $X_i\cap Y_k=\varphi$ iff $p(Y_k|X_i)=0$. So $X_i\in\pi_C^{con}$ iff $I(\pi_d|X_i)=p(X_i)\sum_{j=1}^{n}p(Y_j|X_i)(1-p(Y_j|X_i))=0.$

Corollary 3.14: Let $DS=\left(U,C\cup \left\{d\right\},V,f\right),\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}{\pi }_{C}=\left\{{X}_{1},{X}_{2},\cdots ,{X}_{m}\right\},\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}{\pi }_{d}=\left\{{Y}_{1},{Y}_{2},\cdots ,{Y}_{n}\right\},$ then $I\left({\pi }_{d}|{\pi }_{C}\right)=\sum _{{X}_{i}\in {\pi }_{C}^{inc}}p\left({X}_{i}\right)\sum _{j=1}^{n}p\left({Y}_{j}|{X}_{i}\right)\left(1-p\left({Y}_{j}|{X}_{i}\right)\right).$

Corollary 3.15: Let $DS=\left(U,C\cup \left\{d\right\},V,f\right),\phantom{\rule{thinmathspace}{0ex}}{\pi }_{C}=\left\{{X}_{1},{X}_{2},\cdots ,{X}_{m}\right\},\phantom{\rule{thinmathspace}{0ex}}{\pi }_{d}=\left\{{Y}_{1},{Y}_{2},\cdots ,{Y}_{n}\right\}.$ Then DS is a consistent decision system iff $I\left({\pi }_{d}|{\pi }_{C}\right)=0.$

Corollary 3.16: In a decision system DS = (U, C ∪ {d}, V, f), a ∈ C is called d-dispensable if $I(\pi_d|\pi_{(C-\{a\})})=I(\pi_d|\pi_C).$

Corollary 3.17: Let DS = (U, C ∪ {d}, V, f) be a consistent decision system, aC is d-dispensable $iff\phantom{\rule{thinmathspace}{0ex}}\mathrm{\forall }X\in {\pi }_{C-\left\{a\right\}},I\left({\pi }_{d}|X\right)=0.$

Corollary 3.18: Let DS = (U, C ∪ {d}, V, f) be a consistent decision system. The condition attribute set C is indispensable with respect to d iff $\forall a\in C,\exists X\in\pi_{C-\{a\}},I(\pi_d|X)\ne 0.$

Definition 3.19: Let DS = (U, C ∪ {d}, V, f) be a decision system, B ⊆ C, ∀ a ∈ B. The significance measure (inner measure) of a in B is defined as $Sig^{inner}(a,B,D)=I(\pi_d|\pi_{(B-\{a\})})-I(\pi_d|\pi_B).$

Definition 3.20: Let DS = (U, C ∪ {d}, V, f) be a decision system, B⊆C, ∀ aC-B, the significance measure (outer measure) of a in B is defined as $Si{g}^{outer}\left(a,B,D\right)=I\left({\pi }_{d}|{\pi }_{B}\right)-I\left({\pi }_{d}|{\pi }_{\left(B\cup \left\{a\right\}\right)}\right).$

Theorem 3.21: In a decision system DS = (U, C ∪ {d}, V, f), if a ∈ C is d-dispensable, then $POS_{C-\{a\}}(d)=POS_C(d).$

Proof: On the one hand, according to Theorem 3.7, if we combine two condition classes of a decision table, the conditional information quantity increases monotonically, and it stays unchanged only if the two condition classes $X_i$ and $X_j$ satisfy $\frac{|Y_k\cap X_i|}{|X_i|}=\frac{|Y_k\cap X_j|}{|X_j|}$ for each $Y_k\in\pi_d$. That is, if $\pi_C=\{X_1,X_2,\cdots,X_i,\cdots,X_j,\cdots,X_n\}$, $\pi_B=\{X_1,X_2,\cdots,X_{i-1},X_{i+1},\cdots,X_{j-1},X_{j+1},\cdots,X_n,X_i\cup X_j\}$, and $I(\pi_d|\pi_C)=I(\pi_d|\pi_B)$, then for each $Y_k\in\pi_d$ we have $\frac{|Y_k\cap X_i|}{|X_i|}=\frac{|Y_k\cap X_j|}{|X_j|}$, so POSB(d) = POSC(d). On the other hand, since πC ≼ πC–{a}, πC–{a} can be obtained by combining classes of πC. According to the above analysis, if $I(\pi_d|\pi_C)=I(\pi_d|\pi_{C-\{a\}})$, we must have POSC(d) = POS(C–{a})(d).

Definition 3.22: Let DS = (U, C ∪ {d}, V, f) be a decision system. B ⊆ C is a relative reduct of C with respect to the decision attribute d if (1) $I(\pi_d|\pi_C)=I(\pi_d|\pi_B)$; (2) $\forall B'\subset B,\,I(\pi_d|\pi_{B'})\ne I(\pi_d|\pi_B).$

Theorem 3.23: Let DS = (U, C ∪ {d}, V, f) be a consistent decision system. Then B ⊆ C is a relative reduct of C with respect to the decision attribute d if and only if (1) $POS_C(d)=POS_B(d)$; (2) $\forall B'\subset B,\,POS_{B'}(d)\ne POS_B(d)$.
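On a small table, Definition 3.22 suggests a brute-force way to find a smallest reduct: search subsets B of C in order of size until $I(\pi_d|\pi_B)=I(\pi_d|\pi_C)$. The table below is invented for illustration, and this exhaustive search is only a sketch, not the incremental algorithm of Section 5:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical decision table (all names and values illustrative).
table = {
    "u1": {"a1": 0, "a2": 0, "a3": 1, "d": 0},
    "u2": {"a1": 0, "a2": 1, "a3": 1, "d": 1},
    "u3": {"a1": 1, "a2": 0, "a3": 0, "d": 0},
    "u4": {"a1": 1, "a2": 1, "a3": 0, "d": 1},
}
C = ["a1", "a2", "a3"]

def blocks(attrs):
    """Partition the objects by their values on attrs."""
    g = defaultdict(set)
    for x, row in table.items():
        g[tuple(row[a] for a in attrs)].add(x)
    return list(g.values())

def I_cond(attrs):
    """I(pi_d | pi_B) for B = attrs (Definition 3.6)."""
    n, pi_d = len(table), blocks(["d"])
    return sum(len(X) / n
               * sum((len(X & Y) / len(X)) * (1 - len(X & Y) / len(X))
                     for Y in pi_d)
               for X in blocks(attrs))

# Smallest B whose conditional information quantity matches that of C.
target = I_cond(C)
reduct = next(list(B) for k in range(1, len(C) + 1)
              for B in combinations(C, k)
              if abs(I_cond(list(B)) - target) < 1e-12)
print(reduct)  # ['a2']
```

In this toy table a2 alone already determines d, so {a2} is a relative reduct; searching by increasing subset size guarantees minimality.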

## 4 Updating mechanism of information quantity for decision systems

Given a dynamic decision table, based on the information quantity, this section presents the updating mechanisms of the information quantity for dynamically varying data values.

Theorem 4.1: Let DS = (U, C ∪ {d}, V, f) be a decision system and πd = {Y1, Y2, ..., Yn}. If x ∈ X (|X| > 1) and x ∈ Yq (q ∈ {1, 2, ..., n}), then when x is removed from the system, the new information quantity of block X is
$$I(\pi_d'|X')=\frac{|U||X|}{(|U|-1)(|X|-1)}I(\pi_d|X)-\frac{2(|X|-|Y_q\cap X|)}{(|X|-1)(|U|-1)},$$
where $X'=X-\{x\}$ and $\pi_d'=\{Y_1,Y_2,\cdots,Y_q'=Y_q-\{x\},\cdots,Y_n\}.$

Proof: Write $|X|=x$, $a_j=|Y_j\cap X|$ (so $\sum_{j=1}^{n}a_j=x$) and $S=\sum_{j=1}^{n}a_j^2$. By Definition 3.5, $I(\pi_d|X)=\frac{1}{|U|}\left(x-\frac{S}{x}\right)$, hence $S=x^2-|U|\,x\,I(\pi_d|X)$. After x is removed, $|X'|=x-1$, $|U'|=|U|-1$, and the intersection counts are unchanged except $|Y_q'\cap X'|=a_q-1$. Therefore
$$\begin{aligned}I(\pi_d'|X')&=\frac{x-1}{|U|-1}\left[\sum_{j\ne q}\frac{a_j}{x-1}\left(1-\frac{a_j}{x-1}\right)+\frac{a_q-1}{x-1}\left(1-\frac{a_q-1}{x-1}\right)\right]\\&=\frac{1}{|U|-1}\left[(x-1)-\frac{S-2a_q+1}{x-1}\right]=\frac{(x-1)^2-S+2a_q-1}{(x-1)(|U|-1)}\\&=\frac{|U|\,x\,I(\pi_d|X)-2(x-a_q)}{(x-1)(|U|-1)}=\frac{|U||X|}{(|U|-1)(|X|-1)}I(\pi_d|X)-\frac{2(|X|-|Y_q\cap X|)}{(|X|-1)(|U|-1)}.\end{aligned}$$

Theorem 4.2: Let DS = (U, C ∪ {d}, V, f) be a decision system and πd = {Y1, Y2, ..., Yn}. If x ∉ X and x ∈ Yq (q ∈ {1, 2, ..., n}), then when x is removed from the system, the new information quantity of block X is
$$I(\pi_d'|X)=\frac{|U|}{|U|-1}I(\pi_d|X),$$
where $\pi_d'=\{Y_1,Y_2,\cdots,Y_q'=Y_q-\{x\},\cdots,Y_n\}.$

Proof: Since x ∉ X, the conditional probabilities $p(Y_j|X)$ are unchanged, so
$$I(\pi_d'|X)=\frac{|X|}{|U|-1}\sum_{j=1}^{n}p(Y_j|X)(1-p(Y_j|X))=\frac{|U|}{|U|-1}\cdot\frac{|X|}{|U|}\sum_{j=1}^{n}p(Y_j|X)(1-p(Y_j|X))=\frac{|U|}{|U|-1}I(\pi_d|X).$$

Theorem 4.3: Let DS = (U, C ∪ {d}, V, f) be a decision system, πC = {X1, X2, ..., Xm}, πd = {Y1, Y2, ..., Yn}. If $x\in X_p\,(|X_p|>1)$ and $x\in Y_q$ ($p\in\{1,2,\cdots,m\}$; $q\in\{1,2,\cdots,n\}$), then when x is removed from the system, the new information quantity is
$$I(\pi_d'|\pi_C')=\frac{|U|}{|U|-1}\left[I(\pi_d|\pi_C)+\frac{1}{|X_p|-1}I(\pi_d|X_p)-\frac{2(|X_p|-|Y_q\cap X_p|)}{(|X_p|-1)|U|}\right],$$
where $\pi_C'=\{X_1,X_2,\cdots,X_p'=X_p-\{x\},\cdots,X_m\}$, $\pi_d'=\{Y_1,Y_2,\cdots,Y_q'=Y_q-\{x\},\cdots,Y_n\}.$
Proof: According to Theorem 4.1 and Theorem 4.2, we have
$$\begin{aligned}I(\pi_d'|\pi_C')&=\sum_{i=1}^{m}I(\pi_d'|X_i')=\sum_{i\ne p}I(\pi_d'|X_i)+I(\pi_d'|X_p')\\&=\frac{|U|}{|U|-1}\sum_{i\ne p}I(\pi_d|X_i)+\frac{|U||X_p|}{(|U|-1)(|X_p|-1)}I(\pi_d|X_p)-\frac{2(|X_p|-|Y_q\cap X_p|)}{(|X_p|-1)(|U|-1)}\\&=\frac{|U|}{|U|-1}I(\pi_d|\pi_C)+\frac{|U|}{(|U|-1)(|X_p|-1)}I(\pi_d|X_p)-\frac{2(|X_p|-|Y_q\cap X_p|)}{(|X_p|-1)(|U|-1)}\\&=\frac{|U|}{|U|-1}\left[I(\pi_d|\pi_C)+\frac{1}{|X_p|-1}I(\pi_d|X_p)-\frac{2(|X_p|-|Y_q\cap X_p|)}{(|X_p|-1)|U|}\right].\end{aligned}$$

Corollary 4.4: Let DS = (U, C ∪ {d}, V, f) be a decision system, πC = {X1, X2, ..., Xm}, πd = {Y1, Y2, ..., Yn}. If x ∈ Xp (p ∈ {1, 2, ..., m}) and ${X}_{p}\in {\pi }_{C}^{con}$, then when x is removed from the system, $I\left({\pi }_{d}^{\prime }|{\pi }_{C}^{\prime }\right)=\frac{|U|}{|U|-1}I\left({\pi }_{d}|{\pi }_{C}\right)$. (In this case $I(\pi_d|X_p)=0$ and $|X_p|=|Y_q\cap X_p|$, so the extra terms in Theorem 4.3 vanish.)
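The removal formulas can be sanity-checked numerically: the sketch below compares the incremental update of Theorem 4.3 with a direct recomputation on an invented six-object system, using exact rational arithmetic:

```python
from fractions import Fraction as F

def I_cond(pi_C, pi_d, n):
    """I(pi_d | pi_C) per Definition 3.6, in exact arithmetic."""
    return sum(F(len(X), n)
               * sum(F(len(X & Y), len(X)) * (1 - F(len(X & Y), len(X)))
                     for Y in pi_d)
               for X in pi_C)

# Illustrative system with |U| = 6; object u3 (in block Xp, class Yq) leaves.
pi_C = [{"u1", "u2", "u3"}, {"u4", "u5", "u6"}]
pi_d = [{"u1", "u2", "u4"}, {"u3", "u5", "u6"}]
n, Xp, Yq = 6, {"u1", "u2", "u3"}, {"u3", "u5", "u6"}

# Direct recomputation on the shrunken system:
direct = I_cond([Xp - {"u3"}, {"u4", "u5", "u6"}],
                [{"u1", "u2", "u4"}, Yq - {"u3"}], n - 1)

# Incremental update via Theorem 4.3:
xp = len(Xp)
I_Xp = F(xp, n) * sum(F(len(Xp & Y), xp) * (1 - F(len(Xp & Y), xp)) for Y in pi_d)
incr = F(n, n - 1) * (I_cond(pi_C, pi_d, n) + I_Xp / (xp - 1)
                      - F(2 * (xp - len(Yq & Xp)), (xp - 1) * n))
assert direct == incr
print(direct)  # 4/15
```

Exact fractions avoid any floating-point doubt when comparing the two routes.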

Theorem 4.5: Let DS = (U, C ∪ {d}, V, f) be a decision system and πd = {Y1, Y2, ..., Yn}. When a new object x enters the system, if x is added to X and to Yq (q ∈ {1, 2, ..., n}), then
$$I(\pi_d'|X')=\frac{|U||X|}{(|U|+1)(|X|+1)}I(\pi_d|X)+\frac{2(|X|-|Y_q\cap X|)}{(|X|+1)(|U|+1)},$$
where $X'=X\cup\{x\}$ and $\pi_d'=\{Y_1,Y_2,\cdots,Y_q'=Y_q\cup\{x\},\cdots,Y_n\}.$

Proof: As in the proof of Theorem 4.1, write $|X|=x$, $a_j=|Y_j\cap X|$ and $S=\sum_{j=1}^{n}a_j^2=x^2-|U|\,x\,I(\pi_d|X)$. After x is added, $|X'|=x+1$, $|U'|=|U|+1$, and the intersection counts are unchanged except $|Y_q'\cap X'|=a_q+1$. Therefore
$$\begin{aligned}I(\pi_d'|X')&=\frac{x+1}{|U|+1}\left[\sum_{j\ne q}\frac{a_j}{x+1}\left(1-\frac{a_j}{x+1}\right)+\frac{a_q+1}{x+1}\left(1-\frac{a_q+1}{x+1}\right)\right]\\&=\frac{1}{|U|+1}\left[(x+1)-\frac{S+2a_q+1}{x+1}\right]=\frac{(x+1)^2-S-2a_q-1}{(x+1)(|U|+1)}\\&=\frac{|U|\,x\,I(\pi_d|X)+2(x-a_q)}{(x+1)(|U|+1)}=\frac{|U||X|}{(|U|+1)(|X|+1)}I(\pi_d|X)+\frac{2(|X|-|Y_q\cap X|)}{(|X|+1)(|U|+1)}.\end{aligned}$$

Theorem 4.6: Let DS = (U, C ∪ {d}, V, f) be a decision system, πC = {X1, X2, ..., Xm}, πd = {Y1, Y2, ..., Yn}. When a new object x enters the system, if x is added to Xp and to Yq (p ∈ {1, 2, ..., m}; q ∈ {1, 2, ..., n}), then
$$I(\pi_d'|\pi_C')=\frac{|U|}{|U|+1}\left[I(\pi_d|\pi_C)-\frac{1}{|X_p|+1}I(\pi_d|X_p)+\frac{2(|X_p|-|Y_q\cap X_p|)}{(|X_p|+1)|U|}\right],$$
where $\pi_C'=\{X_1,X_2,\cdots,X_p'=X_p\cup\{x\},\cdots,X_m\}$, $\pi_d'=\{Y_1,Y_2,\cdots,Y_q'=Y_q\cup\{x\},\cdots,Y_n\}.$

Proof: According to Theorem 4.5 (and the analogue of Theorem 4.2 for an added object, $I(\pi_d'|X_i)=\frac{|U|}{|U|+1}I(\pi_d|X_i)$ for $i\ne p$), we have
$$\begin{aligned}I(\pi_d'|\pi_C')&=\sum_{i=1}^{m}I(\pi_d'|X_i')=\sum_{i\ne p}I(\pi_d'|X_i)+I(\pi_d'|X_p')\\&=\frac{|U|}{|U|+1}\sum_{i\ne p}I(\pi_d|X_i)+\frac{|U||X_p|}{(|U|+1)(|X_p|+1)}I(\pi_d|X_p)+\frac{2(|X_p|-|Y_q\cap X_p|)}{(|X_p|+1)(|U|+1)}\\&=\frac{|U|}{|U|+1}I(\pi_d|\pi_C)-\frac{|U|}{(|U|+1)(|X_p|+1)}I(\pi_d|X_p)+\frac{2(|X_p|-|Y_q\cap X_p|)}{(|X_p|+1)(|U|+1)}\\&=\frac{|U|}{|U|+1}\left[I(\pi_d|\pi_C)-\frac{1}{|X_p|+1}I(\pi_d|X_p)+\frac{2(|X_p|-|Y_q\cap X_p|)}{(|X_p|+1)|U|}\right].\end{aligned}$$
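Analogously, the addition formula of Theorem 4.6 can be checked against a direct recomputation on an invented system (new object u7 joins block Xp and decision class Yq):

```python
from fractions import Fraction as F

def I_cond(pi_C, pi_d, n):
    """I(pi_d | pi_C) per Definition 3.6, in exact arithmetic."""
    return sum(F(len(X), n)
               * sum(F(len(X & Y), len(X)) * (1 - F(len(X & Y), len(X)))
                     for Y in pi_d)
               for X in pi_C)

# Illustrative decision system with |U| = 6.
pi_C = [{"u1", "u2", "u3"}, {"u4", "u5", "u6"}]
pi_d = [{"u1", "u2", "u4"}, {"u3", "u5", "u6"}]
n, Xp, Yq = 6, {"u4", "u5", "u6"}, {"u3", "u5", "u6"}

# Direct recomputation after adding u7 to Xp and Yq:
direct = I_cond([{"u1", "u2", "u3"}, Xp | {"u7"}],
                [{"u1", "u2", "u4"}, Yq | {"u7"}], n + 1)

# Incremental update via Theorem 4.6:
xp = len(Xp)
I_Xp = F(xp, n) * sum(F(len(Xp & Y), xp) * (1 - F(len(Xp & Y), xp)) for Y in pi_d)
incr = F(n, n + 1) * (I_cond(pi_C, pi_d, n) - I_Xp / (xp + 1)
                      + F(2 * (xp - len(Yq & Xp)), (xp + 1) * n))
assert direct == incr
print(direct)  # 17/42
```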

Corollary 4.7: Let DS = (U, C ∪ {d}, V, f) be a decision system, πC = {X1, X2, ..., Xm}, πd = {Y1, Y2, ..., Yn}. When a new object x enters the system, if x is added to Xp (p ∈ {1, 2, ..., m}) and the new block $X_p\cup\{x\}$ is still a consistent block (i.e., ${X}_{p}\in {\pi }_{C}^{con}$ and the decision of x equals the common decision of $X_p$), then $I\left({\pi }_{d}^{\prime }|{\pi }_{C}^{\prime }\right)=\frac{|U|}{|U|+1}I\left({\pi }_{d}|{\pi }_{C}\right)$.

In the following, we discuss how the information quantity changes if the attribute values of one object x are varied.

Theorem 4.8: Let DS = (U, C ∪ {d}, V, f) be a decision system, $\pi_C=\{X_1,X_2,\cdots,X_{p1},\cdots,X_{p2},\cdots,X_m\}$, $\pi_d=\{Y_1,Y_2,\cdots,Y_{q1},\cdots,Y_{q2},\cdots,Y_n\}$, $x\in X_{p1}$, $x\in Y_{q1}$ ($p1,p2\in\{1,2,\cdots,m\}$; $q1,q2\in\{1,2,\cdots,n\}$). If the object x is changed to x', and in the new decision system $\pi_C'=\{X_1,X_2,\cdots,X_{p1}'=X_{p1}-\{x\},\cdots,X_{p2}'=X_{p2}\cup\{x'\},\cdots,X_m\}$, $\pi_d'=\{Y_1,Y_2,\cdots,Y_{q1}'=Y_{q1}-\{x\},\cdots,Y_{q2}'=Y_{q2}\cup\{x'\},\cdots,Y_n\}$, then
$$I(\pi_d'|\pi_C')=I(\pi_d|\pi_C)+\frac{1}{|X_{p1}|-1}I(\pi_d|X_{p1})-\frac{2(|X_{p1}|-|Y_{q1}\cap X_{p1}|)}{(|X_{p1}|-1)|U|}-\frac{1}{|X_{p2}|+1}I(\pi_d|X_{p2})+\frac{2(|X_{p2}|-|Y_{q2}\cap X_{p2}|)}{(|X_{p2}|+1)|U|}.$$

Proof: When x changes to x′, $\pi_C=\{X_1,X_2,\dots,X_{p1},\dots,X_{p2},\dots,X_m\}$ and $\pi_d=\{Y_1,Y_2,\dots,Y_{q1},\dots,Y_{q2},\dots,Y_n\}$ turn into $\pi_C'=\{X_1,X_2,\dots,X_{p1}'=X_{p1}-\{x\},\dots,X_{p2}'=X_{p2}\cup\{x'\},\dots,X_m\}$ and $\pi_d'=\{Y_1,Y_2,\dots,Y_{q1}'=Y_{q1}-\{x\},\dots,Y_{q2}'=Y_{q2}\cup\{x'\},\dots,Y_n\}$. This process can be divided into two steps: first, deleting x turns $(\pi_C,\pi_d)$ into $\pi_C''=\{X_1,X_2,\dots,X_{p1}'=X_{p1}-\{x\},\dots,X_{p2},\dots,X_m\}$ and $\pi_d''=\{Y_1,Y_2,\dots,Y_{q1}'=Y_{q1}-\{x\},\dots,Y_{q2},\dots,Y_n\}$; then adding x′ turns $(\pi_C'',\pi_d'')$ into $(\pi_C',\pi_d')$ with $X_{p2}'=X_{p2}\cup\{x'\}$ and $Y_{q2}'=Y_{q2}\cup\{x'\}$. According to Theorem 4.6, we have $$I(\pi_d''|\pi_C'')=\frac{|U|}{|U|-1}\left[I(\pi_d|\pi_C)+\frac{1}{|X_{p1}|-1}I(\pi_d|X_{p1})-\frac{2}{(|X_{p1}|-1)|U|}\bigl(|X_{p1}|-|Y_{q1}\cap X_{p1}|\bigr)\right],$$ $$I(\pi_d'|\pi_C')=\frac{|U|-1}{(|U|-1)+1}\left[I(\pi_d''|\pi_C'')-\frac{1}{|X_{p2}|+1}I(\pi_d''|X_{p2})+\frac{2}{(|X_{p2}|+1)(|U|-1)}\bigl(|X_{p2}|-|Y_{q2}\cap X_{p2}|\bigr)\right].$$ Since $I(\pi_d''|X_{p2})=\frac{|U|}{|U|-1}I(\pi_d|X_{p2})$, thus
$$I(\pi_d'|\pi_C')=I(\pi_d|\pi_C)+\frac{1}{|X_{p1}|-1}I(\pi_d|X_{p1})-\frac{2}{(|X_{p1}|-1)|U|}\bigl(|X_{p1}|-|Y_{q1}\cap X_{p1}|\bigr)-\frac{1}{|X_{p2}|+1}I(\pi_d|X_{p2})+\frac{2}{(|X_{p2}|+1)|U|}\bigl(|X_{p2}|-|Y_{q2}\cap X_{p2}|\bigr).\qquad\square$$
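The value-change update of Theorem 4.8 can likewise be checked on a small invented system, again under the block-wise measure read off from the proof; all data below are illustrative.

```python
# Numerical check of Theorem 4.8: object x = 3 leaves (X_p1, Y_q1) and,
# after its attribute values change, re-enters as x' in (X_p2, Y_q2).

def block_info(X, pi_d, n):
    """Assumed block-wise measure implied by the proof of Theorem 4.6."""
    return sum(len(Y & X) * (len(X) - len(Y & X)) for Y in pi_d) / (n * len(X))

def info_quantity(pi_C, pi_d, n):
    return sum(block_info(X, pi_d, n) for X in pi_C)

n = 6
pi_C = [{1, 2, 3}, {4, 5, 6}]   # X_p1 = {1,2,3}, X_p2 = {4,5,6}
pi_d = [{1, 4}, {2, 3, 5, 6}]   # Y_q2 = {1,4},   Y_q1 = {2,3,5,6}
Xp1, Xp2 = pi_C
Yq2, Yq1 = pi_d

# Incremental update from Theorem 4.8.
I_old = info_quantity(pi_C, pi_d, n)
I_inc = (I_old
         + block_info(Xp1, pi_d, n) / (len(Xp1) - 1)
         - 2 * (len(Xp1) - len(Yq1 & Xp1)) / ((len(Xp1) - 1) * n)
         - block_info(Xp2, pi_d, n) / (len(Xp2) + 1)
         + 2 * (len(Xp2) - len(Yq2 & Xp2)) / ((len(Xp2) + 1) * n))

# Recomputation from scratch: object 3 moved from X_p1 to X_p2 and from Y_q1 to Y_q2.
I_direct = info_quantity([{1, 2}, {3, 4, 5, 6}], [{1, 3, 4}, {2, 5, 6}], n)

assert abs(I_inc - I_direct) < 1e-12  # both equal 1/2
```

Note that |U| is unchanged here, so the leading factor of Theorem 4.6 cancels and only the four correction terms for the two affected blocks remain.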

Corollary 4.9: Let DS = (U, C ∪ {d}, V, f) be a decision system, $\pi_C=\{X_1,X_2,\dots,X_{p1},\dots,X_{p2},\dots,X_m\}$, $\pi_d=\{Y_1,Y_2,\dots,Y_{q1},\dots,Y_{q2},\dots,Y_n\}$. When the object x is changed to x′ with $x\in X_{p1}$ and $x'\in X_{p2}'$, if $X_{p1}\in\pi_C^{con}$ and $X_{p2}'\in\pi_{C'}^{con}$, then $I(\pi_d'|\pi_C')=I(\pi_d|\pi_C)$.

## 5 Attribute reduction algorithm for decision systems with dynamically varying attribute values

Based on the updating mechanisms of the information quantity, this section introduces an attribute reduction algorithm based on information quantity for decision systems with dynamically varying attribute values. In rough set theory, the core is the intersection of all reducts of a given table, and core attributes are the indispensable attributes of every reduct. First, we give an algorithm to obtain the core of a dynamic decision system.

Input: A decision system DS = (U, C ∪ {d}, V, f) and an object x ∈ U that is changed to x′.

Output: The core $Core_{U_{x'}}$ on $U_{x'}$, where $U_{x'}$ denotes the updated universe in which x ∈ U has been changed to x′, and $Core_{U_{x'}}$ is the core of the decision system $DS'=(U_{x'},C\cup\{d\},V,f)$.

Step 1 When x is changed to x′, compute $\pi_C=\{X_1,X_2,\dots,X_{p1},\dots,X_{p2},\dots,X_m\}$, $\pi_d=\{Y_1,Y_2,\dots,Y_{q1},\dots,Y_{q2},\dots,Y_n\}$ and $\pi_C'=\{X_1,X_2,\dots,X_{p1}-\{x\},\dots,X_{p2}\cup\{x'\},\dots,X_m\}$, $\pi_d'=\{Y_1,Y_2,\dots,Y_{q1}-\{x\},\dots,Y_{q2}\cup\{x'\},\dots,Y_n\}$.

Step 2 Compute $I(\pi_d'|\pi_C')$.

Step 3 $Core_{U_{x'}}=\varnothing$; for each $a\in C$:

(1) Compute $\pi_{C-\{a\}}=\{Z_1,Z_2,\dots,Z_{t1},\dots,Z_{t2},\dots,Z_s\}$ and $\pi_{C-\{a\}}'=\{Z_1,Z_2,\dots,Z_{t1}-\{x\},\dots,Z_{t2}\cup\{x'\},\dots,Z_s\}$;

(2) Compute $I(\pi_d'|\pi_{C-\{a\}}')$;

(3) If $I(\pi_d'|\pi_{C-\{a\}}')\ne I(\pi_d'|\pi_C')$, then $Core_{U_{x'}}=Core_{U_{x'}}\cup\{a\}$.

Step 4 Return $Cor{e}_{{U}_{{x}^{\prime }}}$.
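Steps 1–4 can be sketched as follows. This is a non-incremental sketch (it recomputes each partition from scratch rather than updating it as in Step 1); the dictionary table layout, the attribute names, and the toy data are all invented for illustration.

```python
# Sketch of the core algorithm: an attribute a belongs to the core iff
# dropping it changes the information quantity I(pi_d | pi_{C-{a}}).

def partition(table, attrs, objects):
    """Blocks of the indiscernibility relation induced by `attrs`
    (a single block covering all objects when `attrs` is empty)."""
    blocks = {}
    for x in objects:
        blocks.setdefault(tuple(table[x][a] for a in attrs), set()).add(x)
    return list(blocks.values())

def info_quantity(pi_C, pi_d, n):
    return sum(
        sum(len(Y & X) * (len(X) - len(Y & X)) for Y in pi_d) / (n * len(X))
        for X in pi_C)

def core(table, cond_attrs, dec_attr):
    objects = list(table)
    n = len(objects)
    pi_d = partition(table, [dec_attr], objects)
    I_full = info_quantity(partition(table, sorted(cond_attrs), objects), pi_d, n)
    # Step 3: keep every attribute whose removal changes the information quantity.
    return {a for a in cond_attrs
            if abs(info_quantity(partition(table, sorted(cond_attrs - {a}), objects),
                                 pi_d, n) - I_full) > 1e-12}

# Toy table: d duplicates a, so b is redundant and only a is indispensable.
table = {1: {'a': 0, 'b': 0, 'd': 0},
         2: {'a': 0, 'b': 1, 'd': 0},
         3: {'a': 1, 'b': 0, 'd': 1},
         4: {'a': 1, 'b': 1, 'd': 1}}

assert core(table, {'a', 'b'}, 'd') == {'a'}
```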

Based on the updating mechanisms of the information quantity, an attribute reduction algorithm for decision systems with dynamically varying attribute values is proposed in the following. In this algorithm, the existing reduction result is one of the inputs and is used to find the new reduct after the data change.

Input: A decision system DS = (U, C ∪ {d}, V, f), a reduct REDU on U, and the object x that is changed to x′.

Output: Attribute reduct $RED_{U_{x'}}$ on $U_{x'}$.

Step 1 $B=Cor{e}_{{U}_{{x}^{\prime }}}.$

Step 2 If $I(\pi_d'|\pi_B)=I(\pi_d'|\pi_C')$, then $RED_{U_{x'}}=B$ and turn to Step 4; else turn to Step 3.

Step 3 While $I(\pi_d'|\pi_B)\ne I(\pi_d'|\pi_C')$ do

{For each $a\in C-B$, compute $sig_{U_{x'}}^{outer}(a,B,d)$;

Select $a_0$ such that $sig_{U_{x'}}^{outer}(a_0,B,d)=\max\{sig_{U_{x'}}^{outer}(a,B,d):a\in C-B\}$;

$B←B∪{a0}.$}

Step 4 For each aB do

{Compute $sig_{U_{x'}}^{inner}(a,B,d)$;

If $sig_{U_{x'}}^{inner}(a,B,d)=0$, then $B\leftarrow B-\{a\}$.}

Step 5 $RED_{U_{x'}}=B$; return $RED_{U_{x'}}$ and end.

In this algorithm, we first obtain the core of the dynamic decision system; then we gradually add the attribute with the greatest significance until $I(\pi_d'|\pi_B)=I(\pi_d'|\pi_C')$; finally, redundant attributes are removed from B to obtain $RED_{U_{x'}}$.
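The reduction procedure above can be sketched as follows. The significance measures are here taken as information-quantity differences (an assumption, since their definitions appear earlier in the paper), the table layout and data are invented, and partitions are recomputed from scratch rather than updated incrementally.

```python
# Sketch of the reduction algorithm, with the assumed significance measures
#   sig_outer(a, B, d) = I(pi_d | pi_B) - I(pi_d | pi_{B ∪ {a}})
#   sig_inner(a, B, d) = I(pi_d | pi_{B - {a}}) - I(pi_d | pi_B)

def partition(table, attrs, objects):
    blocks = {}
    for x in objects:
        blocks.setdefault(tuple(table[x][a] for a in sorted(attrs)), set()).add(x)
    return list(blocks.values())

def info_quantity(pi_C, pi_d, n):
    return sum(
        sum(len(Y & X) * (len(X) - len(Y & X)) for Y in pi_d) / (n * len(X))
        for X in pi_C)

def reduct(table, cond_attrs, dec_attr, core_attrs):
    objects = list(table)
    n = len(objects)
    pi_d = partition(table, {dec_attr}, objects)
    I = lambda attrs: info_quantity(partition(table, attrs, objects), pi_d, n)
    target = I(cond_attrs)                     # I(pi_d' | pi_C')
    B = set(core_attrs)                        # Step 1
    while abs(I(B) - target) > 1e-12:          # Steps 2-3: greedy forward addition;
        B.add(min(cond_attrs - B,              # maximising sig_outer = minimising I(B ∪ {a})
                  key=lambda a: I(B | {a})))
    for a in list(B):                          # Step 4: drop attributes with
        if abs(I(B - {a}) - target) < 1e-12:   # zero inner significance
            B.remove(a)
    return B                                   # Step 5

# Toy table: d duplicates a, so {a} is a reduct.
table = {1: {'a': 0, 'b': 0, 'd': 0},
         2: {'a': 0, 'b': 1, 'd': 0},
         3: {'a': 1, 'b': 0, 'd': 1},
         4: {'a': 1, 'b': 1, 'd': 1}}

assert reduct(table, {'a', 'b'}, 'd', core_attrs=set()) == {'a'}
```

Starting from the core (here passed in explicitly) guarantees that the indispensable attributes are never removed in Step 4.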

## 6 Conclusion

The incremental technique is an effective way to maintain knowledge in a dynamic environment, yet attribute selection for dynamic data sets remains a challenging issue in the field of artificial intelligence. In this paper, we put forward the information quantity for information systems and decision systems according to the information measure proposed by Professor Hu Guoding, and we discuss the updating mechanism of the information quantity for decision systems. Furthermore, we give an attribute reduction algorithm for decision tables with dynamically varying attribute values. It should be pointed out that the updating mechanisms introduced in this paper are applicable only when data values vary one object at a time, whereas in applications real data often vary in groups, which the proposed feature selection algorithm cannot yet handle. In our further work, we will focus on improving the incremental algorithm so that knowledge can be updated when several objects vary simultaneously. Furthermore, since a decision system consists of objects, attributes, and the domains of attribute values, all of these elements may change over time in a dynamic environment. In the future, the variation of attributes and of the domains of attribute values will also be taken into consideration for incremental knowledge updating.

## References

• [1]

Hu F., Wang G.Y., Huang H., Wu Y., Incremental attribute reduction based on elementary sets, in: Proceedings of the 10th International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing, Regina, Canada, 2005, 185-193.

• [2]

Liang J.Y., Wei W., Qian Y.H., An incremental approach to computation of a core based on conditional entropy, Chinese Journal of System Engineering Theory and Practice, 2008, 4, 81-89.

• [3]

Liu D., Li T.R., Ruan D., Zou W.L., An incremental approach for inducing knowledge from dynamic information systems, Fundamenta Informaticae, 2009, 94, 245-260.

• [4]

Orlowska M., Maintenance of knowledge in dynamic information systems, in: R. Slowinski (Ed.), Intelligent Decision Support: Handbook of Applications and Advances of the Rough Set Theory, Kluwer Academic Publishers, Dordrecht, 1992, 315-330.

• [5]

Shan L., Ziarko W., Data-based acquisition and incremental modification of classification rules, Computational Intelligence, 1995, 11, 357-370.

• [6]

Yang M., An incremental updating algorithm for attributes reduction based on the improved discernibility matrix, Chinese Journal of Computers, 2007, 30, 815-822.

• [7]

Zheng Z., Wang G., RRIA: a rough set and rule tree based incremental knowledge acquisition algorithm, Fundamenta Informaticae, 2004, 59, 299-313.

• [8]

Chan C.C., A rough set approach to attribute generalization in data mining, Information Sciences, 1998, 107, 169-176.

• [9]

Li T.R., Ruan D., Geert W., et al., A rough sets based characteristic relation approach for dynamic attribute generalization in data mining, Knowledge-Based Systems, 2007, 20, 485-494.

• [10]

Cheng Y., The incremental method for fast computing the rough fuzzy approximations, Data & Knowledge Engineering, 2011, 70, 84-100.

• [11]

Liu D., Zhang J.B., Li T.R., A probabilistic rough set approach for incremental learning knowledge on the change of attribute, in: Proceedings 2010 International Conference on Foundations and Applications of Computational Intelligence, 2010, 722-727.

• [12]

Chen H.M., Li T.R., Qiao S.J., et al., A rough set based dynamic maintenance approach for approximations in coarsening and refining attribute values, International Journal of Intelligent Systems, 2010, 25, 1005-1026.

• [13]

Liu D., Li T.R., Liu G.R., et al., An incremental approach for inducing interesting knowledge based on the change of attribute values, in: Proceedings 2009 IEEE International Conference on Granular Computing, Nanchang, China, 2009, 415-418.

• [14]

Pawlak Z., Rough sets, International Journal of Computer and Information Sciences, 1982, 11, 341-356.

• [15]

Pawlak Z., Rough Sets: Theoretical Aspects of Reasoning About Data, Kluwer Academic Publishers, Dordrecht & Boston, 1991.

• [16]

Xu W., Li Y., Liao X., Approaches to attribute reductions based on rough set and matrix computation in inconsistent ordered information systems, Knowledge-Based Systems, 2012, 27, 78-91.

• [17]

Slowinski R., Vanderpooten D., A generalized definition of rough approximations based on similarity, IEEE Transactions on Knowledge and Data Engineering, 2000, 12, 331-336.

• [18]

Stefanowski J., Tsoukias A., Incomplete information tables and rough classification, Computational Intelligence, 2001, 17, 545-566.

• [19]

Kryszkiewicz M., Rough set approach to incomplete information systems, Information Sciences,1998, 112, 39-49.

• [20]

Kryszkiewicz M., Rules in incomplete information systems, Information Sciences, 1999, 113, 271-292.

• [21]

Dai J., Xu Q., Approximations and uncertainty measures in incomplete information systems, Information Sciences, 2012, 198, 62-80.

• [22]

Shannon C., A mathematical theory of communication, The Bell System Technical Journal, 1948, 27, 379-423.

• [23]

Peng H., Long F., Ding C., Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27, 1226-1238.

• [24]

Quinlan J., Induction of decision trees, Machine Learning, 1986, 1, 81-106.

• [25]

Beaubouef T., Petry F., Arora G., Information-theoretic measures of uncertainty for rough sets and rough relational databases, Information Sciences, 1998, 109, 535-563.

• [26]

Dai J., Wang W., Xu Q., Tian H., Uncertainty measurement for interval-valued decision systems based on extended conditional entropy, Knowledge-Based Systems, 2012, 27, 443-450.

• [27]

Liang J., Chin K., Dang C., Richard C., A new method for measuring uncertainty and fuzziness in rough set theory, International Journal of General Systems, 2002, 31, 331-342.

• [28]

Qian Y., Liang J., Combination entropy and combination granulation in incomplete information system, LNAI, 2006, 4046, 184-190.

• [29]

Qian Y., Liang J., Wang F., A new method for measuring the uncertainty in incomplete information systems, International Journal of Uncertainty Fuzziness and Knowledge-Based Systems, 2009, 17, 855-880.

• [30]

Dai J.H., Wang W.T., Tian H.W., Liu L., Attribute selection based on a new conditional entropy for incomplete decision systems, Knowledge-Based Systems, 2013, 39, 207-213.

• [31]

Liu Z.H., Liu S.Y., Wang J., An attribute reduction algorithm based on the information quantity, Journal of Xidian University, 2003, 30, 835-838.

• [32]

Dash M., Liu H., Consistency-based search in feature selection, Artificial Intelligence, 2003, 151, 155-176.

• [33]

Yang M., An incremental updating algorithm of the computation of a core based on the improved discernibility matrix, Chinese Journal of Computers, 2006, 29, 407-413.

• [34]

Fan Y.N., Tseng T.L., Chern C.C., Huang C.C., Rule induction based on an incremental rough set, Expert Systems with Applications, 2009, 36, 11439-11450.

• [35]

Dey P., Dey S., Datta S., Sil J., Dynamic discreduction using rough sets, Applied Soft Computing, 2011, 11, 3887-3897.

• [36]

Wang F., Liang J.Y., Qian Y.H., Attribute reduction: a dimension incremental strategy, Knowledge-Based Systems, 2013, 39, 95-108.

• [37]

Wang F., Liang J.Y., Dang C.Y., Attribute reduction for dynamic data sets, Applied Soft Computing, 2013, 13, 676-689.

• [38]

Huang C.C., Tseng T.L., Fan Y.N., Hsu C.H., Alternative rule induction methods based on incremental object using rough set theory, Applied Soft Computing, 2013, 13, 372-389.

• [39]

Liu D., Li T.R., Ruan D., Zhang J.B., Incremental learning optimization on knowledge discovery in dynamic business intelligent systems, Journal of Global Optimization, 2011, 51, 325-344.

• [40]

Hu F., Wang G.Y., Huang H., Wu Y., Incremental attribute reduction based on elementary sets, in: Proceedings of 10th International Conference on Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, Lecture Notes in Computer Science, Regina, Canada, 2005, 185-193.

Accepted: 2016-08-25

Published Online: 2016-11-27

Published in Print: 2016-01-01

Citation Information: Open Mathematics, ISSN (Online) 2391-5455.