
# Open Mathematics

### formerly Central European Journal of Mathematics

Editor-in-Chief: Gianazza, Ugo / Vespri, Vincenzo

Volume 16, Issue 1

# Disjointed sum of products by a novel technique of orthogonalizing ORing

Yavuz Can
• Corresponding author
• Institute for Electronics Engineering, Friedrich-Alexander-University Erlangen-Nuremberg, Cauerstr. 9, 91058 Erlangen, Germany
Published Online: 2018-04-26 | DOI: https://doi.org/10.1515/math-2018-0038

## Abstract

This work presents a novel combining method called ‘orthogonalizing ORing ◯∨’ which enables building the union of two conjunctions such that the result consists of disjointed conjunctions. The advantage of this novel technique is that the result is already in an orthogonal form, which is a significant advantage for further calculations such as the Boolean Differential Calculus. By orthogonalizing ORing, two calculation steps - building the disjunction and the subsequent orthogonalization of two conjunctions - are performed in one step. Postulates, axioms and rules for this linking technique, which have to be considered to obtain correct results, are also defined. Additionally, a novel equation based on orthogonalizing ORing is set up for the orthogonalization of every Boolean function in disjunctive form. Thus, a disjointed Sum of Products can easily be calculated mathematically by this equation.

MSC 2010: 03B99; 03G05; 03G25; 94C10

## 1 Introduction

A Boolean function or switching function, respectively, is defined as a mapping f(x̲): Bⁿ → B with B = {0, 1}. It can be expressed by using Boolean variables xi ∈ {x1, x2, …, xn} [1, 3, 10, 14, 20, 21]. As shown in Table 1, Boolean variables, which occur either direct (xn) or negated (x̄n), are connected by operations like conjunction (∧, or no operation sign), disjunction (∨), antivalence (⊕) and equivalence (⊙).

Table 1

Boolean operations of two variables

There are four standard forms fS(x̲) of a switching function [1, 6, 19] whose terms are either conjunctions $c_i(\underline{x}) = \bigwedge_{j=1}^{n} x_j = x_1 \cdot \ldots \cdot x_{n-1} \cdot x_n$ or disjunctions $d_i(\underline{x}) = \bigvee_{j=1}^{n} x_j = x_1 \vee \ldots \vee x_{n-1} \vee x_n$ [3]. Conjunctions are connected by disjunctions in the disjunctive form DF (1) or by antivalence operations in the antivalence form AF (3), and disjunctions are connected by conjunctions in the conjunctive form CF (2) or by equivalence operations in the equivalence form EF (4). The connection of canonical conjunctions or disjunctions is called a normal form: disjunctive normal form DNF, antivalence normal form ANF, conjunctive normal form CNF and equivalence normal form ENF. The orthogonal form of a DF can also be called a disjointed Sum of Products (dSOP): a set of product terms (conjunctions) whose disjunction equals f(x̲) in non-orthogonal form, such that no two product terms cover the same 1. Consequently, a dSOP consists of non-intersecting cubes. Deriving an orthogonal form of a DF is a classical problem of Boolean theory. This work contributes a novel solution to this problem.

#### Definition 1.1

(Standard Forms with m ∈ ℕ ∖ {1}).

$$1.\ \text{Disjunctive Form:}\quad f_{DF}(\underline{x}) = \bigvee_{i=1}^{m} c_i(\underline{x}). \tag{1}$$

$$2.\ \text{Conjunctive Form:}\quad f_{CF}(\underline{x}) = \bigwedge_{i=1}^{m} d_i(\underline{x}). \tag{2}$$

$$3.\ \text{Antivalence Form:}\quad f_{AF}(\underline{x}) = \bigoplus_{i=1}^{m} c_i(\underline{x}). \tag{3}$$

$$4.\ \text{Equivalence Form:}\quad f_{EF}(\underline{x}) = \bigodot_{i=1}^{m} d_i(\underline{x}). \tag{4}$$

## 2 Characteristic of orthogonality

The characteristic of orthogonality is a special attribute of a switching function [1, 5, 6, 15, 16, 17, 18]. The orthogonal form of a switching function is characterized by conjunctions or disjunctions which are pairwise disjointed. This means that, for each pair of conjunctions, one of them contains a direct Boolean variable (xi) and the other contains the negation (x̄i) of the same Boolean variable. Consequently, the intersection of each pair of these conjunctions (ci,j(x̲)) results in 0, as shown in Eq. (5). In contrast, the disjunction of each pair of orthogonal disjunctions (di,j(x̲)) results in 1, as shown in Eq. (6).

#### Definition 2.1

(Orthogonality of Conjunctions or Disjunctions). Two conjunctions ci(x) and cj(x) are orthogonal to each other if the following applies:

$$c_i(\underline{x}) \wedge c_j(\underline{x}) = 0, \quad i \neq j. \tag{5}$$

Two disjunctions di(x) and dj(x) are orthogonal to each other if the following applies:

$$d_i(\underline{x}) \vee d_j(\underline{x}) = 1, \quad i \neq j. \tag{6}$$
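As an aside, the pairwise check of Eq. (5) is easy to state in code. The sketch below is not from the paper; it assumes a conjunction is encoded as a Python dict mapping a variable name to its polarity (True for a direct literal xi, False for a negated literal x̄i):

```python
# Assumed encoding (illustrative, not from the paper): a conjunction is a
# dict from variable name to polarity, e.g. {"x1": True, "x2": False} = x1 x̄2.
def orthogonal(c1: dict, c2: dict) -> bool:
    """Eq. (5): c1 ∧ c2 = 0 iff some variable occurs direct in one
    conjunction and negated in the other."""
    return any(v in c2 and c1[v] != c2[v] for v in c1)

# x1 x̄2 and x2 x3 clash on x2, so their intersection is 0:
print(orthogonal({"x1": True, "x2": False}, {"x2": True, "x3": True}))  # True
print(orthogonal({"x1": True}, {"x2": True}))                           # False
```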

The orthogonal form of an fS(x̲) enables its transformation into another form with identical function values: the native form and the transformed form return the same function values for the same input values. Therefore, orthogonalization simplifies further calculation steps, especially in electrical engineering applications, e.g. for the Boolean Differential Calculus (BDC) [2, 3], by which all possible test patterns for a combinational circuit can be determined. Test patterns are used to detect feasible faults in combinational circuits. Additionally, orthogonality facilitates the calculation of the BDC particularly in Ternary-Vector-List (TVL) arithmetic [11, 12, 16, 17, 18]. A TVL is a kind of matrix which simplifies the computational representation of Boolean functions and the computational handling of tasks on them.

#### Theorem 2.2

(Orthogonality of DF and AF). If the intersection of every two conjunctions ci,j(x) of a given DF is 0 then DForth = AForth applies. That means, an orthogonal disjunctive form DForth is equivalent to the orthogonal antivalence form AForth including the same conjunctions [1, 3, 10, 14, 15, 21]:

$$\bigvee_{k=1}^{m} c_k = \bigoplus_{k=1}^{m} c_k. \tag{7}$$

#### Proof of Theorem 2.2

By using (5) for orthogonal conjunctions ci,j(x̲), the relation in (7) applies. The respective proof is given by the following relation. The disjunction of two conjunctions ci,j(x̲) on the left side is equivalent to the antivalence operation of the same two conjunctions on the right side. This is the procedure of reshaping a disjunctive form into the antivalence form. In the case that both conjunctions are orthogonal, the last term on the right side results in 0. An antivalence operation with 0 can be neglected, as xi ⊕ 0 = xi applies. Consequently, this leads to the relation in (7).

$$c_i(\underline{x}) \vee c_j(\underline{x}) = c_i(\underline{x}) \oplus c_j(\underline{x}) \oplus \underbrace{(c_i(\underline{x}) \wedge c_j(\underline{x}))}_{=0}. \tag{8}$$ □
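The reshaping identity behind Eq. (8) can be verified exhaustively over B = {0, 1}; a minimal check (not part of the paper):

```python
# Brute-force verification of a ∨ b = a ⊕ b ⊕ (a ∧ b) over all of B × B; when
# a ∧ b = 0 (orthogonal conjunctions), the disjunction equals the antivalence.
for a in (0, 1):
    for b in (0, 1):
        assert (a | b) == (a ^ b ^ (a & b))
print("Eq. (8) identity holds for all a, b in {0, 1}")
```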

#### Theorem 2.3

(Orthogonality of CF and EF). If the union of every two disjunctions di,j(x) of a given CF is 1 then CForth = EForth applies. Thus, an orthogonal conjunctive form CForth is equivalent to the orthogonal equivalence form EForth including the same disjunctions [1, 3, 10, 14, 15, 21]:

$$\bigwedge_{k=1}^{m} d_k = \bigodot_{k=1}^{m} d_k. \tag{9}$$

#### Proof of Theorem 2.3

A CF can be represented as an EF by using the condition in (10). By using (6) for orthogonal disjunctions di,j(x̲), the relation in (9) applies. The conjunction of two disjunctions di,j(x̲), i.e. a CF, on the left side is equivalent to the right side, which is the equivalence operation of the same two disjunctions. If these two disjunctions di,j(x̲) are orthogonal, their union results in 1, as shown by the last term on the right side. An equivalence operation with 1 can be neglected, as xi ⊙ 1 = xi applies. Consequently, the relation in (9) applies.

$$d_i(\underline{x}) \wedge d_j(\underline{x}) = d_i(\underline{x}) \odot d_j(\underline{x}) \odot \underbrace{(d_i(\underline{x}) \vee d_j(\underline{x}))}_{=1}. \tag{10}$$ □
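The dual identity behind Eq. (10) can be checked the same way; here the equivalence operation ⊙ is XNOR on {0, 1} (a sketch, not from the paper):

```python
# Brute-force verification of a ∧ b = (a ⊙ b) ⊙ (a ∨ b) over all of B × B,
# with the equivalence ⊙ implemented as XNOR on integers 0/1.
def xnor(p: int, q: int) -> int:
    return 1 - (p ^ q)

for a in (0, 1):
    for b in (0, 1):
        assert (a & b) == xnor(xnor(a, b), a | b)
print("Eq. (10) identity holds for all a, b in {0, 1}")
```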

## 3 Elementary operations of two conjunctions

In this section the elementary operations on conjunctions (intersection, union, difference-building), which are carried over from set theory due to the isomorphism, are defined for the switching algebra. These formulas specify the order in which the variables of the given conjunctions have to be linked. That means, if a variable is displayed negated, the corresponding literal of the given conjunction must be negated at this point. The numbers of variables in the respective conjunctions are denoted by n and n′.

#### Theorem 3.1

(Intersection of two conjunctions). The intersection of any two conjunctions ci,j(x) with n, n′ ∈ ℕ is calculated by:

$$c_i(\underline{x}) \wedge c_j(\underline{x}) = \bigwedge_{i=1}^{n} x_i \wedge \bigwedge_{j=1}^{n'} x_j. \tag{11}$$

#### Theorem 3.2

(Union of two conjunctions). The union of any two conjunctions ci,j(x) with n, n′ ∈ ℕ is given by:

$$c_i(\underline{x}) \vee c_j(\underline{x}) = \bigwedge_{i=1}^{n} x_i \vee \bigwedge_{j=1}^{n'} x_j. \tag{12}$$

#### Theorem 3.3

(Difference-building of two conjunctions). The difference of a conjunction cm(x̲) as minuend and another conjunction cs(x̲) as subtrahend with n, n′ ∈ ℕ is calculated by the following equation, which is deduced from set theory [4, 13]. For the difference of two sets, M ∖ S = M ∩ S̄ applies, which is transferred to the switching algebra due to the isomorphism. Consequently, the difference of two conjunctions is the intersection of the minuend and the complement of the subtrahend. Difference-building generally produces several conjunctions which are not disjointed (orthogonal) to each other.

$$c_m(\underline{x}) - c_s(\underline{x}) = \bigwedge_{m=1}^{n} x_m - \bigwedge_{s=1}^{n'} x_s = \bigwedge_{m=1}^{n} x_m \wedge \overline{\bigwedge_{s=1}^{n'} x_s} = \bigwedge_{m=1}^{n} x_m \wedge \bigvee_{s=1}^{n'} \overline{x}_s. \tag{13}$$

## 4 Orthogonalizing difference-building

The technique of orthogonalizing difference-building ⊖ is used to calculate the difference of two conjunctions ci,j(x̲) such that the result is orthogonal. This method is generally valid and equivalent to the usual method of difference-building [6, 7, 8, 9]. Orthogonalizing difference-building ⊖ is the composition of two calculation steps - the difference-building and the subsequent orthogonalization.

#### Definition 4.1

(Orthogonalizing difference-building). Orthogonalizing difference-building ⊖ corresponds to the removal of the intersection between the minuend conjunction cm(x̲) and the subtrahend conjunction cs(x̲) from the minuend cm(x̲), which means cm(x̲) - (cm(x̲) ∧ cs(x̲)); the result is orthogonal. Orthogonalizing difference-building of two conjunctions with n, n′ ∈ ℕ is defined as:

$$c_m(\underline{x}) \ominus c_s(\underline{x}) = \bigwedge_{m=1}^{n} x_m \ominus \bigwedge_{s=1}^{n'} x_s := \bigwedge_{m=1}^{n} x_m \wedge \bigvee_{s_j=1}^{n'_j} \overline{x}_{s_j} = (x_1 \cdot \ldots \cdot x_{n-1} \cdot x_n)_m \wedge (\overline{x}_{1_j} \vee x_{1_j}\overline{x}_{2_j} \vee \ldots \vee x_{1_j} \cdot \ldots \cdot x_{(n-1)_j} \cdot \overline{x}_{n'_j})_s. \tag{14}$$

In this case, the formula $\bigvee_{i=1}^{n_j} \overline{x}_{i_j} = \overline{x}_{1_j} \vee x_{1_j}\overline{x}_{2_j} \vee \ldots \vee x_{1_j} x_{2_j} \cdot \ldots \cdot \overline{x}_{n_j}$ from [10] is used to describe the orthogonalizing difference-building in a mathematically easier way, where $x_{1_j}, x_{2_j}, \ldots, x_{n_j}$ are the literals of the subtrahend conjunction cs(x̲).

This method is explained by the following example and description. Additionally, this example is also illustrated in a K-map (Figure 1).

Fig. 1

Example 4.2 in a K-map

#### Example 4.2

A subtrahend (cs(x̲) = x1x2x4) is subtracted from a minuend (cm(x̲) = x3). The result consists of several conjunctions (1st cube, 2nd cube, 3rd cube) which are orthogonal to each other and cover all of the remaining 1s.

$$\underbrace{x_3}_{\text{Minuend}} \ominus \underbrace{x_1x_2x_4}_{\text{Subtrahend}} = \underbrace{\underbrace{\overline{x}_1x_3}_{\text{1st cube}} \vee \underbrace{x_1\overline{x}_2x_3}_{\text{2nd cube}} \vee \underbrace{x_1x_2x_3\overline{x}_4}_{\text{3rd cube}}}_{\text{orthogonal difference}}.$$

• The first literal of the subtrahend, here x1, is complemented and builds the intersection with the minuend, here x3. Consequently, the first conjunction of the difference is x̄1x3.

• Then the second literal, here x2, is complemented, and it forms the intersection with the minuend together with the direct first literal x1 of the subtrahend. Therefore, the second conjunction is x1x̄2x3.

• The next literal, here x4, is complemented, and it forms the intersection with the minuend together with the direct first literal x1 and second literal x2 of the subtrahend. Thus, the third term of the difference is x1x2x3x̄4.

• This process is continued until every literal of the subtrahend has been complemented exactly once and linked, in a separate conjunction, by building the intersection with the minuend.
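The bullet steps above can be sketched in code. This is an illustrative implementation of Eq. (14), not the author's program; it assumes the dict encoding of a conjunction (variable name → polarity, True = direct, False = negated) and returns the orthogonal difference as a list of cubes:

```python
def orthogonal(c1, c2):
    # Eq. (5): the two conjunctions clash on at least one variable.
    return any(v in c2 and c1[v] != c2[v] for v in c1)

def ortho_difference(cm, cs):
    """c_m ⊖ c_s (Eq. 14): for each subtrahend literal, AND the minuend with
    the already-handled subtrahend literals (direct) and the current literal
    complemented; contradictory terms vanish."""
    if orthogonal(cm, cs):          # c_m ∧ c_s = 0: nothing to remove
        return [dict(cm)]
    result, prefix = [], []
    for v, p in cs.items():
        term, consistent = dict(cm), True
        for pv, pp in prefix:       # earlier subtrahend literals, kept direct
            if term.get(pv, pp) != pp:
                consistent = False  # contradicts the minuend: term is 0
                break
            term[pv] = pp
        if consistent and term.get(v, not p) == (not p):
            term[v] = not p         # current subtrahend literal, complemented
            result.append(term)
        prefix.append((v, p))
    return result

# Example 4.2: x3 ⊖ x1x2x4 = x̄1x3 ∨ x1x̄2x3 ∨ x1x2x3x̄4
print(ortho_difference({"x3": True}, {"x1": True, "x2": True, "x4": True}))
# [{'x3': True, 'x1': False},
#  {'x3': True, 'x1': True, 'x2': False},
#  {'x3': True, 'x1': True, 'x2': True, 'x4': False}]
```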

## 5 Orthogonalizing ORing

By a further novel technique ‘orthogonalizing ORing ◯∨’, which is based on the orthogonalizing difference-building ⊖, the union of two conjunctions ci,j(x̲) can be calculated such that the result is orthogonal.

#### Definition 5.1

(Orthogonalizing ORing). The intersection of a conjunction cs1(x̲), called the first summand, and a second conjunction cs2(x̲), called the second summand, is removed by the method ⊖ from the first summand cs1(x̲), and the second summand cs2(x̲) is linked to that difference by a disjunction; the result is orthogonal. This procedure is labeled orthogonalizing ORing and is defined with n, n′ ∈ ℕ as:

$$\begin{aligned} c_{s1}(\underline{x}) \mathbin{◯\!\vee} c_{s2}(\underline{x}) = \bigwedge_{s1=1}^{n} x_{s1} \mathbin{◯\!\vee} \bigwedge_{s2=1}^{n'} x_{s2} &:= \left[\bigwedge_{s1=1}^{n} x_{s1} \ominus \bigwedge_{s2=1}^{n'} x_{s2}\right] \vee \bigwedge_{s2=1}^{n'} x_{s2} \\ &= \left[\bigwedge_{s1=1}^{n} x_{s1} \wedge \bigvee_{s2=1}^{n'_j} \overline{x}_{s2_j}\right] \vee \bigwedge_{s2=1}^{n'} x_{s2} \\ &= \left[(x_1 \cdot \ldots \cdot x_n)_{s1} \wedge (\overline{x}_{1_j} \vee \ldots \vee x_{1_j} \cdot \ldots \cdot \overline{x}_{n'_j})_{s2_j}\right] \vee (x_1 \cdot \ldots \cdot x_{n'})_{s2}. \end{aligned} \tag{15}$$

This method of orthogonalizing ORing is explained by the Example 5.2 which is also illustrated in a K-map (Fig. 2).

Fig. 2

Example 5.2 in a K-map

#### Example 5.2

The orthogonalizing ORing of the two conjunctions cs1(x̲) = x̄2 and cs2(x̲) = x1x3 is built by using Eq. (15).

$$\underbrace{\overline{x}_2}_{\text{1st sum.}} \mathbin{◯\!\vee} \underbrace{x_1x_3}_{\text{2nd sum.}} = \underbrace{\underbrace{\overline{x}_1\overline{x}_2}_{\text{1st cube}} \vee \underbrace{x_1\overline{x}_2\overline{x}_3}_{\text{2nd cube}} \vee \underbrace{x_1x_3}_{\text{2nd sum.}}}_{\text{disjointed conjunctions}}.$$

The result consists of several conjunctions or cubes which are pairwise disjointed (Fig. 2). The following points explain this novel technique:

• The first literal of the second summand cs2(x̲), here x1, is complemented and ANDed to the first summand cs1(x̲). Consequently, the first conjunction of the orthogonal result arises, x̄1x̄2.

• Then the second literal of the second summand cs2(x̲), here x3, is complemented and ANDed, together with the direct first literal of the second summand cs2(x̲), to the first summand cs1(x̲). Therefore, the second conjunction of the orthogonal result is developed, x1x̄2x̄3.

• This process is continued until every literal of the second summand cs2(x̲) has been complemented exactly once and linked by ANDing to the first summand cs1(x̲) in a separate term.

• At last the second summand cs2(x̲) is added to the conjunctions calculated so far.
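Eq. (15) can likewise be sketched on top of the difference-building routine. The code below is an illustrative reimplementation under the assumed dict encoding (variable name → polarity), not the author's software; it reproduces Example 5.2:

```python
def orthogonal(c1, c2):
    # Eq. (5): the two conjunctions clash on at least one variable.
    return any(v in c2 and c1[v] != c2[v] for v in c1)

def ortho_difference(cm, cs):
    # Orthogonalizing difference-building ⊖ (Eq. 14); see Section 4.
    if orthogonal(cm, cs):
        return [dict(cm)]
    result, prefix = [], []
    for v, p in cs.items():
        term, consistent = dict(cm), True
        for pv, pp in prefix:
            if term.get(pv, pp) != pp:
                consistent = False
                break
            term[pv] = pp
        if consistent and term.get(v, not p) == (not p):
            term[v] = not p
            result.append(term)
        prefix.append((v, p))
    return result

def ortho_or(c1, c2):
    """Orthogonalizing ORing ◯∨ (Eq. 15): (c1 ⊖ c2) ∨ c2."""
    return ortho_difference(c1, c2) + [dict(c2)]

# Example 5.2: x̄2 ◯∨ x1x3 = x̄1x̄2 ∨ x1x̄2x̄3 ∨ x1x3
print(ortho_or({"x2": False}, {"x1": True, "x3": True}))
# [{'x2': False, 'x1': False},
#  {'x2': False, 'x1': True, 'x3': False},
#  {'x1': True, 'x3': True}]
```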

By swapping the position of the summands, the result changes as shown in the following:

$$x_1x_3 \mathbin{◯\!\vee} \overline{x}_2 = x_1x_2x_3 \vee \overline{x}_2.$$

However, both solutions are equivalent because the same 1s are covered; they only differ in the form of their coverage. In order to obtain an orthogonal result with fewer conjunctions, the conjunction with more literals should be taken as the first summand. That is possible because orthogonalizing ORing has the property of commutativity. The general validity of Eq. (15) is shown by mathematical induction in Proof 1.

The number of conjunctions in the result, called nx, corresponds to the number of literals that occur in the second summand cs2(x̲) but not in the first summand cs1(x̲); in addition, 1 is added to nx for the second summand cs2(x̲) as the last linked term. It applies:

$$n_x + 1 \quad \text{with} \quad n_x \in \mathbb{N}. \tag{16}$$

Furthermore, the number of possible results, which primarily depends on nx, can be computed by:

$$n_x! \quad \text{for} \quad n_x > 0. \tag{17}$$

Depending on the starting literal the result may differ. There are many equivalent options which only differ in the form of their coverage. This novel technique is the composition of two calculation procedures - the union ‘∨’ and the subsequent orthogonalization. The result of orthogonalizing ORing is orthogonal, in contrast to the result of the usual method of union ∨. Both results differ in their representations but cover the same 1s. Hereinafter, the proof of this equivalence between orthogonalizing ORing and union is exemplified. The left side denotes the orthogonalizing ORing of two sets S1, S2 and the right side the union of the same sets. Due to the axiom of absorption, the equivalence between orthogonalizing ORing and ORing is verified: S1S̄2 ∨ S2 is the orthogonal form of S1 ∨ S2; both are equivalent but differ in their form of coverage.

$$\begin{aligned} S_1 \mathbin{◯\!\vee} S_2 &= S_1 \vee S_2 \\ S_1\overline{S}_2 \vee S_2 &= S_1 \vee S_2. \end{aligned} \tag{18}$$

#### Proof. 1

General validity of Eq. (15):

∀ (n, n′) ∈ ℕ ∖ {1},  (n, n′) ≥ n0 applies A(n, n′):

if A(n0) ∧ (∀ (n, n′) ∈ ℕ, (n, n′) ≥ n0: A(n, n′) → A((n, n′)+1)) ⇒ ∀ (n, n′) ∈ ℕ, (n, n′) ≥ n0: A(n, n′)

Basis A(n0): n0 = n0′ = 2

$$\left[\bigwedge_{s1=1}^{2} x_{s1} \wedge \bigvee_{s2=1}^{2_j} \overline{x}_{s2_j}\right] \vee \bigwedge_{s2=1}^{2} x_{s2} = \left[(x_1x_2)_{s1} \wedge (\overline{x}_{1_j} \vee x_{1_j}\overline{x}_{2_j})_{s2_j}\right] \vee (x_1x_2)_{s2}$$

Inductive step A(n, n′) → A(n + 1, n′ + 1): nn + 1 and n′ → n′ + 1

$$\begin{aligned} \left[\bigwedge_{s1=1}^{n+1} x_{s1} \wedge \bigvee_{s2=1}^{(n'+1)_j} \overline{x}_{s2_j}\right] \vee \bigwedge_{s2=1}^{n'+1} x_{s2} &= \left[(x_1 \cdot \ldots \cdot x_n x_{n+1})_{s1} \wedge (\overline{x}_{1_j} \vee \ldots \vee x_{1_j} \cdot \ldots \cdot \overline{x}_{n'_j} \vee x_{1_j} \cdot \ldots \cdot x_{n'_j}\overline{x}_{(n'+1)_j})_{s2_j}\right] \vee (x_1 \cdot \ldots \cdot x_{n'} x_{n'+1})_{s2} \\ \left[\left(\bigwedge_{s1=1}^{n} x_{s1} \cdot x_{n+1}\right) \wedge \left(\bigvee_{s2=1}^{n'_j} \overline{x}_{s2_j} \vee x_{1_j} \cdot \ldots \cdot x_{n'_j}\overline{x}_{(n'+1)_j}\right)\right] \vee \bigwedge_{s2=1}^{n'} x_{s2} \cdot x_{n'+1} &= \left[\left(\bigwedge_{s1=1}^{n} x_{s1} \cdot x_{n+1}\right) \wedge \left(\bigvee_{s2=1}^{n'_j} \overline{x}_{s2_j} \vee x_{1_j} \cdot \ldots \cdot x_{n'_j}\overline{x}_{(n'+1)_j}\right)\right] \vee \bigwedge_{s2=1}^{n'} x_{s2} \cdot x_{n'+1} \end{aligned}$$ □

## 5.1 Orthogonalizing ORing between a DF and a conjunction

The orthogonalizing ORing of an orthogonal DF f(x)orth and a conjunction cs2(x) can be reached by Eq. (19). With ls1 ∈ ℕ+ as the number of conjunctions in the given function the following applies:

$$\begin{aligned} f_{s1}(\underline{x})_{orth} \mathbin{◯\!\vee} c_{s2}(\underline{x}) = \bigvee_{i=1}^{l_{s1}} \bigwedge_{s1=1}^{n} x_{i,s1} \mathbin{◯\!\vee} \bigwedge_{s2=1}^{n'} x_{s2} &:= \bigvee_{i=1}^{l_{s1}} \left[\bigwedge_{s1=1}^{n} x_{i,s1} \ominus \bigwedge_{s2=1}^{n'} x_{s2}\right] \vee \bigwedge_{s2=1}^{n'} x_{s2} \\ &= \bigvee_{i=1}^{l_{s1}} \left[\bigwedge_{s1=1}^{n} x_{i,s1} \wedge \bigvee_{s2=1}^{n'_j} \overline{x}_{s2_j}\right] \vee \bigwedge_{s2=1}^{n'} x_{s2}. \end{aligned} \tag{19}$$

Since the general validity of orthogonalizing ORing has been given, there is no need to prove the general validity of Eq. (19) separately. The use of (19) is illustrated in Example 5.3.

#### Example 5.3

Let the DF f1(x̲) = x1 ∨ x2x3 and the conjunction c1(x̲) = x̄2x̄3 be given. The orthogonalizing ORing of both has to be calculated.

First the orthogonal form of f1(x̲) has to be calculated, by Eq. (15) or alternatively by Eq. (36):

$$x_1 \mathbin{◯\!\vee} x_2x_3 = x_1\overline{x}_2 \vee x_1x_2\overline{x}_3 \vee x_2x_3.$$

The orthogonal form is generated:

$$f_1(\underline{x})_{orth} = x_1\overline{x}_2 \vee x_1x_2\overline{x}_3 \vee x_2x_3.$$

The next step is the application of Eq. (19). Each conjunction of f1(x̲)orth is combined by orthogonalizing ORing with c1(x̲); the second summand is appended only once, after all combining steps.

$$f_1(\underline{x})_{orth} \mathbin{◯\!\vee} c_1(\underline{x}) = (x_1\overline{x}_2 \vee x_1x_2\overline{x}_3 \vee x_2x_3) \mathbin{◯\!\vee} \overline{x}_2\overline{x}_3 = x_1\overline{x}_2x_3 \vee x_1x_2\overline{x}_3 \vee x_2x_3 \vee \overline{x}_2\overline{x}_3.$$

The K-map on the left side represents the cubes of f1(x̲) and c1(x̲), and the K-map on the right side illustrates the result after the procedure of orthogonalizing ORing (Fig. 3).
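Eq. (19) extends the same routine to an orthogonal DF: every conjunction is ⊖-combined with the second summand, which is then appended once at the end. The sketch below (illustrative, under the assumed dict encoding of conjunctions, not the author's program) reproduces Example 5.3:

```python
def orthogonal(c1, c2):
    # Eq. (5): the two conjunctions clash on at least one variable.
    return any(v in c2 and c1[v] != c2[v] for v in c1)

def ortho_difference(cm, cs):
    # Orthogonalizing difference-building ⊖ (Eq. 14); see Section 4.
    if orthogonal(cm, cs):
        return [dict(cm)]
    result, prefix = [], []
    for v, p in cs.items():
        term, consistent = dict(cm), True
        for pv, pp in prefix:
            if term.get(pv, pp) != pp:
                consistent = False
                break
            term[pv] = pp
        if consistent and term.get(v, not p) == (not p):
            term[v] = not p
            result.append(term)
        prefix.append((v, p))
    return result

def df_ortho_or(df_orth, c):
    """Eq. (19): every conjunction of the orthogonal DF is ⊖-combined with c,
    and the second summand c is appended only once, at the end."""
    out = []
    for ci in df_orth:
        out += ortho_difference(ci, c)
    return out + [dict(c)]

# Example 5.3: (x1x̄2 ∨ x1x2x̄3 ∨ x2x3) ◯∨ x̄2x̄3
f1_orth = [{"x1": True, "x2": False},
           {"x1": True, "x2": True, "x3": False},
           {"x2": True, "x3": True}]
print(df_ortho_or(f1_orth, {"x2": False, "x3": False}))
# x1x̄2x3 ∨ x1x2x̄3 ∨ x2x3 ∨ x̄2x̄3
```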

Fig. 3

Before and after the process of orthogonalizing ORing

## 5.2.1 Postulates

The following postulates are necessary for getting correct results after each operation of orthogonalizing ORing.

• If two conjunctions are already orthogonal to each other (cs1(x) ⊥ cs2(x)) the result corresponds to the disjunction of both conjunctions:

$$c_{s1}(\underline{x}) \mathbin{◯\!\vee} c_{s2}(\underline{x}) = c_{s1}(\underline{x}) \vee c_{s2}(\underline{x}). \tag{20}$$

• If the first conjunction is a subset of the second conjunction (cs1(x̲) ⊂ cs2(x̲)), the second conjunction absorbs the first and it follows:

$$c_{s1}(\underline{x}) \mathbin{◯\!\vee} c_{s2}(\underline{x}) = c_{s2}(\underline{x}). \tag{21}$$

• In the reverse case (cs2(x̲) ⊂ cs1(x̲)) it follows:

$$c_{s1}(\underline{x}) \mathbin{◯\!\vee} c_{s2}(\underline{x}) = c_{s1}(\underline{x}). \tag{22}$$

## 5.2.2 Axioms for variables

The following rules apply for the linking of variables and constants.

• The orthogonalizing ORing of a variable with the constant 0 results in the variable itself (23), and the orthogonalizing ORing of a variable with the constant 1 results in 1 (24).

$$x_i \mathbin{◯\!\vee} 0 = x_i, \tag{23}$$

$$x_i \mathbin{◯\!\vee} 1 = 1. \tag{24}$$

• Furthermore, the orthogonalizing ORing of a variable with the same variable leads to the variable itself (25), and the orthogonalizing ORing of a variable with its negated form leads to the union of both (26), which finally results in 1.

$$x_i \mathbin{◯\!\vee} x_i = x_i, \tag{25}$$

$$x_i \mathbin{◯\!\vee} \overline{x}_i = x_i \vee \overline{x}_i\ (= 1). \tag{26}$$

## 5.2.3 Axioms for conjunctions

The following axioms for conjunctions are deduced from the axioms for variables.

• The neutral element of orthogonalizing ORing is 0:

$$c_i(\underline{x}) \mathbin{◯\!\vee} 0 = c_i(\underline{x}). \tag{27}$$

• The result of orthogonalizing ORing between 0 and a conjunction ci(x̲) is this conjunction ci(x̲) itself:

$$0 \mathbin{◯\!\vee} c_i(\underline{x}) = c_i(\underline{x}). \tag{28}$$

• The orthogonalizing ORing of a conjunction ci(x̲) with the unit term 1 leads to 1:

$$c_i(\underline{x}) \mathbin{◯\!\vee} 1 = 1. \tag{29}$$

• The result of orthogonalizing ORing between 1 and a conjunction ci(x̲) is 1:

$$1 \mathbin{◯\!\vee} c_i(\underline{x}) = 1. \tag{30}$$

• The orthogonalizing ORing of a conjunction ci(x̲) with itself results in this conjunction itself:

$$c_i(\underline{x}) \mathbin{◯\!\vee} c_i(\underline{x}) = c_i(\underline{x}). \tag{31}$$

• The orthogonalizing ORing of a conjunction ci(x̲) with its complement $\overline{c_i(\underline{x})}$ results in the unit term 1:

$$c_i(\underline{x}) \mathbin{◯\!\vee} \overline{c_i(\underline{x})} = 1. \tag{32}$$

## 5.2.4 Commutativity

Commutativity is the property of an operation which allows interchanging the operands without changing the value of the expression. As the novel method of orthogonalizing ORing is commutative, the order of its operands can be changed.

$$c_1(\underline{x}) \mathbin{◯\!\vee} c_2(\underline{x}) = c_2(\underline{x}) \mathbin{◯\!\vee} c_1(\underline{x}). \tag{33}$$

The values of both sides are equivalent and orthogonal. They can only differ in the form of coverage. The following Example 5.4 gives an overview of the commutative property.

#### Example 5.4

$$\begin{aligned} x_1 \mathbin{◯\!\vee} x_2\overline{x}_3 &= x_2\overline{x}_3 \mathbin{◯\!\vee} x_1 \\ x_1\overline{x}_2 \vee x_1x_2x_3 \vee x_2\overline{x}_3 &= \overline{x}_1x_2\overline{x}_3 \vee x_1. \end{aligned}$$

The left side differs from the right side only in the form of coverage, as shown in the corresponding K-maps (Fig. 4). Both sides consist of disjointed cubes.

Fig. 4

Left and right side of Ex. 5.4

## 5.2.5 Associativity

Associativity is the property of an operation which allows rearranging the parentheses without changing the value of the expression. As orthogonalizing ORing is associative, the position of the parentheses can be changed.

$$(c_1(\underline{x}) \mathbin{◯\!\vee} c_2(\underline{x})) \mathbin{◯\!\vee} c_3(\underline{x}) = c_1(\underline{x}) \mathbin{◯\!\vee} (c_2(\underline{x}) \mathbin{◯\!\vee} c_3(\underline{x})). \tag{34}$$

The values of both sides are equivalent and orthogonal. Only the form of their coverage can be different. The following Example 5.5 illustrates the characteristic of associativity.

#### Example 5.5

$$\begin{aligned} (x_1 \mathbin{◯\!\vee} x_2\overline{x}_3) \mathbin{◯\!\vee} \overline{x}_1x_2 &= x_1 \mathbin{◯\!\vee} (x_2\overline{x}_3 \mathbin{◯\!\vee} \overline{x}_1x_2) \\ (x_1\overline{x}_2 \vee x_1x_2x_3 \vee x_2\overline{x}_3) \mathbin{◯\!\vee} \overline{x}_1x_2 &= x_1 \mathbin{◯\!\vee} (x_1x_2\overline{x}_3 \vee \overline{x}_1x_2) \\ x_1\overline{x}_2 \vee x_1x_2x_3 \vee x_1x_2\overline{x}_3 \vee \overline{x}_1x_2 &= x_1 \vee x_1 \vee \overline{x}_1x_2 \\ x_1\overline{x}_2 \vee x_1x_2x_3 \vee x_1x_2\overline{x}_3 \vee \overline{x}_1x_2 &= x_1 \vee \overline{x}_1x_2. \end{aligned}$$

Both sides are equivalent and orthogonal, as shown in the K-maps in Figure 5. They only differ in their form of coverage.

Fig. 5

Left and right side of Ex. 5.5

## 5.2.6 Distributivity

The distributive property of an operation allows factoring out a common term. Hereby, the orthogonality of both sides has to be maintained. In this case, the distributive law of ANDing out applies for the left and the right side.

$$c_1(\underline{x}) \cdot (c_2(\underline{x}) \mathbin{◯\!\vee} c_3(\underline{x})) = (c_1(\underline{x}) \cdot c_2(\underline{x})) \mathbin{◯\!\vee} (c_1(\underline{x}) \cdot c_3(\underline{x})). \tag{35}$$

The validity of the distributive property is given by the following proof:

$$\begin{aligned} c_1(\underline{x}) \cdot (c_2(\underline{x})\overline{c_3(\underline{x})} \vee c_3(\underline{x})) &= c_1(\underline{x}) \cdot c_2(\underline{x}) \cdot \overline{c_3(\underline{x})} \vee c_1(\underline{x}) \cdot c_3(\underline{x}) \\ c_1(\underline{x}) \cdot c_2(\underline{x}) \cdot \overline{c_3(\underline{x})} \vee c_1(\underline{x}) \cdot c_3(\underline{x}) &= c_1(\underline{x}) \cdot c_2(\underline{x}) \cdot \overline{c_3(\underline{x})} \vee c_1(\underline{x}) \cdot c_3(\underline{x}). \end{aligned}$$

Both sides are equivalent and orthogonal. They can only differ in the form of their coverage. This characteristic of distributivity is demonstrated by the following Example 5.6, whereby both sides result in the same term.

#### Example 5.6

$$\begin{aligned} x_1 \cdot (x_2\overline{x}_3 \mathbin{◯\!\vee} x_1x_2) &= (x_1 \cdot x_2\overline{x}_3) \mathbin{◯\!\vee} (x_1 \cdot x_1x_2) \\ x_1 \cdot (\overline{x}_1x_2\overline{x}_3 \vee x_1x_2) &= x_1x_2\overline{x}_3 \mathbin{◯\!\vee} x_1x_2 \\ x_1x_2 &= x_1x_2. \end{aligned}$$
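The properties above can be spot-checked by comparing the covered 1s of both sides over all variable assignments. The sketch below (illustrative, under the assumed dict encoding of conjunctions, not the author's program) verifies commutativity on Example 5.4 and checks that each side consists of pairwise orthogonal cubes:

```python
from itertools import combinations, product

def orthogonal(c1, c2):
    # Eq. (5): the two conjunctions clash on at least one variable.
    return any(v in c2 and c1[v] != c2[v] for v in c1)

def ortho_difference(cm, cs):
    # Orthogonalizing difference-building ⊖ (Eq. 14); see Section 4.
    if orthogonal(cm, cs):
        return [dict(cm)]
    result, prefix = [], []
    for v, p in cs.items():
        term, consistent = dict(cm), True
        for pv, pp in prefix:
            if term.get(pv, pp) != pp:
                consistent = False
                break
            term[pv] = pp
        if consistent and term.get(v, not p) == (not p):
            term[v] = not p
            result.append(term)
        prefix.append((v, p))
    return result

def ortho_or(c1, c2):
    # Orthogonalizing ORing ◯∨ (Eq. 15): (c1 ⊖ c2) ∨ c2.
    return ortho_difference(c1, c2) + [dict(c2)]

def covers(df, assignment):
    """Some conjunction of the DF evaluates to 1 under the assignment."""
    return any(all(assignment[v] == p for v, p in c.items()) for c in df)

def same_cover(df_a, df_b, variables):
    """Both DFs cover exactly the same 1s over all 2^n assignments."""
    return all(covers(df_a, dict(zip(variables, bits))) ==
               covers(df_b, dict(zip(variables, bits)))
               for bits in product((False, True), repeat=len(variables)))

# Example 5.4 (commutativity): x1 ◯∨ x2x̄3 vs. x2x̄3 ◯∨ x1
lhs = ortho_or({"x1": True}, {"x2": True, "x3": False})
rhs = ortho_or({"x2": True, "x3": False}, {"x1": True})
assert same_cover(lhs, rhs, ["x1", "x2", "x3"])                 # same 1s covered
assert all(orthogonal(a, b) for a, b in combinations(lhs, 2))   # pairwise disjoint
assert all(orthogonal(a, b) for a, b in combinations(rhs, 2))
print(len(lhs), len(rhs))  # different forms of coverage: 3 vs. 2 cubes
```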

## 6 Disjointed sum of products

Based on the technique of orthogonalizing ORing, a novel equation (36) is formed which enables the orthogonalization of every disjunctive form DF. That means, this formula enables the transformation of a SOP into a dSOP. With m ∈ ℕ as the number of conjunctions ci(x̲) included in the given DF which has to be orthogonalized, it follows that:

$$f_{DF}(\underline{x})_{orth} = \mathop{◯\!\!\bigvee}_{i=1}^{m} c_i(\underline{x}) = c_1(\underline{x}) \mathbin{◯\!\vee} c_2(\underline{x}) \mathbin{◯\!\vee} \cdots \mathbin{◯\!\vee} c_{m-1}(\underline{x}) \mathbin{◯\!\vee} c_m(\underline{x}). \tag{36}$$

The explanation for Equation (36) is provided as follows:

• Orthogonalizing ORing is realized between the first and the second conjunction, c1(x̲) ◯∨ c2(x̲), by Eq. (15). After that, orthogonalizing ORing is calculated between the result (c1(x̲) ◯∨ c2(x̲)) and the third conjunction, (c1(x̲) ◯∨ c2(x̲)) ◯∨ c3(x̲), by Eq. (19).

• This procedure is continued until the last conjunction cm(x).

The general validity of (36) is proven by the following mathematical induction:

#### Proof. 2

• ∀ (m) ∈ ℕ,   (m) ≥ m0 applies A(m):

$$\mathop{◯\!\!\bigvee}_{i=1}^{m} c_i(\underline{x}) = c_1(\underline{x}) \mathbin{◯\!\vee} c_2(\underline{x}) \mathbin{◯\!\vee} \cdots \mathbin{◯\!\vee} c_m(\underline{x}).$$

• if A(m0) ∧ (∀ (m) ∈ ℕ, mm0: A(m) → A(m + 1)) ⇒ ∀ (m) ∈ ℕ, mm0: A(m)

• Basis A(m0): m0 = 2

$$\begin{aligned} \mathop{◯\!\!\bigvee}_{i=1}^{2} c_i(\underline{x}) &= c_1(\underline{x}) \mathbin{◯\!\vee} c_2(\underline{x}) \\ c_1(\underline{x}) \mathbin{◯\!\vee} c_2(\underline{x}) &= c_1(\underline{x}) \mathbin{◯\!\vee} c_2(\underline{x}). \end{aligned}$$

• Inductive step A(m) → A(m + 1): mm + 1

$$\begin{aligned} \mathop{◯\!\!\bigvee}_{i=1}^{m+1} c_i(\underline{x}) &= c_1(\underline{x}) \mathbin{◯\!\vee} c_2(\underline{x}) \mathbin{◯\!\vee} \cdots \mathbin{◯\!\vee} c_m(\underline{x}) \mathbin{◯\!\vee} c_{m+1}(\underline{x}) \\ \mathop{◯\!\!\bigvee}_{i=1}^{m} c_i(\underline{x}) \mathbin{◯\!\vee} c_{m+1}(\underline{x}) &= \underbrace{c_1(\underline{x}) \mathbin{◯\!\vee} c_2(\underline{x}) \mathbin{◯\!\vee} \cdots \mathbin{◯\!\vee} c_m(\underline{x})}_{\mathop{◯\!\!\bigvee}_{i=1}^{m} c_i(\underline{x})} \mathbin{◯\!\vee} c_{m+1}(\underline{x}). \end{aligned}$$

□

The orthogonal result may differ depending on the order of the conjunctions because of the property of commutativity. Thus, the conjunctions can be reordered if necessary. However, all solutions are equivalent and orthogonal. Example 6.1 gives an overview of the use of Eq. (36).

#### Example 6.1

The function f2(x̲) = x3 ∨ x1 ∨ x2x̄3 has to be orthogonalized by Eq. (36).

$$f_2(\underline{x})_{orth} = x_3 \mathbin{◯\!\vee} x_1 \mathbin{◯\!\vee} x_2\overline{x}_3 = (\overline{x}_1x_3 \vee x_1) \mathbin{◯\!\vee} x_2\overline{x}_3 = \overline{x}_1x_3 \vee x_1\overline{x}_2 \vee x_1x_2x_3 \vee x_2\overline{x}_3.$$

Function f2(x)orth is the orthogonal form of function f2(x), illustrated in the K-maps (Fig. 6).

Figure 6

f2(x̲), f2(x̲)orth and $f_2(\underline{x})_{sort}^{orth}$ in K-maps

By rearranging the order of the conjunctions of a DF, we obtain a smaller number of conjunctions in the derived orthogonal form. This sorting is carried out from large to small, i.e. from conjunctions with a higher number of variables to conjunctions with a lower number of variables. The following example clarifies this advantage of sorting.

#### Example 6.2

The function f2(x̲) = x3 ∨ x1 ∨ x2x̄3 has to be orthogonalized after resorting.

$$f_2(\underline{x})_{sort}^{orth} = x_2\overline{x}_3 \mathbin{◯\!\vee} x_3 \mathbin{◯\!\vee} x_1 = (x_2\overline{x}_3 \vee x_3) \mathbin{◯\!\vee} x_1 = \overline{x}_1x_2\overline{x}_3 \vee \overline{x}_1x_3 \vee x_1.$$

The orthogonalized form of the sorted DF contains fewer conjunctions (see Fig. 6).
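Eq. (36) and the sorting heuristic can be sketched as a fold of ◯∨ over the list of conjunctions. The code below (illustrative, under the assumed dict encoding of conjunctions, not the author's program) reproduces Examples 6.1 and 6.2, where sorting by descending literal count yields 3 instead of 4 cubes:

```python
def orthogonal(c1, c2):
    # Eq. (5): the two conjunctions clash on at least one variable.
    return any(v in c2 and c1[v] != c2[v] for v in c1)

def ortho_difference(cm, cs):
    # Orthogonalizing difference-building ⊖ (Eq. 14); see Section 4.
    if orthogonal(cm, cs):
        return [dict(cm)]
    result, prefix = [], []
    for v, p in cs.items():
        term, consistent = dict(cm), True
        for pv, pp in prefix:
            if term.get(pv, pp) != pp:
                consistent = False
                break
            term[pv] = pp
        if consistent and term.get(v, not p) == (not p):
            term[v] = not p
            result.append(term)
        prefix.append((v, p))
    return result

def dsop(conjunctions):
    """Eq. (36): fold ◯∨ over the conjunctions of a DF, applying Eq. (19)
    at every step (each cube ⊖ the new conjunction, then append it once)."""
    result = [dict(conjunctions[0])]
    for c in conjunctions[1:]:
        out = []
        for ci in result:
            out += ortho_difference(ci, c)
        result = out + [dict(c)]
    return result

# Example 6.1: f2 = x3 ∨ x1 ∨ x2x̄3 → x̄1x3 ∨ x1x̄2 ∨ x1x2x3 ∨ x2x̄3 (4 cubes)
f2 = [{"x3": True}, {"x1": True}, {"x2": True, "x3": False}]
print(dsop(f2))

# Example 6.2: sorting large-to-small first → x̄1x2x̄3 ∨ x̄1x3 ∨ x1 (3 cubes)
print(dsop(sorted(f2, key=len, reverse=True)))
```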

The analysis of a measurement, as shown in Fig. 7, gives an overview of the orthogonalization process depending on sorting. The orthogonalization of unsorted DFs, called ORTH[∨], and of the sorted versions of the same DFs, called sortORTH[∨], are compared. Thereby, 50 different functions with five conjunctions were orthogonalized by the novel technique for each tuple length (number of variables) running from 1 to 50. Out of these 50 calculations per number of variables, the numbers of conjunctions were determined, from which an average value was calculated for each tuple length. These averages were subsequently plotted in the diagram to show the deviation of sortORTH[∨] from ORTH[∨]. The comparison shows a reduction of conjunctions in the orthogonal form when the DF was sorted beforehand; this reduction is approximately 19% on average (see Table 2). Additionally, this feature reduces the number of operations in subsequent calculation steps. Hereby, the relation in (37) is confirmed by the comparison in Fig. 7. The number of conjunctions $P^{s}_{c(\underline{x})}\left(f(\underline{x})_{sort}^{orth}\right)$ of an orthogonalized DF which was sorted beforehand (sortORTH[∨]) is smaller than or equal to the number of conjunctions $P_{c(\underline{x})}\left(f(\underline{x})_{orth}\right)$ of the orthogonalized DF which was not sorted (ORTH[∨]):

$$P^{s}_{c(\underline{x})}\left(f(\underline{x})_{sort}^{orth}\right) \leq P_{c(\underline{x})}\left(f(\underline{x})_{orth}\right). \tag{37}$$

Fig. 7

Average number of conjunctions in the orthogonal result

Table 2

Average number of conjunctions in the orthogonal result

## 7 Conclusion

This work presented a novel technique for building the union of two conjunctions such that the result consists of disjointed conjunctions; it is called orthogonalizing ORing and its results are orthogonal. This linking technique replaces two calculation steps - building the union and the subsequent orthogonalization - by one step. Orthogonalizing ORing is generally valid, which was proven by mathematical induction, and is equivalent to the usual method of union ∨. Additionally, postulates and axioms for this method as well as its properties of commutativity, associativity and distributivity were defined. Furthermore, every Boolean function in disjunctive form, i.e. every Sum of Products, can easily be orthogonalized mathematically by a novel equation based on this linking technique. By this orthogonalization a disjointed Sum of Products is reached in a simpler way; its general validity was also proven by mathematical induction. An additional sorting step before orthogonalization reduces the number of conjunctions in the orthogonal result by approximately 19%. This feature was illustrated by a measurement in which the orthogonalization took place with and without prior sorting.

## References

• [1] Bochmann, D., Binäre systeme - ein boolean buch, LiLoLe-Verlag, Hagen, Germany, 2006.

• [2] Bochmann, D. and Posthoff, Ch., Binäre dynamische systeme, Akademie-Verlag, Berlin, DDR, 1981.

• [3] Bochmann, D., Zakrevskij, A. D., and Posthoff, Ch., Boolesche gleichungen. Theorie - Anwendungen - Algorithmen, VEB Verlag Technik, Berlin, DDR, 1984.

• [4] Bronstein, I.N., Musiol, G., Mühlig, H., and Semendjajew, K.A., Taschenbuch der mathematik, Harri Deutsch Verlag, Frankfurt am Main, Thun, Germany, 1999.

• [5] Bruni, R., On the Orthogonalization of Arbitrary Boolean Formulae, Journal of Applied Mathematics and Decision Sciences, 2005.

• [6] Can, Y., Neue Boolesche Operative Orthogonalisierende Methoden und Gleichungen, FAU University Press, Erlangen, Germany, 2016.

• [7] Can, Y. and Fischer, G., Orthogonalizing Boolean Subtraction of Minterms or Ternary-Vectors, Acta Physica Polonica A, Special Issue of the International Conference on Computational and Experimental Science and Engineering (ICCESEN 2014), 128 (2015), no. 2B, B-388.

• [8] Can, Y. and Fischer, G., Boolean Orthogonalizing Combination Methods, Fifth International Conference on Computational Science, Engineering and Information Technology (CCSEIT 2015), Vienna, Austria, 23-24 May, 2015.

• [9] Can, Y., Kassim, H., and Fischer, G., New Boolean Equation for Orthogonalizing of Disjunctive Normal Form based on the Method of Orthogonalizing Difference-Building, Journal of Electronic Testing: Theory and Application (JETTA), April 2016.

• [10] Crama, Y. and Hammer, P. L., Boolean Functions. Theory, Algorithms, and Applications, Cambridge University Press, New York, USA, 2011.

• [11] Kempe, G., Tupel von TVL als Datenstruktur für Boolesche Funktionen, Dissertation, Freiberg, Germany, 2003.

• [12] Kühnrich, M., Ternärvektorlisten und deren Anwendung auf binäre Schaltnetzwerke, Dissertation, Karl-Marx-Stadt (Chemnitz), DDR, 1979.

• [13] Papula, L., Mathematik für Ingenieure und Naturwissenschaftler, Vieweg + Teubner Verlag | Springer Fachmedien Wiesbaden GmbH, Wiesbaden, Germany, 2011.

• [14] Posthoff, Ch., Bochmann, D., and Haubold, K., Diskrete Mathematik, BSB Teubner, Leipzig, DDR, 1986.

• [15] Posthoff, Ch. and Steinbach, B., Logikentwurf mit XBOOLE. Algorithmen und Programme, Verlag Technik GmbH, Berlin, Germany, 1991.

• [16] Steinbach, B. and Dorotska, Ch., Orthogonal Block Building Using Ordered Lists of Ternary Vectors, Freiberg University of Mining and Technology, Freiberg, Germany, 2000.

• [17] Steinbach, B. and Dorotska, Ch., Orthogonal Block Change & Block Building Using Ordered Lists of Ternary Vectors, Freiberg University of Mining and Technology, Freiberg, Germany, 2002.

• [18] Steinbach, B. and Dorotska, Ch., Orthogonal Block Change & Block Building using a Simulated Annealing Algorithm, Conference on The Experience of Designing and Application of CAD Systems in Microelectronics (CADSM 2003), 2003.

• [19] Steinbach, B. and Posthoff, Ch., An Extended Theory of Boolean Normal Forms, Proceedings of the 6th Annual Hawaii International Conference on Statistics, Mathematics and Related Fields, Honolulu, Hawaii, pp. 1124-1139, 2007.

• [20] Wuttke, H.-D. and Henke, K., Schaltsysteme. Eine automatenorientierte Einführung, Pearson Deutschland GmbH, München, Germany, 2003.

• [21] Zander, H. J., Logischer Entwurf binärer Systeme, Verlag Technik, Berlin, DDR, 1989.

## About the article

Accepted: 2018-02-20

Published Online: 2018-04-26

Citation Information: Open Mathematics, Volume 16, Issue 1, Pages 392–406, ISSN (Online) 2391-5455,
