CC BY 4.0 license · Open Access · Published by De Gruyter, July 3, 2020

Disjoint Sum of Products by Orthogonalizing Difference-Building ⴱ

Yavuz Can, Önder Yaz and Dietmar Fey
From the journal Open Engineering


The orthogonalization of Boolean functions in disjunctive form, i.e. of Boolean functions given as a sum of products, is a classical problem of Boolean algebra. In this work, the novel orthogonalization methodology ORTH[ⴱ], a universally valid formula based on the combination technique »orthogonalizing difference-building ⴱ«, is presented. The technique ⴱ is used to transform a Sum of Products into a disjoint Sum of Products, so that the problem of orthogonalization is solved by a novel formula in a mathematically simpler way. With a further processing step that sorts the product terms, a minimized disjoint Sum of Products can be obtained. Compared to other methods and heuristics, ORTH[ⴱ] provides a faster computation time.

1 Introduction and Preliminaries

A Boolean function of n variables is defined as the mapping f(x): {0, 1}ⁿ → {0, 1}. Four normal forms of Boolean functions exist: the disjunctive normal form (DNF), the conjunctive normal form (CNF), the antivalence normal form (ANF) and the equivalence normal form (ENF). They consist either of product terms pk(x) := ⋀i=1..n xi = x1 ∧ … ∧ xn or of sum terms sk(x) := ⋁i=1..n xi = x1 ∨ … ∨ xn (with n ≥ 1 as the number of variables; the dimension), in which each variable appears either negated x̄i or non-negated xi [1, 2]. The normal form is the canonical representation of the Boolean function, meaning that all given variables are included in each product term or sum term, respectively. The reduced, i.e. non-canonical, representations of terms are called disjunctive, conjunctive, antivalence and equivalence forms (DF, CF, AF and EF). The disjunctive form is also known as the Sum of Products (SOP) and is written with N > 1 as the upper bound of the number of product terms [?].


A is the index set of the running index i. The AF is a special form of Exclusive-Or Sum of Products (ESOP) and is defined as


The orthogonality of a Boolean function is a special attribute. A function is orthogonal if its terms are pairwise disjoint, i.e. any two terms differ in at least one variable. Thus, the following applies for the disjoint Sum of Products (dSOP):


An orthogonal representation of a SOP, i.e. a dSOP, is characterized by product terms which are pairwise disjoint [3, 4]. Consequently, the intersection of any two of these product terms is 0. The orthogonal representation of a DF, the disjoint Sum of Products, is equal to the orthogonal form of an AF, the disjoint Exclusive-Or Sum of Products; in this case dSOP(x) = dESOP(x) holds [3, 4, 5]. That means the dSOP is equivalent to a dESOP consisting of the same product terms; the two differ only in the logical connective between the product terms. This relationship is explained well by the following definition from [6], provided that the two product terms pi(x) and pj(x) are disjoint to each other. A SOP of two product terms can be transformed into an ESOP by:


In the special case that both product terms are disjoint, their conjunction is 0. Since xi ∨ 0 = xi, the following relation follows from Eq. (4):


In this case the left side equals the right side, which means that a dSOP is equivalent to a dESOP: dSOP(x) = dESOP(x).
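The equivalence above can be checked by brute force. The following sketch (our illustration, not code from the paper) represents a product term as a Python dict mapping a variable index to its required value, e.g. `{1: 1, 2: 0}` for x1x̄2; the helper names `eval_term` and `disjoint` are our own:

```python
from itertools import product

def eval_term(term, assignment):
    """1 iff the assignment satisfies every literal of the term."""
    return int(all(assignment[v] == val for v, val in term.items()))

def disjoint(p, q):
    """Two terms are disjoint iff they clash in at least one variable."""
    return any(v in q and q[v] != val for v, val in p.items())

# Two pairwise-disjoint terms over x1..x3: x1*!x2 and !x1*x3.
terms = [{1: 1, 2: 0}, {1: 0, 3: 1}]
assert disjoint(*terms)

# For disjoint terms, replacing OR by XOR never changes the function:
for bits in product((0, 1), repeat=3):
    a = {i + 1: b for i, b in enumerate(bits)}
    v_or = max(eval_term(t, a) for t in terms)
    v_xor = sum(eval_term(t, a) for t in terms) % 2
    assert v_or == v_xor  # dSOP(x) = dESOP(x)
```

Because the terms clash in x1, at most one of them evaluates to 1 under any assignment, so OR and XOR coincide on every input.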

In a K-map a dSOP is characterized by non-overlapping cubes (Figure 1). Special calculations can be solved more easily in such a form. It simplifies further calculations in electrical-engineering applications, e.g. the calculation of suitable test patterns for combinational circuits to verify feasible logical faults, which can be determined mathematically by the Boolean Differential Calculus (BDC) [1, 7]. That means the orthogonalization of a SOP facilitates the transformation into an equivalent dESOP [1, 3, 8], and this characteristic simplifies the handling of the BDC, especially in Ternary-Vector-List (TVL) arithmetic [3, 9, 10, 11]. Since each term contains only a restricted number of variables, the terms of a SOP are not a priori disjoint. However, the disjoint form can be calculated with a novel Boolean formula based on the novel combining technique of »orthogonalizing difference-building ⴱ«.

Figure 1 Difference between SOP and dSOP in a K-map


2 Method of Orthogonalization

2.1 Orthogonalizing Difference-Building ⴱ

Orthogonalizing difference-building ⴱ is the composition of two calculation steps: the usual difference-building known from set theory and a subsequent orthogonalization, as shown in Figure 2. In contrast to the result of plain difference-building, the result of ⴱ is orthogonal. The two results differ in their representation but are homogeneous in their coverage of 1s; they differ only in their form of coverage, whereas ⴱ delivers the solution already in orthogonal form. The method ⴱ is generally valid and equivalent to the usual method of difference-building [3]. The orthogonalizing difference-building ⴱ corresponds to removing from the minuend product term pm(x) the intersection formed between the minuend pm(x) and the subtrahend product term ps(x), i.e. pm(x) \ (pm(x) ∧ ps(x)). The result consists of several product terms which are pairwise disjoint. Equation (6) applies with n, n' ∈ ℕ as the dimensions of pm(x) and ps(x). Here the formula ⋁i=1..n x̄i = x̄1 ∨ x1x̄2 ∨ … ∨ x1x2⋯x̄n from [4] is used to describe the orthogonalizing difference-building in a mathematically simpler way. The method of orthogonalizing difference-building ⴱ is demonstrated by the following Example 1.

Figure 2 ⴱ: Two procedures in one step


Example 1: The subtrahend ps(x) = x2x3x4 is subtracted from the minuend pm(x) = x1, yielding a result that consists of pairwise disjoint product terms.


The explanation of Eq. (6) is given by the following points:

  1. The first literal of the subtrahend, here x2, is complemented and intersected with the minuend, here x1. Consequently, the first term of the difference is x1x̄2.

  2. Then the second literal, here x3, is complemented and intersected with the minuend and with the first literal x2 of the subtrahend. Therefore, the second term is x1x2x̄3.

  3. Next, the following literal, here x4, is complemented and intersected with the minuend and with the first and second literals x2 and x3 of the subtrahend. Thus, the third term of the difference is x1x2x3x̄4.

  4. This process continues until every literal of the subtrahend has been complemented exactly once and intersected with the minuend in a separate term.

The number North of product terms in the orthogonal result corresponds to the number of literals that are present in the subtrahend ps(x) but not in the minuend pm(x). The following rules must be observed to obtain correct results when applying ⴱ:

  1. If the subtrahend is already orthogonal to the minuend (ps(x) orthogonal to pm(x)), the result corresponds to the minuend:

  2. The difference between 0 and the subtrahend is the subtrahend itself:

  3. The difference between 1 and the subtrahend is the complement of the subtrahend, which results in a dSOP:

  4. If the subtrahend is a subset of the minuend (ps(x) ⊆ pm(x)), i.e. the subtrahend is already completely contained in the minuend, the result is 0; the subset symbol of set theory is thereby carried over to switching algebra:
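The expansion steps and the rules above can be sketched in a few lines. This is our reading of the operation, not the paper's implementation; terms are Python dicts mapping a variable index to its required value (`{2: 1, 3: 1, 4: 1}` for x2x3x4), and the function name `orth_diff` is our own:

```python
def orth_diff(pm, ps):
    """Orthogonalizing difference-building (sketch): pm AND NOT ps,
    returned as a list of pairwise-disjoint product terms."""
    # Rule 1: subtrahend orthogonal to minuend -> result is the minuend.
    if any(v in pm and pm[v] != val for v, val in ps.items()):
        return [dict(pm)]
    result, acc = [], dict(pm)
    for v, val in ps.items():
        if v in pm:                # literal already forced by the minuend
            continue
        term = dict(acc)
        term[v] = 1 - val          # complement the current literal ...
        result.append(term)
        acc[v] = val               # ... and keep it plain for later terms
    # Empty list <=> ps is a subset of pm (rule 4), i.e. the result is 0.
    return result

# Example 1 from the paper: x1 minus x2*x3*x4 yields the three disjoint
# terms x1*!x2, x1*x2*!x3 and x1*x2*x3*!x4.
print(orth_diff({1: 1}, {2: 1, 3: 1, 4: 1}))
```

With `pm = {}` (the constant 1), the same loop produces the disjoint complement of the subtrahend, matching rule 3.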


2.2 Orthogonalization of SOP

2.2.1 Mathematical Methodology

The orthogonalization of every SOP(x) consisting of at least two product terms (N > 1) can be performed by Eq. (11), which is based on Eq. (6) and thus on the combination technique ⴱ [3]. The order of the calculation is important: the first two product terms are combined first, then the third product term is combined with the result of the first two, and so on. The resulting dSOP(x) can vary depending on the starting product term. Since a SOP is commutative, the order of its product terms can be changed to obtain results with a smaller number of disjoint product terms, denoted North. A better result is often obtained by ordering the product terms from a higher number of variables to a lower number of variables. Example 2 below illustrates the orthogonalization procedure by Eq. (11), followed by Example 3 with an additional sorting step.

Example 2: Function SOP1(x_)=x¯3x1x2x1x3has to be orthogonalized by Eq. (11).


Function dSOP1(x) consists of four disjoint product terms (North = 4) and is the orthogonalized form of SOP1(x). Both are equivalent; they differ only in their form of coverage, as illustrated in the K-maps in Figure 3.

Figure 3 Comparison of SOP1(x), dSOP1(x) and sortdSOP1(x)


Example 3: Now the product terms of SOP1(x) from Example 2 are sorted, giving x1x2 ∨ x1x3 ∨ x̄3, which is orthogonalized by Eq. (11).


Function sortdSOP1(x) is another equivalent orthogonal form of SOP1(x); it consists of two disjoint product terms (North = 2), as illustrated in the third K-map in Figure 3. The coverage of 1s is achieved by two cubes. Thus, by sorting, a minimized dSOP can be reached. The comparison of the three functions shows their equivalence: they are homogeneous and differ only in their form of superimposition.
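The sequential procedure can be sketched by repeatedly subtracting the already-collected disjoint terms from each new product term. This is our reading of the iteration behind Eq. (11), not the paper's code, and the resulting term count may differ from the paper's reported North depending on how the intermediate products are grouped; the dict-based term representation and the names `orth_diff`, `orthogonalize` and `evaluate` are our own:

```python
from itertools import product

def orth_diff(pm, ps):
    """pm AND NOT ps as pairwise-disjoint terms (sketch of ⴱ)."""
    if any(v in pm and pm[v] != val for v, val in ps.items()):
        return [dict(pm)]          # already orthogonal
    out, acc = [], dict(pm)
    for v, val in ps.items():
        if v not in pm:
            t = dict(acc)
            t[v] = 1 - val
            out.append(t)
            acc[v] = val
    return out

def orthogonalize(sop):
    """Subtract all previously collected disjoint terms from each
    successive product term, accumulating a dSOP."""
    dsop = []
    for p in sop:
        pieces = [p]
        for q in dsop:
            pieces = [t for piece in pieces for t in orth_diff(piece, q)]
        dsop.extend(pieces)
    return dsop

def evaluate(sop, a):
    return int(any(all(a[v] == val for v, val in t.items()) for t in sop))

# SOP1 from Example 2: !x3 + x1*x2 + x1*x3
sop1 = [{3: 0}, {1: 1, 2: 1}, {1: 1, 3: 1}]
dsop1 = orthogonalize(sop1)

for bits in product((0, 1), repeat=3):
    a = {i + 1: b for i, b in enumerate(bits)}
    # equivalent coverage, and at most one disjoint term fires:
    assert evaluate(sop1, a) == evaluate(dsop1, a)
    assert sum(all(a[v] == val for v, val in t.items()) for t in dsop1) <= 1
```

The assertions confirm both properties from the text: the dSOP covers exactly the same 1s as the SOP, and its terms never overlap.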

2.2.2 Algorithm

The corresponding algorithm ORTH[ⴱ], whose pseudo code is shown in Table 1, outlines the computational procedure of orthogonalizing a SOP according to the formula in Eq. (11). To obtain a dSOP with a smaller number of product terms, the sub-functions absorb() and sort() are additionally used. absorb() reduces the number of product terms of the SOP that serves as the input of the algorithm. The reduction is achieved by absorbing smaller product terms (those with a higher number of variables) into larger product terms (those with a lower number of variables) whenever the former are already covered by the latter (following example):

Table 1

Pseudo-Code of the Algorithm ORTH[ⴱ]

    for z = 0 to N do
            for i = z + 1 to N do
                    tmpSOP ← ⴱ(tmpSOP, get_p(i))
    return dSOP

The product term x1 absorbs the other two product terms. Additionally, absorb() reduces duplicated product terms to a single term, as demonstrated by the following example:


Consequently, using absorb() decreases the number of product terms that have to be treated. The optional function sort() then resorts the product terms from smaller product terms to larger ones. After these two sub-functions absorb() and sort() have been applied, the orthogonalization ORTH[ⴱ] according to the method ⴱ is performed.
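The two pre-processing helpers can be sketched as follows. This is our illustration of the described behavior, not the paper's implementation; terms are dicts mapping a variable index to its value, and a term with more literals is a "smaller" product term (it covers a smaller cube):

```python
def absorb(sop):
    """Drop duplicates and any term covered by a term with fewer
    literals (absorption: x1 + x1*x2 + x1*!x2*x3 = x1)."""
    kept = []
    for t in sorted(sop, key=len):          # fewer literals first
        # t is covered if some kept term's literals all appear in t:
        covered = any(all(v in t and t[v] == val for v, val in k.items())
                      for k in kept)
        if not covered:
            kept.append(t)
    return kept

def sort_terms(sop):
    """Resort from smaller product terms (more literals) to larger
    ones (fewer literals), as sort() does."""
    return sorted(sop, key=len, reverse=True)

# x1 absorbs both x1*x2 and x1*!x2*x3:
print(absorb([{1: 1}, {1: 1, 2: 1}, {1: 1, 2: 0, 3: 1}]))
```

Note that the duplicate case falls out of the coverage test for free: an identical term is trivially covered by its earlier copy.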

3 Comparison and Measurement

3.1 North before and after Sorting

To make a statement about the optimized form, the optimal minimization would have to be defined, which has not yet been clarified. Table 2 shows the percentage of terms saved by the subsequent sorting procedure, which is advantageous for obtaining a minimized dSOP. First, lists of ten non-orthogonal functions were created for N = {5, 10, 15} and dimensions xn = {5, 6, . . . , 50}; consequently, 50 different non-orthogonal SOPs were produced per N. Subsequently, each SOP was orthogonalized by the method ⴱ both before and after sorting, and the resulting numbers of product terms, North in dSOP and sortNorth in sortdSOP, were determined with respect to N and xn (Figure 4) and compared. The number of product terms of a sorted SOP is smaller than that of the unsorted SOP: sortNorth(N, xn) < North(N, xn). An average of these values was calculated for each dimension xn, yielding the quotients of the average numbers of disjoint product terms, expressed as percentages; the minus sign indicates a reduction in the number of terms. Finally, a total average percentage value per N was determined from these averages (Table 2). Consequently, the additional sorting procedure leads to a dSOP with a smaller number of product terms: a minimization of approximately 17% to 28% is obtained in comparison to a dSOP that has not been sorted.

Figure 4 Average number of North and sortNorth


Table 2

Percentage value of North vs. sortNorth


3.2 Comparison in Number of Terms North

The number of disjoint product terms North in a dSOP produced by ORTH[ⴱ] is analyzed in comparison to the heuristic ORTH[DSOP] in [5], the method ORTH[m1] in [12] and a varied form of ORTH[DSOP] called ORTH[DSOP2], in which the minimization function “espresso.exe” was replaced by absorb(). The comparisons with respect to N = {2, 5, 10, 15, 20, 25} and xn = {1, 2, . . . , 50} are shown in Figure 5. The corresponding average values of North and the ratios to the method ORTH[ⴱ] are given in Table 3. The average value of North for each N is formed from 50 calculated tasks per dimension xn; from these averages an overall average per method is built. The charts in Figures 5a)-f) illustrate that for growing xn and N the method ORTH[ⴱ] yields results with fewer terms North than the methods ORTH[m1] and ORTH[DSOP2]; the number of terms is 1.67 times smaller with ORTH[ⴱ] than with ORTH[m1]. In contrast, the heuristic ORTH[DSOP] provides approximately 50% fewer terms North than the method ORTH[ⴱ], so the relationship North(ORTH[DSOP]) < North(ORTH[ⴱ]) < North(ORTH[m1]) can be deduced. This benefit is probably due to the use of “espresso.exe”, a program used as a heuristic logic minimizer: compared to our function absorb(), “espresso.exe” produces minimized results with fewer terms. This minimization step is important because a further calculation on a dSOP with fewer product terms, such as the Boolean Differential Calculus (BDC), requires fewer operations and calculation steps and is therefore carried out in reduced computation time.

Figure 5 Average number of the North in the dSOP


Table 3

Average of the number of terms North and relation to ORTH[ⴱ]


3.3 Comparison in Computation Time

In this section all four approaches are compared with respect to computation time for N = {2, 5, 10, 15, 20, 25} and xn = {1, 2, . . . , 50}, as shown in Figures 6a)-f). The corresponding average calculation times and the ratios to the method ORTH[ⴱ] are given in Table 4. The computation time of ORTH[ⴱ] is shorter than those of the heuristic ORTH[DSOP], the method ORTH[m1] and the varied form ORTH[DSOP2]. The complexity class of ORTH[ⴱ] totals Θ(n⁵). A distinction is that the novel method ORTH[ⴱ] calculates the orthogonalizing difference-building ⴱ uniformly, no matter whether two product terms are orthogonal or not; since this case distinction takes place in the method ORTH[m1] in [12], its computation time is likely to deteriorate, whereas such unnecessary calculations are not carried out in ORTH[ⴱ]. The use of “espresso.exe” in ORTH[DSOP] slows down the orthogonalization procedure; this is confirmed by replacing that function with absorb(), as shown by the charts of ORTH[DSOP2]: the calculation time becomes shorter than that of ORTH[DSOP], but is still higher than the computation time of the novel method ORTH[ⴱ]. In summary, the new method has a faster computation time than the other approaches: ORTH[ⴱ] is approximately 1000 times faster than ORTH[DSOP], approximately 25 times faster than ORTH[DSOP2] and twice as fast as ORTH[m1]. Even if the two sub-functions absorb() and sort() are excluded, the method ORTH[ⴱ] still provides reduced computation times, as shown in Figures 6a)-f). The measurements are limited to the dimension xn = 50; for dimensions xn > 50, similar results are expected.

Figure 6 Comparison in computation time


Table 4

Average of the computation times and relation to ORTH[ⴱ]


4 Summary and Conclusions

This work introduced the generally valid method of »orthogonalizing difference-building ⴱ«, which is used to calculate the orthogonal difference of two product terms, together with the rules that must be followed to obtain correct results. With a novel formula based on the combining technique ⴱ, every Sum of Products (SOP) can easily be orthogonalized mathematically, yielding a disjoint Sum of Products (dSOP). A minimized dSOP can be reached by the two additional procedures of sorting and absorbing terms before the orthogonalization process. Sorting rearranges the product terms of a SOP from a higher number of variables to a lower number of variables; this resorting brings an advantage of approximately 17% to 26%, depending on N, towards a minimized dSOP. The corresponding algorithm ORTH[ⴱ] was compared to the algorithms ORTH[DSOP], ORTH[DSOP2] and ORTH[m1] in the number of product terms of the calculated dSOP and in computation time. ORTH[DSOP] determines a smaller number of product terms than ORTH[ⴱ]; however, the reduction of product terms by ORTH[ⴱ] is about 50% compared to ORTH[m1]. Furthermore, the novel method ORTH[ⴱ] computes approximately 1000 times faster than ORTH[DSOP] and approximately 25 times faster than ORTH[DSOP2]. The number of terms in the result orthogonalized by ORTH[ⴱ] can probably be reduced further by an additional absorption of the disjoint product terms; for that, a post-processing function for absorption that retains the property of orthogonality could be developed.


[1] D. Bochmann, Binäre Systeme - Ein Boolean Buch. Hagen, Germany: LiLoLe-Verlag, 2006.

[2] B. Steinbach and C. Posthoff, “An extended theory of boolean normal forms,” in Proc. 6th Annual Hawaii International Conference on Statistics, Mathematics and Related Fields, Hawaii, USA, pp. 1124–1139, 2007.

[3] Y. Can, Neue Boolesche Operative Orthogonalisierende Methoden und Gleichungen. Erlangen, Germany: FAU University Press, 1st ed., 2016.

[4] Y. Crama and P. Hammer, Boolean Functions - Theory, Algorithms, and Applications. Cambridge, UK: Cambridge University Press, 2011. doi:10.1017/CBO9780511852008.

[5] A. Bernasconi, V. Ciriani, F. Luccio, and L. Pagli, “New Heuristic for DSOP Minimization,” in Proc. 8th International Workshop on Boolean Problems (IWSBP), Freiberg, Germany, 2008.

[6] H. J. Zander, Logischer Entwurf binärer Systeme. Berlin, DDR: Verlag Technik, 1989.

[7] B. Steinbach, “The Boolean Differential Calculus – Introduction and Examples,” in Proc. Reed-Muller Workshop 2009, Naha, Okinawa, Japan, pp. 107–117, 2009.

[8] Y. Can, H. Kassim, and G. Fischer, “New Boolean Equation for Orthogonalizing of Disjunctive Normal Form based on the Method of Orthogonalizing Difference-Building,” Journal of Electronic Testing: Theory and Application (JETTA), vol. 32, no. 2, pp. 197–208, 2016. doi:10.1007/s10836-016-5572-6.

[9] Y. Can, H. Kassim, and G. Fischer, “Orthogonalization of DNF in TVL-Arithmetic,” in Proc. 12th International Workshop on Boolean Problems (IWSBP), Freiberg, Germany, 2016.

[10] C. Dorotska and B. Steinbach, “Orthogonal Block Change & Block Building Using Ordered Lists of Ternary Vectors,” in Proc. 5th International Workshop on Boolean Problems (IWSBP), Freiberg, Germany, pp. 91–102, 2002.

[11] C. Posthoff and B. Steinbach, Logikentwurf mit XBOOLE. Algorithmen und Programme. Berlin, Germany: Verlag Technik GmbH, 1991.

[12] C. Posthoff and B. Steinbach, Binäre Gleichungen - Algorithmen und Programme. Karl-Marx-Stadt (Chemnitz), DDR: Technische Universität Karl-Marx-Stadt, 1979.

Received: 2019-05-21
Accepted: 2020-05-07
Published Online: 2020-07-03

© 2020 Y. Can et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
