Unique builders for classes of matrices

Basic matrices are defined which provide unique building blocks for the class of normal matrices, a class which includes the unitary and Hermitian matrices. Unique builders for quantum logic gates are hence derived, since a quantum logic gate is represented by, or is said to be, a unitary matrix. An efficient algorithm is derived for expressing an idempotent as a unique sum of rank $1$ idempotents with increasing initial zeros. This is used to derive a unique form for mixed matrices. A number of further applications are given: for example, (i) $U$ is a symmetric unitary matrix if and only if it has the form $I-2E$ for a symmetric idempotent $E$; (ii) a formula for the pseudo inverse in terms of basic matrices is derived. Examples of various uses are readily available.


Introduction
Basic matrices are defined and it is shown that any normal matrix is the product of commuting basic matrices and that the product is unique apart from the order. A matrix $B$ is normal when $BB^* = B^*B$, where $B^*$ denotes the conjugate transpose of $B$. The class of normal matrices includes the classes of unitary matrices ($UU^* = I$) and Hermitian, also called self-adjoint, matrices ($H^* = H$). These occur in many applications: for example, quantum logic gates are represented by unitary matrices, and their properties and applications depend ultimately on the structure of unitary matrices.
Each basic matrix is itself a product of minimal basic matrices with the same eigenvalue. A basic matrix is expressed in terms of a symmetric idempotent matrix; an idempotent $E$ occurring in the expression of $B$ as a product of basic matrices has the property that $B$ acts on $E$ by $BE = \alpha E$, and the eigenvalue $\alpha$ occurs with multiplicity equal to the rank of $E$.
An efficient algorithm is given for expressing a rank $r$ idempotent matrix as a sum of $r$ rank $1$ (pure) orthogonal idempotents with increasing initial zeros, and such an expression is unique. From this a unique expression for a mixed matrix as a sum of rank $1$ idempotents of this type is derived.
A quantum logic gate is represented by a unitary matrix, and these gates are the basis for quantum information theory; see for example [3]. Indeed a quantum logic gate is often stated to be a unitary matrix itself. A quantum logic gate is thus a product, unique apart from order, of basic logic gates, and these basic logic gates are building blocks for quantum logic gates in general.
Examples are readily available and applications are given throughout. It is shown that a unitary matrix $U$ is symmetric if and only if it has the form $U = I - 2E$ for a symmetric idempotent $E$. An easy formula for the roots and powers of a matrix follows directly from its expression as a product of basic matrices; powers and roots are given explicitly in terms of basic matrices. A formula for the pseudo inverse is immediate.
Expressions for well-known quantum gates (such as Pauli, Hadamard gates) as products of unique basic matrices are explicitly derived in section 4.
Comparisons may be made with the famous 1D factorization theorem of Belevitch and Vaidyanathan, which derives building blocks for 1D paraunitary matrices, [5] pp. 302-322. The basic matrices derived here for building normal matrices are influenced by methods in [2] for constructing generators/builders for multidimensional paraunitary matrices, and in particular for constructing paraunitary non-separable (entangled) matrices.
A summary of the main results is given in Section 1.1. Further notation from Section 1.2 may be consulted as required.

Summary
Let $B$ be a normal matrix. Then $B$ is a product $B = \prod_{j=1}^{k}(I - E_j + \alpha_j E_j)$ where the $\alpha_j$ are distinct and $S = \{E_1, E_2, \ldots, E_k\}$ is an orthogonal symmetric set of idempotents. Each $(I - E_j + \alpha_j E_j)$ is termed a basic matrix and the product is unique apart from the order of these commuting basic matrices.
$B$ acts on the idempotents by $BE_j = \alpha_j E_j$ and so $\alpha_j$ is an eigenvalue of $B$ occurring with multiplicity equal to $\operatorname{rank} E_j$. When $S$ is not a complete set then $E = I - \sum_{j=1}^{k} E_j$ completes the set $S$, and $1$ occurs as an eigenvalue of $B$ with multiplicity equal to $n - \operatorname{rank}(\sum_{j=1}^{k} E_j) = n - \sum_{j=1}^{k} \operatorname{rank} E_j$, where $n$ is the size of the matrices.
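These claims are easy to check numerically. The following numpy sketch (the vectors and eigenvalues chosen are ours, purely for illustration) builds such a $B$ from an orthogonal symmetric set of idempotents and verifies normality, the action $BE_j = \alpha_j E_j$, and the multiplicity of the eigenvalue $1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# QR of a random complex matrix gives orthonormal columns u_j.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(M)
E = [Q[:, j:j+1] @ Q[:, j:j+1].conj().T for j in range(3)]   # 3 of 4 columns
alphas = [2.0, -1.0, 1j]                                     # distinct alpha_j

I = np.eye(n)
B = np.eye(n, dtype=complex)
for a, Ej in zip(alphas, E):
    B = B @ (I - Ej + a * Ej)                 # commuting basic matrix factors

assert np.allclose(B @ B.conj().T, B.conj().T @ B)   # B is normal
for a, Ej in zip(alphas, E):
    assert np.allclose(B @ Ej, a * Ej)               # B E_j = alpha_j E_j
# Eigenvalue 1 remains with multiplicity n - sum(rank E_j) = 1.
assert np.sum(np.isclose(np.linalg.eigvals(B), 1.0)) == 1
```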
A basic matrix $(I - E + \alpha E)$ is the product of minimal basic matrices $\prod_{j=1}^{k}(I - E_j + \alpha E_j)$ where the $E_j$ are mutually orthogonal, have rank $1$, and each minimal basic matrix has the same eigenvalue $\alpha$ as the original.
The idempotent matrix $E$ is of the form $E = \sum_{j=1}^{t} u_j u_j^*$ for mutually orthogonal unit column vectors $u_j$. Given an idempotent matrix $E$ of rank $r$, an efficient algorithm is given for expressing $E$ as a sum of such rank $1$ idempotents with increasing initial zeros, so that the expression obtained is unique. Mixed density matrices have a particularly useful form when viewed as products of basic matrices. A mixed density matrix is defined as a convex sum of pure density matrices, though not in a unique way. By writing the mixed matrix as a product of basic matrices, uniqueness is obtained and this gives a unique perspective on mixed matrices. Further, each basic matrix is a unique product of ordered rank $1$ basic matrices, giving a unique expression for a mixed density matrix. See Section 3.1.
Unitary matrices are normal matrices with eigenvalues of the form $e^{i\theta}$. Quantum logic gates are represented by unitary matrices; indeed a quantum logic gate is often defined to be a unitary matrix. The expression of a unitary matrix as a unique product, except for order, of basic matrices then expresses a quantum logic gate as a unique product of basic quantum logic gates. Each basic quantum logic gate is a product of minimal quantum logic gates with the same eigenvalue.
A number of easy consequences are derived. A formula for the pseudo inverse of a matrix follows directly from its expression as a product of basic matrices. It is shown that $U$ is a symmetric unitary matrix if and only if $U = I - 2E$ for a symmetric idempotent $E$; thus building symmetric unitary matrices from idempotent matrices is straightforward. Many of the most commonly used quantum logic gates are symmetric. In Section 3.2 a useful direct formula for writing down all the roots of a normal matrix is derived, giving in particular a useful direct formula for the roots of a unitary matrix/logic gate. Section 4 finds expressions for common logic gates as products of basic logic gates. Section 5 discusses techniques for building normal matrices, including unitary matrices, from basic matrices.
Interesting examples are easily constructed; those displayed are of small size but the constructions can be applied efficiently to matrices of large size.

Additional notation
Necessary background on algebra may be found in many algebra and linear algebra books. Background on quantum information theory may be found in [3].
$R$ denotes a ring with identity $I_R$; the suffix $R$ may be omitted when a particular ring $R$ is understood. A mapping $* : R \to R$, $r \mapsto r^*$ $(r \in R)$, is said to be an involution on $R$ if and only if (i) $r^{**} = (r^*)^* = r$ for all $r \in R$, (ii) $(a+b)^* = a^* + b^*$ for all $a, b \in R$, and (iii) $(ab)^* = b^*a^*$ for all $a, b \in R$. Let $R$ be a ring with involution $*$. An element $a \in R$ is said to be symmetric, with respect to $*$, if $a^* = a$. An idempotent in $R$ is an element $E$ such that $E^2 = E$. Idempotents $E, F$ are said to be orthogonal in $R$ if $EF = FE = 0_R$. The set $\{E_1, E_2, \ldots, E_k\}$ is said to be a complete set of orthogonal idempotents in $R$ if each element is an idempotent, the $E_i$ are mutually orthogonal, and $E_1 + E_2 + \ldots + E_k = I_R$. The set is further said to be symmetric if each $E_i$ is symmetric (with respect to $*$).
Here we work over the field of complex numbers $\mathbb{C}$, although many of the results hold over other systems; these are not included. For $a \in \mathbb{C}$, $a^*$ denotes the complex conjugate of $a$, and then $A^*$ denotes the conjugate transpose of $A$ for $A \in \mathbb{C}^{n \times m}$ or $A \in \mathbb{C}^n$. Now $I_n$ denotes the identity $n \times n$ matrix; the suffix $n$ will be omitted when the size is clear. An $n \times n$ matrix $B$ is said to be a normal matrix if and only if $BB^* = B^*B$. An $n \times n$ matrix $U$ is unitary if and only if $UU^* = I_n$. An $n \times n$ matrix $H$ is Hermitian (or self-adjoint) if and only if $H^* = H$. Thus unitary and Hermitian matrices are normal matrices. Unitary matrices are by definition invertible, but a Hermitian matrix need not be invertible.
Column vectors $u, v$ are orthogonal if $u^*v = 0$; $n \times n$ matrices $A, B$ are orthogonal if $AB^* = 0_{n \times n} = BA^*$; in all cases here orthogonality will refer to symmetric matrices. For a column $n \times 1$ vector $v \neq 0$, $vv^*$ is a symmetric $n \times n$ matrix and is necessarily of rank $1$. When $v$ is a unit vector ($v^*v = 1$) then $vv^*$ is an idempotent matrix, as then $vv^*(vv^*)^* = vv^*vv^* = vv^*$. Suppose $v, w$ are orthogonal vectors. Then the matrices $vv^*$ and $ww^*$ are orthogonal matrices, as $vv^*ww^* = 0_{n \times n}$ since $v^*w = 0$.

This can be restated as follows: the $v_i$ consist of the columns of $P$, and the $\alpha_i$ are the diagonal entries of $D$ in Proposition 2.1 and are eigenvalues of $A$. Now $\{v_1, v_2, \ldots, v_n\}$ is an orthonormal basis for $\mathbb{C}^n$. Define $E_i = v_iv_i^*$. Then $\{E_1, E_2, \ldots, E_n\}$ is an orthogonal symmetric set of idempotents. The set is also a complete set of idempotents, for if not then $E = I - \sum_{i=1}^{n} E_i \neq 0$ and $E$ is orthogonal to each of the $E_i$; this would give a linearly independent set of $n+1$ vectors in the $n$-dimensional space $\mathbb{C}^n$.

Now $\operatorname{tr} A$ denotes the trace of the matrix $A$, which is the sum of its diagonal elements. One nice property of an idempotent matrix is that its rank is the same as its trace; see for example [1].

Lemma 2.1 Let $E, F$ be orthogonal symmetric idempotent $n \times n$ matrices. Then
1. $E + F$ is a symmetric idempotent;
2. $I - E - F$ is a symmetric idempotent orthogonal to both $E$ and $F$;
3. $\operatorname{rank}(E + F) = \operatorname{rank} E + \operatorname{rank} F$;
4. $\operatorname{rank}(I - E - F) = n - \operatorname{rank} E - \operatorname{rank} F$.

Basic Matrices
Proof: The proofs of the first two items are straightforward. Proof of 3: It is known that $\operatorname{rank} E = \operatorname{tr} E$ for an idempotent matrix $E$; see for example [1]. Now $E + F$ is an idempotent and so $\operatorname{rank}(E+F) = \operatorname{tr}(E+F) = \operatorname{tr} E + \operatorname{tr} F = \operatorname{rank} E + \operatorname{rank} F$. The proof of 4 is similar.
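These facts are easy to illustrate numerically with numpy (all names here are ours): projections $v_iv_i^*$ built from an orthonormal basis form a complete orthogonal symmetric set of idempotents, each has rank equal to its trace, and ranks add for orthogonal idempotents.

```python
import numpy as np

n = 3
# QR of a random real matrix gives an orthonormal basis as columns.
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((n, n)))
E = [np.outer(Q[:, i], Q[:, i]) for i in range(n)]   # E_i = v_i v_i^*

for i in range(n):
    assert np.allclose(E[i] @ E[i], E[i])            # idempotent
    assert np.allclose(E[i], E[i].T)                 # symmetric
    assert np.isclose(np.trace(E[i]), 1.0)           # trace = rank = 1
    for j in range(i + 1, n):
        assert np.allclose(E[i] @ E[j], 0)           # mutually orthogonal
assert np.allclose(sum(E), np.eye(n))                # complete: they sum to I
# rank(E + F) = rank E + rank F for orthogonal idempotents:
assert np.linalg.matrix_rank(E[0] + E[1]) == 2
```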
Lemma 2.1 may be generalised as follows: let $\{E_1, E_2, \ldots, E_s\}$ be a set of orthogonal symmetric idempotent matrices. Then $I - \sum_{j=1}^{s} E_j$ is a symmetric idempotent orthogonal to each $E_j$ for $j = 1, 2, \ldots, s$.
The following follows directly from Lemma 2.3.

Proposition 2.3 Let A be an n×n matrix with
A basic matrix is one of the form $(I - E + \alpha E)$ where $E$ is a symmetric idempotent. This has eigenvalue $\alpha$ occurring with multiplicity equal to $\operatorname{rank} E$, and eigenvalue $1$ occurring with multiplicity equal to $\operatorname{rank}(I - E) = n - \operatorname{rank} E$.

Lemma 2.4 Let $E, F$ be orthogonal symmetric idempotent matrices. Then $(I - E + \alpha E)(I - F + \alpha F) = I - (E + F) + \alpha(E + F)$.
Lemma 2.4 enables basic matrices with orthogonal idempotents and the same eigenvalue to be collected together.
The following is a consequence of Lemma 2.4 and Proposition 2.3. It is shown below, in Proposition 2.12, that the expression for $A$ in Proposition 2.4 is unique apart from the order of the commuting basic matrices. Note that this uniqueness requires the $\alpha_i$ to be distinct; equal $\alpha_j$ have been gathered into one basic matrix by Lemma 2.4.

Proposition 2.4 Let $A$ be an $n \times n$ normal matrix. Then $A = \prod_{j=1}^{k}(I - E_j + \alpha_j E_j)$ where the $\alpha_j$ are distinct and $\{E_1, E_2, \ldots, E_k\}$ is an orthogonal symmetric set of idempotents.
Each idempotent $E$ is a sum of rank $1$ idempotents, but not in a unique manner. Algorithm 3.1 and Theorem 3.1 below give a method for expressing an idempotent $E$ as the sum of rank $1$ (pure) idempotents which have increasing initial zeros, thus giving a unique expression for $E$ as a sum of such pure idempotents.
One or more of the $\alpha_j$ in Proposition 2.4 may be $0$. Some of the $\alpha_j$ may also be $1$; gathering these into the identity gives an expression in which the $\alpha_i$ are distinct and $\neq 1$. Here then the eigenvalue $1$ is in 'disguise' and, if so, it occurs with multiplicity equal to $n - \sum_{j=1}^{k}\operatorname{rank} E_j$. In particular Proposition 2.4 can be applied to unitary matrices. The eigenvalues of a unitary matrix are of the form $e^{i\theta}$.

Proposition 2.5 Let U be an n× n unitary matrix. Then
$U = \prod_{j=1}^{k}(I - E_j + e^{i\theta_j}E_j)$, where the $e^{i\theta_j}$ are distinct and $\{E_1, E_2, \ldots, E_k\}$ is an orthogonal set of idempotents. Moreover the eigenvalues of $U$ are $\{e^{i\theta_i}, i = 1, 2, \ldots, k\}$ and $e^{i\theta_i}$ occurs with multiplicity $\operatorname{rank} E_i$. The eigenvalue $1$ occurs with multiplicity $n - \sum_{j=1}^{k}\operatorname{rank} E_j$, which may be $0$. Moreover a matrix of this form is unitary. A basic unitary matrix is one of the form $(I - E + e^{i\theta}E)$.
In Proposition 2.4, $A$ may be singular, in which case the eigenvalue $0$ appears with a certain multiplicity.
Here the $\alpha_i$ are distinct and $\neq 0$, and $\{E_1, E_2, \ldots, E_k, E\}$ is an orthogonal set of idempotents. The following is a corollary: notice that in Proposition 2.7 and in Proposition 2.8 the eigenvalue $-1$ occurs with multiplicity equal to $\operatorname{rank} E$ and the eigenvalue $1$ occurs with multiplicity equal to $\operatorname{rank} F = n - \operatorname{rank} E$.

Examples
Using this formula, roots of $U$ may be obtained directly; see Section 3.2.
This gives $U$ as a product of basic unitary matrices, which are defined in Section 2.2 below.

Uniqueness
Definition A basic matrix is one of the form B = (I − E + αE) for a symmetric idempotent matrix E.
The idempotent E of B is said to be the idempotent involved in B and α is the eigenvalue involved in B.
Definition Say a basic matrix $I - E + \alpha E$ is a simple basic unitary matrix if the idempotent $E$ has rank $1$.

Proposition 2.9 A symmetric idempotent $E$ has rank $1$ if and only if $E = uu^*$ for a unit column vector $u$.
Proof: If $E$ has the form $uu^*$ for a unit column vector $u$, then clearly $E$ is a symmetric idempotent of rank $1$. Conversely, if $E$ is a symmetric idempotent of rank $1$ then $E$ is of the form $uu^*$ for a unit column vector $u$; this is shown in, for example, [2] Proposition 4.5.
A minimal basic matrix is one of the form $(I - E + \alpha E)$ for $E = uu^*$, where $u$ is a unit column vector, so that $E$ has rank $1$, and $\alpha \neq 0$.
Note Let $I - E + \alpha E$ and $I - E + \beta E$ be basic matrices with the same idempotent. Then $(I - E + \alpha E)(I - E + \beta E) = I - E + \alpha\beta E$. Thus the product of a minimal basic matrix with another minimal basic matrix with the same idempotent is again a minimal basic matrix. The minimality is expressed in terms of the rank of the idempotent involved.
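This product rule is easy to confirm numerically; the following numpy sketch (the example vector and scalars are ours) checks it for a rank $1$ idempotent.

```python
import numpy as np

v = np.array([[1.0], [2.0], [2.0]]) / 3.0     # unit column vector
E = v @ v.T                                   # rank-1 symmetric idempotent
I = np.eye(3)
a, b = 5.0, -0.5
lhs = (I - E + a * E) @ (I - E + b * E)
assert np.allclose(lhs, I - E + a * b * E)    # product rule for a shared E
```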

Proposition 2.10 A symmetric idempotent $E$ has rank $k$ if and only if $E = u_1u_1^* + u_2u_2^* + \ldots + u_ku_k^*$ for mutually orthogonal unit column vectors $u_i$.
Proof: When $k = 1$ the result follows from Proposition 2.9. If the first column of $E$ is zero then the first row of $E$ is zero and the result follows by induction. Suppose then that the first column $w$ of $E$ is non-zero and define $u = w/|w|$, a unit column vector. Then $F = uu^*$ is an idempotent of rank $1$. Let $E_1 = E - F$. Then $E_1$ is a symmetric idempotent orthogonal to $F$ whose first column, and hence first row, consists of zeros. Let $A$ be the $(n-1) \times (n-1)$ matrix obtained from $E_1$ by omitting the first column and first row. Since $F, E_1$ are orthogonal it follows that $\operatorname{rank} F + \operatorname{rank} E_1 = \operatorname{rank}(F + E_1) = \operatorname{rank} E$, and so $E - F$ has rank $k - 1$. Thus $A$ is a rank $(k-1)$ symmetric idempotent matrix. The result then follows by induction.
Thus a basic unitary matrix is of the form $(I - E + \alpha E)$ where $E = u_1u_1^* + u_2u_2^* + \ldots + u_ku_k^*$ for mutually orthogonal unit column vectors $u_i$. When $k = 1$ the basic unitary matrix is what is termed a simple basic unitary matrix and its idempotent is of the form $uu^*$.

Proposition 2.11
Suppose the space $W$ is generated by the orthogonal unit column vectors $\{w_1, w_2, \ldots, w_k\}$ and also by the orthogonal unit column vectors $\{v_1, v_2, \ldots, v_k\}$. Then $w_1w_1^* + w_2w_2^* + \ldots + w_kw_k^* = v_1v_1^* + v_2v_2^* + \ldots + v_kv_k^*$.

Proof: Each $v_i$ is a linear combination $v_i = \sum_{j} \alpha_{ij}w_j$ of the $w_j$. Denote the matrix $(\alpha_{ij})$ by $U$. Since both sets of vectors are orthonormal, $U^*U = I_k$ and hence $UU^* = I_k$, from which the two sums of rank $1$ idempotents are equal.
By Lemma 2.4, in a product of basic matrices those using orthogonal idempotents with the same eigenvalue can be collected together into a single basic matrix.
This gives $B = \prod_{j}(I - F_j + \alpha_j F_j)$ where the $F_j$ are mutually orthogonal idempotents and the $\alpha_j$ are all different. We now show that, except for the order, such an expression is unique. Each $F_j$ in the product has the form $w_1w_1^* + w_2w_2^* + \ldots + w_kw_k^*$ where $k$ is the rank of $F_j$ and is also the multiplicity of the corresponding eigenvalue $\alpha_j$.
Proposition 2.12 Suppose $B = \prod_{j=1}^{k}(I - E_j + \alpha_j E_j) = \prod_{l=1}^{s}(I - F_l + \beta_l F_l)$ where the $E_j$ are orthogonal idempotents and the $\alpha_j$ are distinct, and where the $F_l$ are orthogonal idempotents and the $\beta_l$ are distinct. Then $s = k$ and, after reordering, $F_t = E_t$ and $\alpha_t = \beta_t$ for each $t$.
Proof: $B$ has eigenvalues $\alpha_j$ occurring with multiplicity $\operatorname{rank} E_j$ and eigenmatrix $E_j$, and, looking at it another way, has eigenvalues $\beta_l$ occurring with multiplicity $\operatorname{rank} F_l$. Thus $\alpha_1$ must equal $\beta_l$ for some $l$; we may assume $l = 1$ by reordering. Then $\operatorname{rank} E_1 = \operatorname{rank} F_1$. Now $E_1 = v_1v_1^* + v_2v_2^* + \ldots + v_tv_t^*$ for orthogonal unit vectors $v_j$ (with $t = \operatorname{rank} E_1 = \operatorname{rank} F_1$) and $F_1 = w_1w_1^* + w_2w_2^* + \ldots + w_tw_t^*$ for orthogonal unit vectors $w_j$. By Proposition 2.11, $E_1 = F_1$. Similarly, reordering if necessary, $E_j = F_j$ and $\alpha_j = \beta_j$ for $j = 1, 2, \ldots, k$, and of necessity $s = k$.

Pure and mixed idempotents
In order to obtain a normal matrix as a unique product of basic matrices it is necessary to collect the basic matrices with the same eigenvalue; see Propositions 2.4 and 2.12. An idempotent is not a unique sum of idempotents of rank $1$. An idempotent of rank $1$ is often described as a pure idempotent, by analogy with the mixed matrix in quantum theory; see Section 3.1 below.
It is now shown that an idempotent matrix E of rank r may be written as the sum of special idempotents of rank 1 in a unique way and an efficient algorithm is given.
Let $E$ be an $n \times n$ idempotent matrix of rank $r$. Then $I - E$ is an idempotent matrix of rank $n - r$. Also $E$ has eigenvalues $\{1, 0\}$. Now $E \cdot E = E$ and so the multiplicity of the eigenvalue $1$ is $\geq r$ since $E$ has rank $r$. Also $E(I - E) = 0 = 0 \cdot (I - E)$ and so the multiplicity of the eigenvalue $0$ is $\geq n - r$ as $I - E$ has rank $n - r$. Hence the multiplicity of the eigenvalue $1$ of $E$ is exactly $r$ and the multiplicity of the eigenvalue $0$ of $E$ is exactly $n - r$.
The following gives $E$ as a sum of rank $1$ idempotents from the column space of $E$: if $\{v_1, v_2, \ldots, v_r\}$ is an orthonormal basis for the column space of $E$ then $E = v_1v_1^* + v_2v_2^* + \ldots + v_rv_r^*$. This expression for $E$ is not unique, as any orthonormal basis for the column space of $E$ may be used. An efficient algorithm for finding $E$ as a sum of idempotents of rank $1$ is then formulated as follows. To apply the algorithm it is not necessary to know the rank of the idempotent beforehand; this comes out at the end. The algorithm is efficient and at each step just involves forming an idempotent from the first non-zero column of a matrix and subtracting this from the matrix.
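A sketch of this peeling step in numpy (the function name and tolerance are ours, not from the paper): repeatedly form the rank $1$ idempotent built from the first non-zero column and subtract it.

```python
import numpy as np

def peel_rank_one(E, tol=1e-10):
    """Split a symmetric idempotent E into rank-1 idempotents u u^* with
    strictly increasing initial zeros; their number recovers rank(E)."""
    E = E.astype(complex).copy()
    parts = []
    while not np.allclose(E, 0, atol=tol):
        # index of the first column that is not (numerically) zero
        j = next(k for k in range(E.shape[1]) if np.linalg.norm(E[:, k]) > tol)
        u = E[:, j:j+1] / np.linalg.norm(E[:, j])
        F = u @ u.conj().T            # rank-1 idempotent; E F = F E = F
        parts.append(F)
        E = E - F                     # remains a symmetric idempotent
    return parts

# Example: a rank 2 idempotent built from two orthonormal vectors.
v1 = np.array([[1.0], [1.0], [0.0]]) / np.sqrt(2)
v2 = np.array([[0.0], [0.0], [1.0]])
E = v1 @ v1.T + v2 @ v2.T
parts = peel_rank_one(E)
assert len(parts) == 2 and np.allclose(sum(parts), E)
```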
For example, let $E$ be a rank $3$ idempotent whose first column is non-zero, so start there. Applying the algorithm gives in turn idempotents $E_1, E_2, E_3$ of rank $1$ with $E = E_1 + E_2 + E_3$, where $E_1$ is formed using the first (non-zero) column of $E$; notice that the number of initial zeros increases from $E_1$ to $E_2$ to $E_3$.
For a mixed matrix

Theorem 3.1 Let $E$ be an idempotent of rank $r$. Then $E$ is uniquely a sum $E = E_1 + E_2 + \ldots + E_r$ of rank $1$ idempotents with strictly increasing initial zeros.
Proof: Algorithm 3.1 shows that such an expression for $E$ exists. The process first constructs an idempotent $E_1$ such that $E - E_1$ is an idempotent and also $\operatorname{st}(E - E_1) < \operatorname{st}(E)$; then work with $E - E_1$ and proceed until the zero matrix is obtained. Suppose $E = E_1 + E_2 + \ldots + E_r = F_1 + F_2 + \ldots + F_s$ where the $E_i$ and $F_j$ are rank $1$ idempotents of this type. Then $r = s$, as this is the rank of $E$. Now look at the first non-zero column of $E_1$, which is the first non-zero column of $E$. From this get $E_1 = F_1$, and then by induction $E_i = F_i$ for $i = 1, 2, \ldots, r$.
These results are applied below in Section 3.1 to derive a unique expression for a mixed matrix as the sum of rank $1$ matrices of the form given in Theorem 3.1.

Mixed matrix
Suppose $\{u_1, u_2, \ldots, u_k\}$ are mutually orthogonal unit (column) vectors in $\mathbb{C}^n$. Complete $\{u_1, u_2, \ldots, u_k\}$ to an orthogonal unit basis $\{u_1, u_2, \ldots, u_n\}$ for $\mathbb{C}^n$. Let $E_i = u_iu_i^*$ and $E = I - \sum_{j=1}^{k}E_j$; then $\{E_1, E_2, \ldots, E_k, E\}$ is a complete orthogonal set of idempotents. Basic matrices with the same $p_i$ may be collected together.
It is easy to check that $\sum_{p_i \neq 0} p_i^{-1}u_iu_i^*$ is the pseudo inverse of $A = \sum_i p_iu_iu_i^*$, which is useful for certain calculations.
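Assuming the sum form $A = \sum_i p_iu_iu_i^*$ with orthonormal $u_i$, the pseudo inverse claim can be checked against numpy's built-in Moore-Penrose routine (all names ours):

```python
import numpy as np

Q, _ = np.linalg.qr(np.random.default_rng(2).standard_normal((4, 4)))
u = [Q[:, i:i+1] for i in range(2)]                 # two orthonormal columns
p = [0.7, 0.3]
A = sum(pi * ui @ ui.T for pi, ui in zip(p, u))     # rank 2, singular
A_plus = sum((1 / pi) * ui @ ui.T for pi, ui in zip(p, u))
assert np.allclose(A_plus, np.linalg.pinv(A))       # Moore-Penrose agrees
```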
A particular case of this is where $A$ is a mixed density matrix $A = \sum_{i=1}^{k}p_iu_iu_i^*$ where $\{u_1, u_2, \ldots, u_k\}$ are orthogonal unit vectors, $\sum p_i = 1$ and $0 < p_i \leq 1$. It is known that this expression for $A$ is not unique. Collecting equal $p_i$ together gives $A = \sum_j \tilde{p}_jF_j$ where the $F_j$ are orthogonal idempotents and the $\tilde{p}_j$ are distinct; this is now a unique expression for $A$. Each $F_j$ is a sum $\sum_{i=1}^{s_j}u_{ij}u_{ij}^*$ of orthogonal rank $1$ idempotents and $F_j$ is of rank $s_j$.
Note that $AF_j = \tilde{p}_jF_j$ and $\tilde{p}_j$ occurs with multiplicity equal to $\operatorname{rank} F_j$. Using Theorem 3.1 the following unique expression is obtained.

Roots
A nice formula for roots of normal matrices may now be obtained directly from their expression as a product of basic matrices. Roots of unitary matrices/quantum gates will then follow.
Powers of normal matrices are easy to obtain:

Proposition 3.2 Let the normal matrix $B$ be expressed as a product of basic matrices by $B = \prod_{j=1}^{k}(I - E_j + \alpha_jE_j)$. Then $B^m = \prod_{j=1}^{k}(I - E_j + \alpha_j^mE_j)$ for any positive integer $m$.
Proof: The proof follows directly by induction, noting that the $E_j$ commute.
Let $U = \prod_{j=1}^{k}(I - E_j + \alpha_jE_j)$ be the expression of an $n$th root of $I_n$ as a product of basic matrices. The $E_j$ are mutually orthogonal idempotents and the $\alpha_j \neq 1$.
Then $U^n = \prod_{j=1}^{k}(I - E_j + \alpha_j^nE_j) = I_n$. Hence $\alpha_j^n = 1$ and so each $\alpha_j$ is an $n$th root of unity. Hence $\alpha_j = e^{i\theta_j}$ where $n\theta_j = 2\pi r_j$ for $r_j \in \mathbb{Z}$, $1 \leq r_j \leq n-1$. Thus $\theta_j = 2\pi r_j/n$ and so $U = \prod_{j=1}^{k}(I - E_j + e^{2\pi ir_j/n}E_j)$. This gives the normal matrices which are $n$th roots of $I_n$: the $E_j$ can be any set of orthogonal idempotents and the $r_j$ can be any integers between $1$ and $n-1$.
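The construction can be checked numerically; the following numpy sketch (the choices of $E_j$ and $r_j$ are ours) builds a normal $4$th root of $I_4$:

```python
import numpy as np

n = 4
Q, _ = np.linalg.qr(np.random.default_rng(3).standard_normal((n, n)))
E = [np.outer(Q[:, i], Q[:, i]) for i in range(2)]   # orthogonal idempotents
r = [1, 3]                                           # 1 <= r_j <= n - 1
I = np.eye(n, dtype=complex)
U = np.eye(n, dtype=complex)
for rj, Ej in zip(r, E):
    U = U @ (I - Ej + np.exp(2j * np.pi * rj / n) * Ej)

assert np.allclose(np.linalg.matrix_power(U, n), I)  # U^n = I
assert np.allclose(U @ U.conj().T, I)                # U is unitary
```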

Proposition 3.3
The $n$th roots of $I_n$ which are normal consist of matrices of the form $\prod_{j=1}^{k}(I - E_j + e^{2\pi ir_j/n}E_j)$ where $\{E_1, E_2, \ldots, E_k\}$ is an orthogonal set of idempotents in $\mathbb{C}^{n \times n}$ and the $r_j$ are integers satisfying $1 \leq r_j \leq n-1$.

Suppose $V = \{E_1, E_2, \ldots, E_k\}$ is a set of orthogonal symmetric idempotent matrices in $\mathbb{C}^{n \times n}$. If $V$ is not complete then there exists $S = \{E_{k+1}, E_{k+2}, \ldots, E_s\}$ such that $\{E_1, E_2, \ldots, E_s\}$ is a complete orthogonal set of idempotents. The set $S$ is not unique, but we shall refer to such an $S$ as a complementary set of $V$ and say that $S$ completes $V$. In particular $\{I - \sum_{j=1}^{k}E_j\}$ is the minimal set which completes $V$.
Suppose $U$ is an $n$th root of a basic matrix $W$ and $W = \prod_{j=1}^{t}(I - F_j + \beta_jF_j)$ is the expression for $W$ as a product of basic matrices where the $\beta_j$ are all different. Let $U = \prod_{j=1}^{s}(I - E_j + \alpha_jE_j)$ be the expression for $U$ as a product of basic matrices. Some of the $(I - E_j + \alpha_jE_j)$ may be $n$th roots of $I_n$; separate these out so that $U = \prod_{j=1}^{k}(I - E_j + \alpha_jE_j)P$ where $P$ is an $n$th root of $I_n$ and commutes with each $E_j$; reordering the $E_j$ may be necessary. Now $P$ is a product of basic matrices involving only idempotents in a complementary set of $\{E_1, E_2, \ldots, E_s\}$.
. Now compare this with W . By uniqueness and possible reordering get that E j = F j and α n j = β j . Thus if α j = r j e iθj and β j = s j e iγj then r n j = s j and nθ j = γ j + 2πr j . Now n j = 2πr j + s j for 0 < s j < 2π and so e inθj = e isj . However a normal matrix has many n th roots and we now proceed to directly find all these.
$E_1 + E_2 + \ldots + E_k$ may not equal $I$, and in this case define $F = I - \sum_{j=1}^{k}E_j$. Then $\{E_1, E_2, \ldots, E_k, F\}$ is a complete orthogonal set of idempotents. Now $I = I - F + e^{0}F$ and we attach this to $U$: $U = \prod_{j=1}^{k}(I - E_j + \alpha_jE_j)(I - F + e^{0}F)$. Suppose $n$th roots of $U$ are required. Apply the usual way of obtaining an $n$th root of unity in $\mathbb{C}$ and write $U = \prod_{j=1}^{k}(I - E_j + \alpha_je^{2\pi ir_j}E_j)(I - F + e^{2\pi ir}F)$ for integers $r_j, r$ from $0$ to $n-1$. Then $U^{1/n} = \prod_{j=1}^{k}(I - E_j + |\alpha_j|^{1/n}e^{i(\theta_j + 2\pi r_j)/n}E_j)(I - F + e^{2\pi ir/n}F)$, and this is for any $r_j, r$ between $0$ and $n-1$. Here $|\alpha_j|^{1/n}$ is the real $n$th root of the modulus of $\alpha_j$ and $\theta_j$ is its argument. This gives all the $n$th roots of $U$, expressed in a unique form as products of basic matrices. If the expression for $U$ has $k+1$ basic matrices, including the factor with $F \neq 0$, then there are $n^{k+1}$ different $n$th roots of $U$.

Examples:
Consider the examples of Section 2.1.

Example 1 uses the matrix
and $U = I - E_2 + e^{i\pi}E_2$. The square roots of $U$ are then $\{I - E_2 + e^{i\pi/2}E_2,\ I - E_2 + e^{i3\pi/2}E_2\}$, together with products of these with any square roots of $I$ obtained from a complementary set of $\{E_2\}$. The only complementary set of $\{E_2\}$ is $\{E_1\}$. A basic matrix involving $E_1$, besides the identity, which is a square root of $I$ is $I - E_1 + e^{i\pi}E_1$, so the square roots of $U$ are $(I - E_1 + e^{ik\pi}E_1)(I - E_2 + e^{i(\pi/2 + s\pi)}E_2)$ where $k, s$ can be $0$ or $1$.

Example 2 uses the orthogonal matrix
Here {E 1 , E 2 } is a complete orthogonal set of idempotents so there is no complementary set to consider.
The third roots of U are:

Logic gates
Expressions as products of basic matrices are found for common quantum logic gates. Consider the Hadamard gate $H$. Since $H$ is symmetric unitary, $H = I - 2E$; solving for $E$ gives $E = \frac{1}{2}(I - H)$. Then $H = (I - E + e^{i\pi}E)$ expresses $H$ as a (minimal) basic unitary matrix, where $E$ is as above; $H$ is itself a minimal basic matrix. Then $\{E, F\}$ with $F = I - E$ is a complete orthogonal set of idempotents. Now $HE = -E$ and $HF = F$, so $H$ 'reverses' $E$ and leaves $F$ alone.
Roots of $H$ are immediate: for example a square root of $H$ is $(I - E + e^{i\pi/2}E) = (I - E + iE)$, and all the square roots of $H$ may be obtained by the methods of Section 3.2. Similarly, for the Pauli $X$ gate, with $E = \frac{1}{2}(I - X)$ we have $X = (I - E + e^{i\pi}E)$.
In this form a square root of $X$ is easy to find; one such is $(I - E + e^{i\pi/2}E)$.
Roots of the $Y$ gate are easy to obtain: for example $(I - E + e^{i\pi/4}E)$ is a fourth root, and all fourth roots may be found directly by the methods of Section 3.2.
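These gate factorizations are easy to verify numerically; the following numpy sketch checks the Hadamard case, with $E = \frac{1}{2}(I - H)$:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
E = (I - H) / 2
assert np.allclose(E @ E, E)              # E is a symmetric idempotent
assert np.allclose(H, I - 2 * E)          # H = I - 2E, a minimal basic matrix
R = I - E + 1j * E                        # candidate square root of H
assert np.allclose(R @ R, H)
```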
The Pauli gates are themselves (minimal) basic matrices.
• CNOT: With $E = \frac{1}{2}(I - \mathrm{CNOT})$, $\mathrm{CNOT} = (I - E + e^{i\pi}E)$. Here $E$ has rank $1$ and $F = I - E$, corresponding to eigenvalue $1$, has rank $3$. All the square roots of CNOT may be obtained by the methods of Section 3.2.

• Bell non-symmetric: The Bell non-symmetric matrix is $B = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$. This has eigenvalues $\{i, -i\}$, and it is easy to show that $B = (I - E_1 + iE_1)(I - E_2 - iE_2)$ for the rank $1$ orthogonal idempotents $E_1, E_2$ projecting onto the eigenspaces of $i$ and $-i$.

Build matrices
Normal matrices are built from basic matrices of the form $(I - E + \alpha E)$; unitary matrices or quantum logic gates are built from basic matrices of the form $(I - E + e^{i\theta}E)$. A particular property may be required; for example a quantum logic gate which is an $n$th root, and/or a quantum logic gate with particular eigenvalues and multiplicities, may be required. Requirements may be met in a constructive manner. It is shown in Proposition 2.8 that a symmetric (unitary matrix)/(quantum logic gate) is of the form $(I - 2E)$ for a symmetric idempotent $E$; this makes building symmetric unitary matrices/logic gates particularly easy. In the example given, $\operatorname{rank} E = 2$ and thus $\operatorname{rank}(I - E) = 2$ also. A square root of $U$ is $(I - E + e^{i\pi/2}E)$, but there are three other square roots which are easily worked out using the methods developed in Section 3.2. $U$ is not separable and the roots of $U$ are not separable.
A unitary matrix $U$ which is also a root of $I$ may be built by the methods of Section 3.2; in this case $U^n = I$ and $U^* = U^{n-1}$. Suppose for example quantum logic gates (unitary matrices) in $\mathbb{C}^{4 \times 4}$ are required which are $4$th roots of $I_4 = I$. Clearly $U = (I - E + e^{2\pi ir/4}E)$, where $E$ is any symmetric idempotent and $r \in \{1, 2, 3\}$, is a basic unitary quantum gate which is a $4$th root of $I$. Now $v_1 = \frac{1}{2}(1, 1, 1, -1)^T$, $v_2 = \frac{1}{2}(1, 1, -1, 1)^T$, $v_3 = \frac{1}{2}(1, -1, 1, 1)^T$, $v_4 = \frac{1}{2}(-1, 1, 1, 1)^T$ is an orthonormal basis for $\mathbb{C}^4$, and these give a complete orthogonal set of idempotents $E_i = v_iv_i^*$. From these, gates which are $4$th roots of $I$ may be formed: $\{(I - E_1 + e^{2\pi ir_1/4}E_1)(I - E_2 + e^{2\pi ir_2/4}E_2)(I - E_3 + e^{2\pi ir_3/4}E_3)(I - E_4 + e^{2\pi ir_4/4}E_4)\}$ where $r_j \in \{0, 1, 2, 3\}$. (If all $r_j = 0$ then the identity matrix is obtained.)

Normal, including unitary and Hermitian, matrices may naturally be built from a set of orthogonal unit vectors. Let $\{u_1, u_2, \ldots, u_k\}$ be a set of orthogonal unit column vectors in $\mathbb{C}^n$ and set $E_i = u_iu_i^*$. Build $B = (I - E_1 + \alpha_1E_1)(I - E_2 + \alpha_2E_2)\ldots(I - E_k + \alpha_kE_k)$, which is then normal. The basic matrices with the same scalars may be amalgamated by Lemma 2.4 to get a unique expression $B = (I - F_1 + \beta_1F_1)(I - F_2 + \beta_2F_2)\ldots(I - F_s + \beta_sF_s)$ where the $\beta_i$ are distinct. Each $F_j$ is a sum $\sum u_{ij}u_{ij}^*$ over a subset of $\{u_1u_1^*, u_2u_2^*, \ldots, u_ku_k^*\}$. Note that $\{u_1, u_2, \ldots, u_k\}$ need not be a full basis for $\mathbb{C}^n$; in this case $\{E_1, E_2, \ldots, E_k\}$ is not a complete set of orthogonal idempotents. By letting $E = I - \sum_{i=1}^{k}E_i$, a complete set $\{E_1, \ldots, E_k, E\}$ of orthogonal idempotents is obtained. To build a unitary matrix take $\alpha_j = e^{i\theta_j}$, and to build a Hermitian matrix take the $\alpha_j$ to be real. If no $\alpha_i = 0$ then no $\beta_j = 0$ and $B$ is invertible with inverse $B^{-1} = (I - F_1 + \beta_1^{-1}F_1)(I - F_2 + \beta_2^{-1}F_2)\ldots(I - F_s + \beta_s^{-1}F_s)$.
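The build recipe and the inverse formula can be checked in numpy (idempotents generated via QR; the scalars are our choices):

```python
import numpy as np

Q, _ = np.linalg.qr(np.random.default_rng(4).standard_normal((3, 3)))
E = [np.outer(Q[:, i], Q[:, i]) for i in range(3)]
betas = [2.0, -1.0, 0.5]                  # distinct, all non-zero
I = np.eye(3)

B = np.eye(3)
B_inv = np.eye(3)
for b, Ej in zip(betas, E):
    B = B @ (I - Ej + b * Ej)
    B_inv = B_inv @ (I - Ej + (1 / b) * Ej)

assert np.allclose(B @ B_inv, I)          # inverse from inverted scalars
assert np.allclose(B, B.T)                # real scalars give a Hermitian B
```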
If some $\beta_j = 0$, say $\beta_s = 0$, then $B = (I - F_1 + \beta_1F_1)(I - F_2 + \beta_2F_2)\ldots(I - F_{s-1} + \beta_{s-1}F_{s-1})(I - F_s)$. In this case the pseudo inverse of $B$ is $(I - F_1 + \beta_1^{-1}F_1)(I - F_2 + \beta_2^{-1}F_2)\ldots(I - F_{s-1} + \beta_{s-1}^{-1}F_{s-1})(I - F_s)$. Sets of orthogonal unit $n \times 1$ column vectors may be obtained from $n$th roots of unity, from which unitary matrices and logic gates for various requirements may be built; details are omitted.
Comparisons may be made with the famous 1D unique building blocks for paraunitary matrices due to Belevitch and Vaidyanathan; see [5] pages 302-322.